Eye on performance: Improve your development processes
Compilation speed, exceptions, and heap size get the regulars at the Big Moose Saloon talking
This past month we spent a lot of time down at the JavaRanch's Big Moose Saloon to see what kind of performance questions the JavaRanch greenhorns are asking. Most are about J2SE and development procedures -- questions about the Java language, core classes, and how to improve their development processes.

Have you found your compilation phase to be slow? OK, so the discussions at the JavaRanch about Jikes weren't quite as direct as our homemade advert here, but several readers definitely suggested that the Jikes Java compiler was designed for speedy compilation. That's useful to know, especially for projects with many files to compile. Beware, though: while Jikes can help speed up your development process, you are probably better off doing your final compilation with the compiler that comes with the JVM you will be using in production. Compilers and JVMs can differ enough across versions that mixing them can cause problems.

Exceptions are expensive
We can say that you don't need to abandon the good practice of using exceptions for exceptional conditions -- but it helps to understand where their cost comes from.
Building those stack traces requires taking a snapshot of the runtime stack, and that's the expensive part. The runtime stack is not designed for efficient exception creation; it's designed to let the runtime run as fast as possible. Push and pop, push and pop. Get the job done, with no unnecessary delays. But when an exception is created, the runtime has to pause and capture the whole of that stack in a snapshot.
So creating exceptions is the expensive bit. Technically, the stack trace snapshot happens in the native method Throwable.fillInStackTrace(), which is called from the Throwable constructor.
Technically, you can even throw exceptions freely without too much cost. It's not the throw operation that is expensive; it's the creation of the exception, with its stack trace snapshot, that costs.
Fortunately, good programming practice already teaches us that you should not be throwing exceptions willy-nilly. Exceptions are designed for exceptional conditions, and should be kept that way. But just in case you don't like following good programming practices, the Java language gives you an added incentive by making your program run faster if you do.
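To see where the cost lives, here is a minimal sketch. StatusException is a hypothetical class invented for this example: it overrides Throwable.fillInStackTrace() (the native method that takes the stack snapshot) to do nothing, so creating it skips the expensive part entirely.

```java
// Sketch: the expensive part of an exception is the stack trace snapshot
// taken by Throwable.fillInStackTrace(), not the throw itself.
// StatusException is a hypothetical class for illustration: it overrides
// fillInStackTrace() to skip the snapshot entirely.
public class ExceptionCost {
    static class StatusException extends RuntimeException {
        StatusException(String message) {
            super(message);
        }

        @Override
        public synchronized Throwable fillInStackTrace() {
            return this; // skip the expensive stack snapshot
        }
    }

    public static void main(String[] args) {
        // A normal exception captures the full runtime stack on creation:
        RuntimeException normal = new RuntimeException("with trace");
        System.out.println("normal frames: " + normal.getStackTrace().length);

        // The trace-free version captures nothing, so creating it is cheap:
        StatusException cheap = new StatusException("no trace");
        System.out.println("cheap frames:  " + cheap.getStackTrace().length); // prints 0
    }
}
```

The trade-off, of course, is that a StatusException carries no stack trace, so this trick only makes sense for exceptions used as status signals rather than for debugging.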
Maximum heap size
The JVM has a memory space that it manages. The part of that space where objects live (and die) is called the heap space. Objects are created in the heap space, and they are moved around it by the JVM garbage collector at various times, such as when defragmenting (or compacting) the heap.

Objects can die in the heap, too. A dead object is simply one that is no longer accessible by the application. The JVM garbage collector looks for these dead objects and reclaims the space they used, in order to make space available for new objects. When the garbage collector can no longer free up space by reclaiming dead objects, the heap is said to be full.

A full heap is a problem. When the heap is full and the application
tries to create more objects, the JVM can ask the underlying operating
system for more memory, so it can make the heap larger. If the JVM cannot
obtain more memory, then allocating a new object will throw an OutOfMemoryError.
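As a minimal sketch of that failure mode, the error surfaces as java.lang.OutOfMemoryError at the allocation site, where it can be caught like any other Throwable. To provoke it cheaply, this example assumes a HotSpot-style JVM, which caps array length just below Integer.MAX_VALUE, so the request fails without actually filling the heap:

```java
// Sketch: when the JVM cannot satisfy an allocation, the `new` expression
// throws java.lang.OutOfMemoryError. Here we provoke one cheaply by asking
// for an array that a HotSpot-style JVM will never grant (the VM caps
// array length just below Integer.MAX_VALUE), rather than by filling the heap.
public class OomDemo {
    public static void main(String[] args) {
        try {
            byte[] tooBig = new byte[Integer.MAX_VALUE];
            System.out.println("allocated " + tooBig.length + " bytes");
        } catch (OutOfMemoryError e) {
            // Reached when the allocation cannot be satisfied:
            System.out.println("allocation failed: " + e.getMessage());
        }
    }
}
```

In real applications, catching OutOfMemoryError is rarely a fix -- the interesting question is how to keep the heap from filling in the first place, which is what the maximum heap size parameter controls.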
So what can we do about it? Most JVMs have an optional parameter that specifies the largest size the heap is allowed to grow to. After this size is reached, the JVM is no longer allowed to request more memory from the operating system. In recent JVMs from Sun and IBM, that parameter is specified with the -Xmx option.

So what should the maximum heap size be to ensure optimal performance? You might think the answer is "as large as possible," so that you stave off out-of-memory errors and give your application as much memory as it can use. Well, it turns out that too large a heap can be a significant problem because of the way operating systems work.

Specifically, modern operating systems have real memory and virtual memory. Virtual memory creates the illusion of having more memory than you actually have by supplementing real memory with disk space in swap files, which act as a kind of overflow memory. The operating system can take pages that are not being actively used and put them on disk until they are needed again, freeing up real memory (temporarily) for other uses. This way, the available memory can appear larger than the real memory, allowing more or larger processes to run. The trade-off is that those pages on disk have to be moved back into real memory when they are needed, and that can be really slow. Disks are a lot slower than memory.

If you allow the heap to get bigger than the real memory of the system (the physical RAM installed on your machine), then your heap can start paging. That in itself might not be such a problem -- after all, only the infrequently used pages are shunted off to disk. However, when it comes to garbage collection, the whole of the heap tends to get scanned, causing all those seldom-used pages to be paged back into real memory, with other pages moved out to disk to make space for them. And this is a vicious cycle, because the pages that have just been moved to disk are themselves likely to be seldom-used heap pages that the garbage collector is about to scan as part of the same collection. The result is that you will spend more time moving pages in and out of memory than you will getting any useful work done.
Garbage collection is often an application bottleneck already. But if you make the heap so large that the operating system must page significantly in order for the JVM to perform a garbage collection, the result is a cascade of very slow paging activity, which will slow your application down to a crawl. So make sure that the maximum heap size is smaller than the available system RAM, taking into account other processes that may also need to be running at the same time, to prevent this paging disaster.
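To check what ceiling your JVM is actually running with, the standard Runtime class reports the current heap figures. A small sketch:

```java
// Sketch: Runtime reports the heap limits the JVM is running with.
// maxMemory() reflects the configured ceiling (e.g. -Xmx), totalMemory()
// is what the JVM has reserved from the OS so far, and freeMemory() is
// the unused part of that reservation.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024L * 1024L;
        System.out.println("max heap (MB):   " + rt.maxMemory() / mb);
        System.out.println("total heap (MB): " + rt.totalMemory() / mb);
        System.out.println("free heap (MB):  " + rt.freeMemory() / mb);
    }
}
```

Running it as, say, java -Xmx256m HeapInfo should report a maximum heap near 256 MB, which makes it easy to confirm that the ceiling you configured really does stay below the machine's physical RAM.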