Thursday 13 March 2014

Performance - Running out of memory (heap)

I am not going to waste any words emphasizing how important memory management is in C++. Let's cut into the topic directly: what should you do when you are running out of memory? (I am not talking about the program failing to allocate one huge chunk, like 4G on a 2G machine. That you should avoid outright by choosing a more memory-efficient algorithm.) Here I am talking about what can be done when the program gradually loses control of its memory, and consumption only ever grows and never comes down. How to approach this issue?

The very first thing I would do is collect a memory trace when the allocation failure happens. Tools like Valgrind, Dr. Memory (from Google) and IBM Rational PurifyPlus can produce one. I strongly recommend Valgrind on Linux: free and powerful. On Windows, if you can afford it, Rational PurifyPlus is great; otherwise Dr. Memory is free and does the job. Once you have a memory trace, fix every leak reported as "definitely lost" and pay attention to "possibly lost". Fix the memory leaks first before moving on.
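
As a minimal sketch, a Valgrind (Memcheck) run on Linux looks something like this; the binary name here is just a placeholder:

    valgrind --leak-check=full ./your_app

The report at the end groups leaked blocks into the "definitely lost" / "possibly lost" categories mentioned above, each with the allocation call stack.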

If the code is of high quality and has no memory leaks at all, it becomes tough to spot the hot data structure and find a solution. Here are a few things you can check.
1. Spot the hot data structures in the trace and check the lifetime of the data. Ask yourself whether the data lives beyond its useful life-span, paying special attention to global variables, static variables and big chunks of memory. The point here is to create data/variables only when needed and destroy them as soon as possible (a scoping sketch follows this list).
2. Check for user-defined data structures in the hot memory areas. If there are any, check how many instances exist and how big each actually is. The point here is memory alignment, which matters a great deal in resource-critical applications. The compiler inserts padding between members to satisfy alignment, so a user-defined structure usually occupies more memory than the sum of its members' sizes; the sizeof() operator (evaluated at compile time) reports the padded size. Check the actual size in memory and re-arrange the data members so as little as possible is wasted (see the padding sketch after this list).
3. Check the frequency of calls to new/delete and malloc/free. The point here is memory fragmentation. Think of memory as a white sheet of paper at the beginning. After millions of allocations and de-allocations of small objects, it is as if black pepper has been scattered across the paper: the memory is polluted and no longer has a decent-sized blank area available when you need to write a long message. If this happens, think about an object pool or a self-implemented memory-management unit (a minimal pool sketch follows this list). A typical scenario is a node-based container with frequent add/remove operations. Self-implemented memory management units are quite common in the embedded world.
4. Check your compiler vendor's implementation of std::string and the STL containers. Some string implementations have a default minimum size, for instance at least 32 bytes, which is bad when the program uses a lot of small strings. Likewise check the STL container implementations and their default minimum sizes (the snippet after this list shows how to inspect them).
5. Use a node-based container if the program fails to allocate a large chunk of contiguous memory. For instance, choose std::list over std::vector or std::queue, or choose a self-implemented node-based container (linked list) over an array.
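
On point 1, a minimal sketch of keeping data lifetime tight; the function and buffer names are made up for illustration:

    #include <vector>

    void process_request()
    {
        // Confine the big scratch buffer to the smallest scope that needs it,
        // instead of letting it live for the whole function (or as a global).
        {
            std::vector<char> buffer(10 * 1024 * 1024);   // 10 MB scratch space
            // ... fill and use 'buffer' here ...
        }   // 'buffer' is destroyed here and its memory is returned

        // ... the rest of the function no longer holds on to the 10 MB ...
    }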
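
On point 2, a sketch of how member order changes the padded size on a typical 64-bit platform (exact numbers depend on your compiler and target):

    #include <cstdio>

    struct Padded {        // members in "natural" order
        char   flag;       // 1 byte + 7 bytes padding (double wants 8-byte alignment)
        double value;      // 8 bytes
        char   tag;        // 1 byte + 7 bytes tail padding
    };                     // sizeof(Padded) == 24 on most 64-bit compilers

    struct Packed {        // same members, re-arranged largest first
        double value;      // 8 bytes
        char   flag;       // 1 byte
        char   tag;        // 1 byte + 6 bytes tail padding
    };                     // sizeof(Packed) == 16 on most 64-bit compilers

    int main()
    {
        std::printf("%zu vs %zu\n", sizeof(Padded), sizeof(Packed));
        return 0;
    }

With millions of instances in a hot data structure, that one-third saving is exactly the kind of win this check is after.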
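
On point 3, a very small free-list object pool as a sketch: fixed capacity, not thread-safe, and all the names are hypothetical. One big allocation up front, then O(1) acquire/release, so millions of small new/delete calls stop peppering the heap:

    #include <cstddef>
    #include <new>

    template <typename T, std::size_t Capacity>
    class ObjectPool {
    public:
        ObjectPool() : head_(0) {
            for (std::size_t i = 0; i < Capacity; ++i)
                slots_[i].next = i + 1;            // chain all slots into a free list
        }

        T* acquire() {
            if (head_ == Capacity) return nullptr; // pool exhausted
            Slot& s = slots_[head_];
            head_ = s.next;
            return new (s.storage) T();            // construct in place
        }

        void release(T* p) {
            p->~T();                               // run the destructor manually
            Slot* s = reinterpret_cast<Slot*>(p);  // slot starts at the object's address
            s->next = head_;                       // push the slot back on the free list
            head_ = static_cast<std::size_t>(s - slots_);
        }

    private:
        union Slot {
            alignas(T) char storage[sizeof(T)];    // raw space for one T
            std::size_t     next;                  // index of the next free slot
        };
        Slot        slots_[Capacity];
        std::size_t head_;                         // first free slot; Capacity means none
    };

Usage is simply pool.acquire() / pool.release(p) in place of new/delete for the hot object type; the same idea can feed a custom allocator for node-based containers.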
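
On point 4, a quick way to see what your vendor's std::string really costs; the numbers printed vary per implementation:

    #include <cstdio>
    #include <string>

    int main()
    {
        std::string s;
        std::printf("sizeof(std::string): %zu\n", sizeof(std::string));
        std::printf("capacity when empty: %zu\n", s.capacity());
        return 0;
    }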

I see memory shortage happen mostly in embedded applications, where a self-implemented MMU is the solution on most occasions. Scientific computing, on the other hand, often fails to allocate a large chunk of contiguous memory; there, think about a different algorithm. Remember that better memory management sometimes means better performance, especially with better data alignment and object pools.

