Is it ever OK to *not* use free() on allocated memory?

Easy: just read the source of pretty much any half-serious malloc()/free() implementation. By this, I mean the actual memory manager that handles the work behind those calls. It might live in the runtime library, a virtual machine, or the operating system; of course, the code is not equally accessible in all cases.

Keeping memory from becoming fragmented, by joining adjacent holes into larger holes (coalescing), is very common. More sophisticated allocators use more advanced techniques on top of that, but the basic idea is shown in the sketch below.
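To make "joining adjacent holes" concrete, here is a minimal sketch of coalescing, assuming a toy allocator that keeps every block (used or free) in one doubly linked list ordered by address. The names Block, coalesce and toy_free are made up for illustration; real allocators such as dlmalloc or heap4.c store their bookkeeping differently.

    /* Toy block header: every block, used or free, sits in one
     * address-ordered doubly linked list. */
    #include <stddef.h>

    typedef struct Block {
        struct Block *prev;    /* neighbour at a lower address  */
        struct Block *next;    /* neighbour at a higher address */
        size_t        size;    /* payload size in bytes         */
        int           is_free;
    } Block;

    /* Merge blk with its immediate neighbours if they are also free,
     * turning two or three small holes into one larger hole. */
    static void coalesce(Block *blk)
    {
        /* Absorb the next block if it is free. */
        if (blk->next && blk->next->is_free) {
            blk->size += sizeof(Block) + blk->next->size;
            blk->next  = blk->next->next;
            if (blk->next)
                blk->next->prev = blk;
        }
        /* Let a free previous block absorb this one. */
        if (blk->prev && blk->prev->is_free) {
            blk->prev->size += sizeof(Block) + blk->size;
            blk->prev->next  = blk->next;
            if (blk->next)
                blk->next->prev = blk->prev;
        }
    }

    /* A toy free(): mark the block free, then try to merge. */
    void toy_free(Block *blk)
    {
        blk->is_free = 1;
        coalesce(blk);
    }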

So, let’s assume you make three allocations (which you will later free) and get blocks laid out in memory in this order:

+-+-+-+
|A|B|C|
+-+-+-+

The sizes of the individual allocations don’t matter. Then you free the first and last one, A and C:

+-+-+-+
| |B| |
+-+-+-+

When you finally free B, you (initially, at least in theory) end up with:

+-+-+-+
| | | |
+-+-+-+

which can be de-fragmented into just

+-+-+-+
|     |
+-+-+-+

i.e. a single larger free block, no fragments left.
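For reference, the walkthrough above corresponds to nothing more exotic than the following ordinary malloc()/free() sequence. Whether and when the allocator actually merges the three holes is up to the implementation, so this sketch only reproduces the allocation/free order; it cannot print the merging itself.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *a = malloc(64);   /* block A */
        char *b = malloc(64);   /* block B */
        char *c = malloc(64);   /* block C */
        if (!a || !b || !c)
            return 1;

        printf("A=%p B=%p C=%p\n", (void *)a, (void *)b, (void *)c);

        free(a);   /* leaves a hole before B               */
        free(c);   /* leaves a hole after B                */
        free(b);   /* allocator may now merge all three    */
        return 0;
    }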

References, as requested:

  • Try reading the code for dlmalloc. It’s a lot more advanced, being a full production-quality implementation.
  • Even in embedded applications, de-fragmenting implementations are available. See for instance these notes on the heap4.c code in FreeRTOS.
