It's surprising how often it comes up in Hollywood movies: a crucial plot device is that a character has lost their memory, or part of their memory. Off the top of my head I can think of several examples: Memento; Total Recall; Fight Club; Eternal Sunshine of the Spotless Mind. Pick your favourite long-running TV series, and I’m sure that you will be able to think of an episode that uses this device too.

Of course computers never lose their memory; or do they? We have come to depend on a limitless supply of memory for our applications to get their jobs done. Since the 1970s, computer operating systems have had "virtual memory systems": mechanisms that can "swap out" parts of program memory (RAM) to a hard disk, giving the impression that much more memory is available than the physical RAM actually installed.

Similarly, computer languages have evolved to use dynamic memory in addition to the fixed structures (arrays, integers, etc.) that were available in the languages we probably used at school. Some languages, like C and C++, give the programmer explicit control over allocating and deallocating memory, for example malloc and free in C. In other cases (like C# and Smalltalk) the process is largely automatic and involves a "garbage collector" that runs autonomously, recovering unloved pieces of memory. There are also examples where the distinction between disk and RAM is deliberately blurred (like screens in FORTH, or memory-mapped files in Win32).
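For anyone who hasn't met the explicit style, here is a minimal sketch of that manual allocate/use/free cycle in C (the Point struct is just an invented illustration):

#include <stdlib.h>

struct Point { int x, y; };

int main(void)
{
    /* explicitly ask the allocator for memory... */
    struct Point *p = malloc(sizeof(struct Point));
    if (p == NULL)
        return 1;            /* the allocator can refuse */

    p->x = 3;
    p->y = 4;

    /* ...and explicitly hand it back when we are done */
    free(p);
    return 0;
}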

So what started this train of thought? On the CodeProject site, they have a survey entitled "When programming, do you explicitly test for out-of-memory conditions?". The majority of respondents (58%) said that they do not explicitly test for out-of-memory conditions. In other words, 58% of their apps assume a limitless supply of memory. My vote isn’t in there, but if I had voted, I would also have been in that majority group. Plenty of my apps look like this:

struct Fred *fred;

fred = (struct Fred *) malloc(sizeof(struct Fred));
fred->type = 42;    /* dereferences fred without checking for NULL */
:
:

In other words, I didn’t test ‘fred’ to see if malloc returned 0 (i.e. NULL), even though that is a possible outcome. I remember once writing an app that called malloc until it returned 0, i.e. grabbing every last piece of memory that the O/S would give me. It may not surprise you to discover that the program did terminate, and, yes, there is a limit to how much memory you can have.
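If you fancy trying it yourself, here is a minimal sketch of that memory-grabber. One caveat I should flag: on operating systems that overcommit memory (Linux in its default configuration, for instance), malloc can keep succeeding and the process may be killed before it ever returns NULL, which is why the sketch touches the pages with memset to make each allocation real:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const size_t chunk = 1024 * 1024;   /* grab memory 1 MB at a time */
    size_t total = 0;

    for (;;) {
        void *p = malloc(chunk);
        if (p == NULL)
            break;                      /* the well has run dry */
        memset(p, 0, chunk);            /* touch the pages so they really count */
        total += chunk;
    }

    printf("malloc returned NULL after %zu MB\n", total / (1024 * 1024));
    return 0;
}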

I can’t answer for anyone else, but I justify my code like this. (1) There’s little point in testing for zero, as the very next line will find out whether fred is zero: when I try to follow a null pointer, my program will trap and stop (at least on any modern OS with memory protection). (2) If malloc has no memory left, it’s a fair bet that the entire system is about to fail in the next few seconds. (3) Yes, I could check for zero and do some kind of graceful shutdown of the app, but that is about my only option; without dynamic memory available, it is unlikely that my app could continue doing anything useful.
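For what it’s worth, the graceful-shutdown option in point (3) usually ends up looking something like the classic xmalloc wrapper (the name is just a common convention, not anything prescribed):

#include <stdio.h>
#include <stdlib.h>

/* A sketch of option (3): if malloc fails, report the failure and
   exit cleanly instead of trapping on a null dereference later. */
void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "out of memory (requested %zu bytes)\n", size);
        exit(EXIT_FAILURE);
    }
    return p;
}

Every caller then gets either memory or a clean exit, which is exactly the all-or-nothing bargain described above.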

Which way do you vote?