We tend to take multi-tasking applications for granted these days, as the tools and APIs available to developers have become so friendly. In fact, even the term “multi-tasking” has been absorbed into mainstream business jargon, because it is so well understood that not just machines, but people too, need to be able to execute many tasks at the same time.

I was reflecting this week on a system I worked on years ago where the “multi-tasking” was actually provided by the application programmers themselves. The system had a 64-kilobyte space that you could swap a small code overlay into. This is not a typo: 64 kilobytes, not megabytes. If your app exceeded 64k, you had to split it into multiple overlays, which were swapped in and out of the same space one at a time.

The most difficult part was that you absolutely could not block or delay in your overlay at any time, otherwise the system would hang. The concurrency (or rather, the apparent concurrency) depended on each application running for a few hundred milliseconds and then quitting again, leaving the space free for another user task.

Now modern operating systems like Linux and Windows manage tasks for you by automatically pre-empting apps in order to share the CPU and other resources fairly. At the time (in the 1980s), operating systems were not that sophisticated, at least not on desktop computers. UNIX existed then, but had yet to become popular outside of universities. Back then only minicomputers (like the VAX) and mainframes (like the IBM System/370) had fully preemptive multitasking.

Through the 1990s I worked with UNIX operating systems that made multi-processing far more convenient: you could use the ‘fork()’ system call to make an identical copy of your process, and therefore create sub-processes, each in its own protected address space. Later on, Windows popularized ‘threading’, or lightweight processes, which were then adopted in Linux/UNIX and now run even on embedded systems like phones.

In some ways threading brings back a little of the “danger” of those early times. Threads all share a single address space, so it is possible for one thread to corrupt memory and spoil the environment for all of the co-operating tasks. The tradeoff is that threads are much lighter (in both CPU and memory usage) for the system to manage, so you can run many more of them on a single server.

Making threads communicate and rendezvous with each other safely still requires care and skill from the software engineer, but in many ways today’s development environments are remarkably comfortable compared with those of twenty years ago.