Optimization: Your Worst Enemy

Living Well is the Best Revenge

(OK, this is really a sidebar, and a bit of an ego trip. You can skip to the next section if you don't want to read it. You've been warned.)

Back in the early days of C, the C storage allocator was one of the worst-performing storage allocators in existence. It was "first fit", meaning the allocator went traipsing down the free list looking for a block at least as big as the one requested; if it found one, it split the block and returned the residue to the free list. This had the advantages of being as slow as possible and fragmenting memory as badly as possible. In fact, it was worse than you can imagine. It actually walked the list of all blocks of storage, free and allocated, and had to skip over the allocated ones. So as you accumulated more and more blocks, its performance degraded, and as blocks got too small to be usable, they simply added to the overhead without adding any value.
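To make the cost concrete, here is a minimal sketch of an allocator in this style. It is a hypothetical reconstruction from the description above, not the historical Unix source: one list threads through every block, free and allocated alike, so every allocation walks past the allocated blocks, and every split leaves a smaller fragment on the list.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of a first-fit allocator of the kind described above.
   All names are hypothetical; per-request alignment subtleties and
   coalescing of adjacent free blocks are omitted for brevity. */

typedef struct header {
    struct header *next;   /* next block in address order        */
    size_t         size;   /* payload size in bytes              */
    int            free;   /* nonzero if this block is available */
} header;

#define ARENA_SIZE (64 * 1024)
static _Alignas(max_align_t) uint8_t arena[ARENA_SIZE];
static header *head = NULL;

void *ff_alloc(size_t want)
{
    want = (want + 7) & ~(size_t)7;     /* keep headers aligned */
    if (head == NULL) {                 /* lazy init: one big free block */
        head = (header *)arena;
        head->next = NULL;
        head->size = ARENA_SIZE - sizeof(header);
        head->free = 1;
    }
    for (header *b = head; b != NULL; b = b->next) {
        if (!b->free || b->size < want)
            continue;                   /* note: walks allocated blocks too */
        /* Split if the residue can still hold a header and some payload;
           otherwise the leftover bytes are pure overhead. */
        if (b->size >= want + sizeof(header) + 16) {
            header *rest = (header *)((uint8_t *)(b + 1) + want);
            rest->next = b->next;
            rest->size = b->size - want - sizeof(header);
            rest->free = 1;
            b->next = rest;
            b->size = want;
        }
        b->free = 0;
        return b + 1;                   /* payload follows the header */
    }
    return NULL;                        /* no block fits */
}

void ff_free(void *p)
{
    if (p != NULL)
        ((header *)p - 1)->free = 1;    /* no coalescing: fragments linger */
}
```

Every call to ff_alloc is linear in the total number of blocks ever created, and since freed fragments are never merged, the list only grows. That is exactly the degradation described above.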

I was at CMU on a one-year research contract. My first reaction on using the Unix environment was to turn to one of the people there and ask, "How can you live this way?" The software technology in 1990 was exactly what it had been a decade earlier, when I left CMU, except that in the modern case the compiler didn't work (it generated incorrect code for simple C constructs), the debugger didn't work, the backtrace (being entirely a list of hex addresses with no symbols) was useless, the linker didn't work, and there wasn't anything resembling a decent document production system. Other than these minor defects, I suppose it was an OK environment. Having used Microsoft C with CodeView, and even the earliest Visual C environment, I had certain high standards for what I expected out of my tooling, and Unix (at least in that era) fell short. By miles. I was, of course, outspoken on this subject.

One day we were discussing an algorithm, and it required doing some storage allocation. I was assured that this was unacceptable because storage allocation was very expensive. I said something like, "Well, of course, if you use the brain-dead Unix allocator, you're bound to have performance problems. A decent storage allocator makes this a non-issue." One of the people at the meeting immediately challenged me: "I'm sick of hearing you put down Unix. What do you know about storage allocators, anyway?" I said, "Hold that thought; I'll be right back." I went to my office, where I had a copy of the IDL book, brought it back, and held it open to the chapter labeled "Storage allocation". "See this?" "Yes." "What is the title of this chapter?" "Storage allocation." I closed the book and pointed to the cover. "Recognize this name?" "Yes, of course, that's you." "Fine. I wrote that chapter. It is about how to write a high-performance, minimum-fragmentation storage allocator. So you asked what I know about storage allocation. Well, I wrote the book on it."

I was never again challenged when I put down Unix.

Just as an aside, the allocators used in NT work very much like the algorithm I described in the IDL book, which is based on the "quickfit" work that Chuck Weinstock did for his PhD dissertation at CMU around 1974.
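For readers curious what "quickfit" means in practice, here is a rough sketch of the idea (hypothetical names throughout; this is not NT's heap code or Weinstock's exact algorithm): keep an exact-size free list for each of the common small sizes, so that most allocations and frees are a single list pop or push, and fall back to a general allocator only for large or unusual requests.

```c
#include <stddef.h>
#include <stdlib.h>

/* Sketch of the quickfit idea: exact-size LIFO free lists for small
   requests, with a general allocator as the fallback. A real
   implementation would record each block's size in a header rather
   than asking the caller to supply it on free. */

#define GRAIN       8                     /* size classes: 8, 16, ..., 256 */
#define NUM_CLASSES 32

typedef struct qnode { struct qnode *next; } qnode;
static qnode *quick_list[NUM_CLASSES];    /* one list per size class */

static size_t class_of(size_t n) { return (n + GRAIN - 1) / GRAIN - 1; }

void *q_alloc(size_t n)
{
    if (n == 0) n = 1;
    if (n <= GRAIN * NUM_CLASSES) {
        size_t c = class_of(n);
        if (quick_list[c] != NULL) {      /* hot path: pop an exact fit */
            qnode *b = quick_list[c];
            quick_list[c] = b->next;
            return b;
        }
        /* Miss: carve a fresh block rounded up to the class size, so it
           recycles through the same list later. malloc stands in for the
           underlying heap purely to keep the sketch self-contained. */
        return malloc((c + 1) * GRAIN);
    }
    return malloc(n);                     /* large request: general path */
}

void q_free(void *p, size_t n)
{
    if (p == NULL) return;
    if (n > 0 && n <= GRAIN * NUM_CLASSES) {
        size_t c = class_of(n);           /* hot path: push onto its list; */
        qnode *b = p;                     /* the block is retained for reuse */
        b->next = quick_list[c];
        quick_list[c] = b;
    } else {
        free(p);                          /* large blocks go back to the heap */
    }
}
```

Because small blocks are never split, external fragmentation of the small size classes is avoided, and the common case is a handful of instructions touching only one list; that combination is where both the speed and the low fragmentation come from.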
