Linux Memory Usage

I recently became concerned about the performance of a particular application (cmatrix on the 1080p screen – it looks beautiful but it’s quite choppy), so I started investigating and turned up some surprising statistics.

Uptime: 12:53.  Chrome has been running for most of that time, and we all know web browsers are notorious for leaking all over your RAM.  Chrome is better than Firefox but still pretty leaky.  On Windows XP, it was not uncommon for Chrome to be using more than half a gigabyte of RAM.

Now, the surprising thing is, Linux has managed to gobble up all my RAM.  But has it?  Bear with me and I’ll explain why Linux has -not- actually gobbled up all of my RAM (despite what ‘top’ tells me), whereas Windows, given the same constraints, would have.

The way Linux handles memory differs enormously from Windows.  If something leaks in Windows, that RAM is effectively unrecoverable, even after the application has quit.  In Linux, however, once an application quits the kernel takes back every page it was given, leaked or not.  On top of that, Linux puts otherwise-idle RAM to work as ‘cached’ memory: data it has already read from disk doesn’t just get -deleted-, it’s kept around, so if an application comes back and wants it again it’s right there waiting (efficiency++), but if another application needs that RAM, Linux isn’t gonna say no – it simply drops the cache and hands the pages over.  In other words, the memory is still ‘in use’ but it’s also still ‘available’ to any new application that requests it.
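
If you want to see the cache in action, here’s a quick experiment (the file path is just a stand-in – use any large file you have lying around, and assume it fits comfortably in your free RAM): read a big file twice and compare the timings.  The second read should come back dramatically faster, because the data is served straight out of the page cache instead of off the disk.

$ time cat /path/to/some/large/file > /dev/null   # first read: comes off the disk
$ time cat /path/to/some/large/file > /dev/null   # second read: served from RAM (the page cache)

That cached copy is exactly the sort of memory ‘top’ counts as ‘used’, yet the kernel will evict it the instant another program actually needs the space.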

This presents a tricky question when you’re measuring RAM usage – do you count that memory as ‘in use’ or as ‘available’?  Well, ‘top’ (and some other tools) count it as ‘in use’, and ‘free’ will also tell you (on the Mem line) that it’s ‘in use’.  However, ‘free’ also has a “-/+ buffers/cache” line which subtracts the buffered and cached memory, giving you a truer picture of how much RAM is actually available to new applications.  That’s the line you should be quoting when you measure RAM usage – not the raw figure above it, which counts perfectly reclaimable cache as ‘used’.
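
If you’re curious, you can do that same arithmetic yourself straight from /proc/meminfo – this is just a sketch of what ‘free’ is doing for you (adding the free, buffer and cache figures together), nothing official:

$ free -m   # note the "-/+ buffers/cache" row
$ awk '/^(MemFree|Buffers|Cached):/ {sum += $2} END {print sum " kB genuinely available"}' /proc/meminfo

The awk line totals MemFree, Buffers and Cached, which is (near enough) the ‘free’ column of the “-/+ buffers/cache” row.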

Windows XP, on the other hand (don’t quote me on Vista or 7 – I’m unfamiliar with the newer kernels, but I’m pretty sure it’s the same story as far as memory allocation goes), trusts applications to free their own memory.  If they don’t, Windows will never reclaim what was lost (not until a reboot, anyway).  I’m not sure exactly what the kernel is doing internally (only Microsoft knows) to produce this behaviour, but I’m not the only one to have made the observation, and I can assure you that Windows XP, left to its own devices, will start swapping pages out like there’s no tomorrow after a couple of days (depending on how much RAM you have).

Suffice it to say, if you run WinXP you’re gonna see slowdown after a while, whereas if you run Linux you can (in theory) keep it running forever.  Note that kernel leaks (which are rare but do happen) and certain other situations (e.g. video driver bugs) can still force a reboot through lost memory, but that scenario is far rarer in the Linux world than in Windows.

It is mainly for this reason (plus the kernel’s greater stability, thanks to better design and fewer crash-inducing bugs) that Linux servers are able to run for…

chris@w1zard:~$ uptime
20:54:58 up 125 days,  4:24,  2 users,  load average: 0.00, 0.00, 0.00

Yup, w1zard (my server) has now been running for 125 days, 4 hours and 24 minutes without a reboot.  The load averages of 0.00 simply mean the quad-core processor doesn’t even notice the minute amount of traffic it has to handle (mail only, at the moment).  I’m very proud.
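
(If you’re wondering where those numbers come from: on Linux, ‘uptime’ reads them straight out of /proc/loadavg, so you can see the same three figures for yourself.)

$ cat /proc/loadavg
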
Have fun hacking!