So, what is a Stack, really?
Chronicles of C++ Memory Management - Part 4
Keeping my promise of sharing some thoughts on C++ and Memory Management deep-dives. Last time we discussed that it's really all just RAM; what differentiates it is how that RAM is organized and used. This time I want to put a magnifying glass over the Stack, and I'll probably keep doing that for at least one more article. Anyway, let's talk Stacks:
One description that I personally liked is that "the Stack is a part of RAM provided by the OS as a scratch space for your thread". Each thread gets its own the moment it's spawned, and it goes away the moment the thread ends.
Due to its design (small, preallocated, holding just a handful of things) it is also MUCH faster than the Heap. And if you think about it, that makes sense. The Stack is like a fixed-size memory pool you get immediately, and you can do pretty much whatever you want with it. There's no bookkeeping, no waiting for the OS to hand you a memory chunk, etc. And again, due to its nature, lots of stack variables tend to end up in the CPU cache anyway, making access even faster.
But the Stack is of LIMITED size (e.g. around 1 MB by default on Windows). So, why is that? Well, one reason is to catch runaway recursion early. Think about it - if you write a recursive function that never reaches its base case, what you're really doing is piling function call frames onto the stack, one per call. If there wasn't a "maximum", you'd effectively eat up the whole memory before anything stopped you. BTW, good luck googling "why we have stack overflow" :)
Stack variables are also famous for "not having to deallocate them". Why is that? It's actually really simple - when a function returns, its frame is reclaimed by just moving the stack pointer back, and once your thread dies, the OS removes the whole region (e.g. that 1 MB) in one go. So technically the Stack isn't fast because it's implemented as a stack, but simply because it's a small memory pool that gets wiped out the moment it's not needed any more. You could really achieve the same thing with Heaps, but you'd be wasting precious memory. More words about that in future articles ;)
Finally, a bit on memory leaks. And this is really amusing. If you say "I want to allocate 32 bits" (e.g. to store an integer there), you get them on the Stack (i.e. if you write "int a"). But if you say "I need to allocate 30 megabytes of RAM", well, now you have an issue, because the Stack is only about 1 MB, so you have to store those 30 megs somewhere else. And that somewhere else is the rest of RAM - the Heap, if you like.

So you allocate 30 megs OUTSIDE of your Stack, but you really need to know WHERE they are, right? So you get a pointer back, telling you - "your 30 megs are located THERE". And that "there" is really a memory address, which nowadays is usually 64 bits. Those 64 bits (i.e. a pointer to the memory where your 30 megs live) are stored on the Stack. You effectively now have 64 bits on the Stack and 30 megs somewhere OUTSIDE (on the Heap).

Well, what really happens is that when your pointer goes out of scope (or your thread exits), that part of the Stack gets wiped out (and those 64 bits go out of existence), but you STILL have 30 megs laying around; and they will be there until your program exits. Those 30 megs, that NOBODY is using any more, just hang there, and neither the OS nor your allocator has a clue that you're done with them - as far as they're concerned, that block is still in use. Repeat this pattern a few times and sooner or later you end up with tons of wasted memory.
In order to overcome this, you really have a couple of options. One - don't use the Heap at all, obviously; but let's assume you'll need WAY more than 1 MB of memory to, for example, start a Chrome Tab. Two - allocate 1 gigabyte of RAM on the Heap (so that you can start that Tab), but make sure that the moment the tab is closed, you actually tell the allocator you don't need it any more. And three - you could create Memory Pools (which I'll be discussing in future articles) and then just deallocate them in one go (i.e. like it happens with the Stack).
And there you have it. From the OS's point of view - it's really all just memory. It's about HOW that memory is managed and what useful things you can do with it :) The next article will dive a bit deeper into it (i.e. why the Stack is so much faster than the Heap, and the intricacies of it all).
Until then, if you learned something new, please feel free to share this post and newsletter with your colleagues! :)
P.S. If you missed past articles, here they are (sorted from latest to oldest):