The book “Machine Code for Beginners” (published by Usborne in 1983) likened computer memory to a shelf or a cabinet with lots of tiny little drawers, each of which can hold just one byte of information at a time. The pictures had robots carrying the information in and out of them, on little pieces of paper.
One could make the argument that memory is the most important component of a modern computer. There’s not much you can do if there is no memory at all. Yet, at its simplest, it can be described as in the image above.
In the earliest computers there were only a few thousand of these “drawers”, and today the smartphone in your pocket can easily have more than 500 billion. It’s hard to illustrate things when their scale overshadows the structure.
I heard this sentence at a programming lecture over 20 years ago: “you don’t need to worry about pointers or memory utilization, those are concepts of the past”. What a dangerous thing to say! The reality is that neither one of them is a concept of the past. Memory exists, there’s just a lot more of it. Pointers exist (more on them below). There are towering stacks of millions of drawers stretching into the horizon, but they still exist.
Pointers
A pointer is an address that points at a specific drawer in memory. Even if the memory address is virtual, as it is in most modern operating systems, it’s still there, and even that virtual memory window is usually mapped to somewhere in physical memory. Following a pointer takes you to one exact location in memory.
You may choose to use a language that has no visible concept of pointers. The underlying interpreter still utilizes them heavily. Here’s an example in Python:
computer = {"cpu": "80386", "ram": "32 MB", "disk": "580 MB"}
print(computer["ram"])
# Output: 32 MB
other_computer = computer
other_computer["ram"] = "64 MB"
What’s the RAM in the first computer now? It’s 64 MB.
print(computer["ram"])
# Output: 64 MB
This, too, is indirectly due to memory pointers. Internally, the variable “other_computer” is now the exact same thing as “computer”. Modifying one changes both, since they’re both windows to the same data in memory.
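You can see this for yourself: CPython’s built-in id() reveals the identity of the object a name refers to, which is effectively its address in the interpreter. And if what you actually wanted was an independent copy, you have to ask for one explicitly – a small sketch:

```python
computer = {"cpu": "80386", "ram": "32 MB", "disk": "580 MB"}
other_computer = computer

# Both names refer to the exact same object in memory
assert id(computer) == id(other_computer)
assert computer is other_computer

# An explicit copy gets its own identity; modifying it
# leaves the original alone
separate = dict(computer)      # or computer.copy()
separate["ram"] = "64 MB"
assert computer["ram"] == "32 MB"
assert id(separate) != id(computer)
```

The assignment only copies the reference; dict() (or .copy()) is what actually duplicates the drawer’s contents.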
The same process in C looks very different, as you have to purposefully declare the variables as pointers, denoted by the asterisk * after the data type:
#include <stdlib.h>
#include <string.h>

typedef struct {
    char cpu[8];
    int ram_mb;
    int disk_mb;
} computer_t;

void pointer_demo(void) {
    computer_t *computer = malloc(sizeof(computer_t));
    computer_t *other;

    strcpy(computer->cpu, "80386");
    computer->ram_mb = 32;
    computer->disk_mb = 580;

    other = computer;
    other->ram_mb = 64;

    // please, be responsible
    free(computer);
}
C requires more setup and fully manual memory management, but that also makes it an excellent learning tool.
It also presents you with the classic pointer problem: when you no longer need to reserve memory for the data, you have to free it up, but just once.
Both computer and other are declared as computer_t *, a pointer. Freeing either one of them will do the trick, but freeing both will make things blow up rather nicely – the classic double free. This is why reference counting was introduced.
Reference counting
Any time you take a step further away from the hardware, you’ll likely encounter Reference Counting, in languages such as Swift, Objective-C, C++ or Rust. Reference Counting combines pointers with automatic memory management to deal with the memory reservation problem. It can be summarized like this: “If no one’s using it, and no one knows what it is or where it is, why keep it around?”
In Swift, it’s a built-in feature of the language; in C++, you’ll have to either implement it yourself or use std::shared_ptr from the standard library; and with Objective-C, it depends on which compiler flags you’re using.
In the above example, when computer (now wrapped in a class that implements RC) is copied to other, the reference count in the shared object is incremented. When one of these instances leaves the scope, the count is decremented – and once it reaches zero, the memory is freed.
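The mechanism itself is simple enough to sketch by hand. Here’s a toy version in Python – RefCounted, retain and release are hypothetical names for illustration, not any real runtime’s API:

```python
class RefCounted:
    """Toy reference counter (illustration only, not how a real runtime does it)."""
    def __init__(self, data, on_free):
        self.data = data
        self.count = 1            # whoever created it holds the first reference
        self._on_free = on_free   # callback standing in for free()

    def retain(self):
        self.count += 1           # a new reference appears ("other = computer")
        return self

    def release(self):
        self.count -= 1           # a reference goes out of scope
        if self.count == 0:
            self._on_free(self.data)  # last one out frees the memory

freed = []
computer = RefCounted({"ram_mb": 32}, on_free=freed.append)
other = computer.retain()   # count is now 2
computer.release()          # count drops to 1 – data still alive
other.release()             # count hits 0 – data is "freed"
```

Real implementations have to worry about thread safety and reference cycles, but the core bookkeeping is exactly this.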
Reference counting combines pointers with automatic memory management. It’s like a business owner trying to close the shop to take a break. Each time there’s a new “customer” in through the doors, the counter is incremented, and as they exit the door, it’s decreased by one. When the counter reaches zero, he can go and lock the door and leave. In the same way, the memory the pointer is pointing at will be released and it becomes available again. I guess this is not a great illustration as it would mean the shop owner will never be able to find his shop again.
Interpreted and bytecode languages handle this differently: CPython combines reference counting with a garbage collector that sweeps up reference cycles, while Java skips reference counting and relies on tracing garbage collection alone – which adds a measure of weirdness. It means you can’t say “I no longer need this, please free up the memory”. Instead, there’s a separate mechanism inside the runtime that – whenever it feels it’s necessary – goes and cleans things up, freeing memory held by objects nothing refers to anymore. It’s not your problem, but it’s also out of your control.
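In CPython you can even watch the counter move. sys.getrefcount is a CPython implementation detail (it reports one extra reference for its own argument), so this is a sketch tied to that interpreter, not a language guarantee:

```python
import sys

computer = {"cpu": "80386", "ram": "32 MB"}
base = sys.getrefcount(computer)   # includes the temporary reference
                                   # held by getrefcount's own argument

other = computer                   # a new name for the same object:
assert sys.getrefcount(computer) == base + 1   # the count went up by one

del other                          # the name is dropped:
assert sys.getrefcount(computer) == base       # the count went back down
```

When the last name disappears, the count hits zero and CPython frees the dictionary immediately – no waiting for a collector pass.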
Copy-on-write
Let’s step into the twilight zone with Swift. It has a mechanism called Copy-On-Write, which takes references and pointers to a new level – in the above example, the first two lines would be still pointing at the same address in memory, but modifying the other_computer would trigger duplicating the data and modifying only one of them:
import Foundation

func pointer(_ obj: UnsafeRawPointer) -> String {
    return String(format: "%p", Int(bitPattern: obj))
}

var a: [Int] = [1, 2, 3]
var b = a
print("a is", pointer(a))
print("b is", pointer(b))
b[1] = 4
print("b is", pointer(b))
This outputs roughly something like:
a is 0x100705ac0
b is 0x100705ac0
b is 0x100706000
You see that after modifying the contents of b it gained a whole new memory address.
Swift is not the only language where you’ll encounter COW: PHP utilizes it heavily, it’s a prominent feature of some file systems, and std::string in C++ commonly used it from 1998 until C++11 effectively outlawed it in 2011.
Why this matters
Seeing the actual pointer value is becoming more difficult. This is for your own protection, but at the same time it obscures a mechanism that underpins most languages.
You can think in pointers.
They’re the fastest possible way of conveying data from one part of a program to the other. A true pointer is like a zero-lag space-time portal to a real place in memory and the closest thing to telling a modern processor to go somewhere for its data.
Being conscious of pointers can help you write faster code.
A copy is always a heavier operation. It’s like making a photocopy of something and carrying it with you at all times. The more your data sets grow, the heavier the stack of paper you’re carrying, since you’re the only one with that instance of the data.
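The difference is easy to demonstrate in Python: binding a new name is free, while an actual copy allocates a whole new object. The size figure below leans on CPython internals (a list stores 8-byte pointers to its elements), so treat it as a rough sketch:

```python
import sys

big = list(range(1_000_000))

alias = big          # just another name pointing at the same object:
assert alias is big  # nothing was copied

clone = list(big)    # a real copy: new object, new memory
assert clone is not big
assert clone == big  # same contents, different drawers

# The copy carries its own weight – roughly 8 bytes per element
# for the pointer array alone (CPython detail)
assert sys.getsizeof(clone) >= len(big) * 8
```

Every “alias” costs a handful of bytes no matter how big the data is; every “clone” costs memory proportional to the data itself.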
Years ago, a friend needed a spare part for something, and I happened to have one at hand. He asked how much it was, and I said “it’s free”. He responded with “Thanks, I’ll have a hundred of these.”
This applies to memory. You can be sloppy once or twice, but when you’re sloppy a million times, a 16-byte structure in memory suddenly takes 16 MB. Or if you do things like image processing, not releasing intermediates during processing might make the process require 2 GB of free RAM instead of 200 MB. This can even have a direct effect on performance.
Being conscious of memory lets you do more with less.
Memory is the most important component in modern computing. Don’t forget it.
p.s.
Please, if you have personal stories on memory management, drop a message below.