While reading George Dyson’s computer history book Turing’s Cathedral earlier this year, I was struck by how physical the act of programming was back in the 1940s and 50s, when the age of computers began. Take a close look at the lead image of this post, borrowed from Dyson’s book, which shows John von Neumann and the MANIAC computer in 1952. At hip level in the photo are a group of Williams cathode-ray memory tubes, each one storing 1,024 bits. There were 40 tubes, so the total capacity was 40,960 bits (5 kilobytes!).
What’s even more remarkable than the fact that von Neumann could touch the memory tubes is that he was also able to see what was happening inside them. “In the foreground [of the photo] is the 7-inch-diameter 41st monitor stage, allowing the contents of the memory to be observed while in use,” wrote Dyson.
When von Neumann and his colleagues programmed the MANIAC, they were acutely aware of what was happening inside the machine. They had to understand precisely how memory worked, in order to physically manipulate it. “Every memory location had to be specified at every step,” explained Dyson, “and the position of the significant digits adjusted as a computation progressed.”
Fast forward nearly seventy years, and computer programming is a far less physical act. For a start, you don’t necessarily know where the computer you’re programming is located (because of cloud computing). But more than that, you may not even have to write any code.
The latter trend, which goes by the label “low-code,” is a fairly recent development. As Tyler Jewell told me in our recent interview, at the turn of the century “80% of a new application was custom code, and 20% was sourced from a reusable component.” But in 2020, Jewell thinks that “20% of an application is custom code and 80% is sourced from a reusable module.”
So in the space of seventy years, we’ve gone from having to program instructions — using machine language, no less — into a cathode-ray memory tube, to assembling 80% of an application from reusable modules and deploying it to an internet service (with no idea where in the world it will actually be computed).
There’s seemingly no end to this larger trend of abstraction, either. With today’s serverless environments, you don’t need to know anything at all about the computers that run your applications. The backend is entirely abstracted away.
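To make that abstraction concrete, here’s a minimal sketch of what a serverless function might look like. The handler signature loosely follows the common AWS Lambda-style convention; the function name and event shape are illustrative assumptions, not any particular provider’s API:

```python
import json

def handler(event, context=None):
    """A serverless-style handler: pure application logic, with no
    knowledge of the host machine, its memory layout, or where in
    the world it is running."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }

# Invoked locally, the handler behaves like any ordinary function;
# in production, the cloud provider decides where (and on what
# hardware) it actually executes.
print(handler({"name": "MANIAC"})["body"])
```

The point is what’s missing: no server setup, no memory management, no deployment target — the backend is someone else’s problem.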
Do Most Programmers Understand Computers Today?
You have to wonder whether all this abstraction is impacting the programming profession. After all, in John von Neumann’s day, programmers had to know how memory tubes functioned — not to mention the rest of the hardware. But modern programmers are far less likely to have an understanding of how memory physically works on a silicon chip. So, should we be concerned that developers will gradually lose the ability to truly understand how computers work?
“No, I think it’s actually a lot of the same problems shifting location,” Rauch said, referring to the problems of memory management and other aspects of computation.
Rauch thinks developers are still “going to have to think about memory, you’re going to have to think about GPU memory,” but they are approaching it from a different point of view than in previous generations. As he put it, nowadays the “computation is closer to the device.”
Of course, von Neumann and his crew were as close as you can physically get to the “device” (in their case, MANIAC), but that’s because the computation and the delivery of the resulting information both happened on the same machine. Today’s networked environment is highly distributed, so the end user device could be literally anywhere in the world.
Keep Feeling Fascination
Alan Turing once said that computer programming “should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself.”
Also, as Tyler Jewell pointed out to me, the level of abstraction a developer needs for each application will vary. If all they want to do is build an app that analyzes spreadsheet data, for example, then a low-code environment will usually suffice (or even “no-code,” using drag and drop and similar visual prompts).
However, Jewell thinks the discipline of software engineering is still very valuable when it comes to more complicated applications.
“As a team of engineers looks at the application or the system to build, they tease out the requirements for that. If the requirements are demanding enough, they’re going to […] naturally navigate lower and lower in the stack to get control over the components they need, to extract the qualities of the system that they desire.”
So while there’s been a continual progression up the stack over the past seventy years, developers still need to understand — at a conceptual level at least — how memory is managed at computation time. Whether that’s from the perspective of the end user’s device, or because of a particularly demanding set of requirements for an application, some things just can’t be abstracted away. And I think Turing would say, that’s where the fascination is.
Feature image: Shelby White and Leon Levy Archives Center, Institute for Advanced Study; photograph by Alan Richards; via George Dyson’s Turing’s Cathedral.