In many ways, the original computers were much like computers of today. They would spend their days sat in offices, occasionally overheating, as they performed a series of complex tasks. But there was one crucial difference: the first computers could catch colds, and feel pangs of hunger. Occasionally, they might even come into work with a hangover. After all, this was 400 years ago, and they were only human. Back then, being a ‘computer’ was just a job.
This may sound like trivia, but I find it a reassuring thought. It reminds me that there’s nothing particularly special about modern computers. We shouldn’t be intimidated by them. They just do what we can do … but they do it extremely well. What’s more, for a very long time, we humans had the upper hand.
The idea of ‘human computers’ has shot to prominence recently, with the release of the movie Hidden Figures — a biopic about three black women who filled that role in NASA’s aeronautics and space programs. However, the term ‘computer’ actually first appeared on the scene way back in the 17th century. At that time, the scientific revolution was well underway. People were asking questions about the stars and, amazingly, were able to come up with increasingly accurate answers. This search for knowledge meant one thing: complicated calculations. A lot of complicated calculations. And the machines of the time were not up to the task.
Step forward human computers. These were smart — in some cases brilliant — mathematicians who would spend their days performing huge swathes of calculations for their employer. What’s more, many human computers were women — not for progressive reasons, sadly, but because they were willing to accept far less pay. One of the first prominent computers was Nicole-Reine Lepaute, who, in 1758, helped predict the path of Halley’s Comet. This lineage continued for hundreds of years, all the way up to the likes of Jean Bartik, who, soon after World War Two, went from human computer to programming the ENIAC — possibly the first usable electronic computer.
Throughout that period, the computer’s role remained the same. In the words of Alan Turing: “The human computer is supposed to be following fixed rules; he has no authority to deviate from them in any detail”. Needless to say, the job was not exactly a thrill-a-minute.
This requirement still holds today. The main objective of any computer — human or electronic — is to follow a sequence of instructions rigidly. It is here that computers distinguish themselves from calculators. That sequence of instructions — which these days we call a program — is what makes computers really powerful. All of a sudden, they can take our data and actually turn it into something useful.
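To make that idea concrete, here is a minimal sketch in Python (the readings and the three tiny steps are invented, purely for illustration): a short, fixed sequence of instructions that takes some raw numbers and turns them into something slightly more useful.

```python
# A 'program' is really just a fixed sequence of instructions applied to data.
# The readings and the steps below are invented, purely for illustration.
readings = [3, 1, 4, 1, 5, 9, 2, 6]

# Step 1: add everything up.
total = sum(readings)

# Step 2: divide by the number of readings to get an average.
average = total / len(readings)

# Step 3: report the result in a form a human can actually use.
print(f"Average reading: {average:.2f}")
```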
So far, so human. However, modern computers take that idea a step further: they only know how to follow instructions. Computers — unlike their human predecessors — are dumb beasts. What’s more, they deal exclusively in numbers. Words, even letters, are meaningless to them. Everything a computer fires out at us — from email to pictures to Netflix — is really just another sequence of numbers, converted into something that we can understand. If we peered deeper into any of these applications, we would just see more numbers — layer upon layer, all the way down, until we reached the inner workings of the computer itself.
(How that computer understands numbers is a whole other question. For now we won’t worry about that.)
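To see what that means in practice, here is a minimal Python sketch (the message and the pixel value are made up for illustration): both a piece of text and a single pixel turn out to be nothing more than numbers underneath.

```python
# Underneath, text is just numbers: every character has a numeric code.
message = "Hello"
print([ord(ch) for ch in message])   # prints [72, 101, 108, 108, 111]

# A pixel is just numbers too: red, green and blue levels from 0 to 255.
# This particular value is invented, purely for illustration.
pixel = (255, 200, 150)
print(pixel)                         # prints (255, 200, 150)
```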
Up to this point, there’s nothing here to intimidate the likes of Nicole-Reine Lepaute. After all, human computers were excellent mathematicians. And following instructions was their bread and butter — it was literally their job description. So why is ‘human computer’ no longer a viable career path? What do the machines have that we don’t?
The most obvious answer is one word: speed. Even a bog-standard home computer can do billions of calculations a second — a scale that would seem inconceivable to a human mind.
Let’s take a simple example. Imagine you wanted to look at your digital holiday photos. To a computer, each photo is just a grid of numbers, each of which gets converted into a coloured dot — what we know as a pixel. For a large picture, the grid might exceed 4 million pixels.
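As a rough sketch of that translation, here is what the job looks like in Python (the tiny grid and its colour values are invented; a real photo would hold millions of entries):

```python
# A toy 'photo': a 2-by-3 grid of numbers, each a packed red/green/blue value.
# The values are invented; a real photo would hold millions of them.
image = [
    [0xFF0000, 0x00FF00, 0x0000FF],   # red, green, blue
    [0xFFFFFF, 0x000000, 0x808080],   # white, black, grey
]

# The computer's job: turn each number into a coloured dot (a pixel).
for row in image:
    for value in row:
        red   = (value >> 16) & 0xFF
        green = (value >> 8) & 0xFF
        blue  = value & 0xFF
        print(f"pixel: red={red}, green={green}, blue={blue}")
```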
This is easy for a machine. In the hands of a human computer, it would be an entirely different matter. Four million numbers, all requiring translation. That means a lot of Jean Bartiks in one room. Or — if you insist on having just one Jean Bartik — a lot of her time.
But speed isn’t the only advantage that machines have. In fact, at one point in history, human and electronic computers had roughly equivalent speeds. At that time, there was another characteristic that made the machines stand out: reliability.
Let’s say that you hire 4 million human computers, having become weirdly obsessed with the idea of getting people to display your holiday snaps. You arrange the computers into a grid, in the middle of a large field. Next, you give each of them a pixel number and instruct them to quickly select the matching colour. Your plan is to get into a hot-air balloon and see the final picture from a bird’s-eye view. Having gone to all this effort, will you — from up high — see the exact image you’re expecting? Probably not. Someone will inevitably hold up the wrong colour. Oh, you might get pretty close. But in the modern day, ‘pretty close’ isn’t going to cut it.
Electronic computers don’t have this problem. In fact, so long as they’re working properly, they’re effectively perfect. Computers do not make basic maths mistakes. If they did, they would be useless.
And this reliability holds in many different ways. Not only do machines not make mistakes, but they also don’t get tired. This is what makes them so useful, but also so potentially frightening. Picture Kyle Reese’s chilling speech from The Terminator: “It absolutely will not stop … ever …” The machines are relentless. They do not need nap times, so long as they have a decent battery life.
It was Richard Feynman who first demonstrated the machines’ superior stamina when he decided to put both types of computer to the test. He, along with Nicholas Metropolis, organised a compute-off in 1943 — humans versus machines. The first day was neck and neck — if anything, the plucky Homo sapiens were ahead. By day three, the humans were starting to get weary, and the machines had pulled ahead. It would never be so close again.
“Aha,” you say, “but if machines are so reliable, then how come Windows keeps crashing on me?” Unfortunately, that’s down to us humans as well. We tell the computer what to do. For a computer, “I was just following orders” is a legitimate excuse.
Obviously we humans lost the computer race a long time ago. But that doesn’t change our shared history with machines — in terms of performing the role of ‘computer’, they aren’t so different to you or me.
Of course, if we did want that particular job back, we would have to become much faster and more reliable (and many other things besides). But why bother? Machines can do the grunt work. Meanwhile, we can relax, safe in the knowledge that our post-office-party hangovers won’t send Halley’s Comet way off course.