Clockwork minds

There’s been a lot of talk in recent months about the potential threat of A.I.: the danger that robots and artificial intelligences could become sentient, accelerate in intelligence and destroy humanity. Elon Musk, Bill Gates and Stephen Hawking have all warned of this threat. Musk is even pledging millions of dollars to study and plan against this outcome. It seems pretty weird that these guys are talking about the threat of A.I. rather than climate change, whose existence is very well supported by evidence and which will become highly dangerous to humanity, but there you go.

How real is the threat of rogue A.I.s? Can one really become sentient, accelerate in intelligence, form its own agenda and take over the world, destroying humanity in the process?

It’s a compelling and fascinating idea. It’s also a great idea for a movie and has been used several times already to great effect. In 1970, the film ‘Colossus: The Forbin Project’ told the story of an intelligent defence computer system built by the United States that joins forces with the Russian intelligent defence computer system. The two hold humanity to ransom by threatening to launch the atomic weapons they control. Younger readers may be more aware of a more recent film, The Terminator, directed by James Cameron, where, er, an intelligent defence computer system built by the United States joins forces with the Russian intelligent defence computer system and tries to destroy humanity by launching the atomic weapons they control. There are also the films Demon Seed and The Matrix and, most importantly, Kryten from Red Dwarf. But is this threat even possible?

One way to answer this is to think about what computers really are. Alan Turing, who is now so famous as a British genius that he’s been played by Benedict Cumberbatch, beautifully described a computer in theory with his Turing Machine. A Turing Machine reads a symbol from a tape (think of a punched tape or a punch card) and, depending on that symbol and its internal state at that time, writes a new symbol, changes its internal state and moves the tape one step. That’s it, and that’s fundamentally all a computer does too. A computer is in fact incapable of doing anything cleverer than a Turing Machine. The reason modern computers seem so powerful and impressive is that they can perform a Turing Machine-style action in less time than it takes light to leave the surface of the screen and reach your eyes. With that kind of speed, a computer can do so many actions that it can easily appear to be genuinely intelligent.
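To make this concrete, here’s a minimal sketch of a Turing Machine in Python. The rule-table format and state names are my own invention (Turing’s original notation was rather different), and this toy machine does nothing grander than invert a tape of 0s and 1s:

```python
# A minimal Turing Machine sketch. The rules table maps
# (state, symbol_read) -> (symbol_to_write, move_direction, next_state).
# This toy machine flips every 0 to 1 and every 1 to 0, then stops
# when the head runs off the end of the tape.

def run_turing_machine(tape, rules, state="start", halt="halt"):
    tape = list(tape)
    head = 0
    while state != halt and 0 <= head < len(tape):
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write                     # write a new symbol
        head += 1 if move == "R" else -1       # move the tape one step
    return "".join(tape)

# The bit-inverting machine: read a symbol, write its opposite,
# move right, stay in the same state.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run_turing_machine("10110", rules))  # -> 01001
```

Every computer you own is, at bottom, doing this read–write–move dance, only unimaginably faster.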

Complete Babbage

But if computers are fundamentally only Turing Machines, then we can look at them in another way. The first mechanical computers, according to many scholars, were Charles Babbage’s Difference Engine and his later, more sophisticated Analytical Engine, which would have used punched cards - the same items used in Jacquard’s original programmable weaving looms - to program its actions. As good old Wikipedia points out:

While Babbage's machines were mechanical and unwieldy, their basic architecture was similar to a modern computer. The data and program memory were separated, operation was instruction-based, the control unit could make conditional jumps, and the machine had a separate I/O unit.

Unwieldy was definitely a problem. To quote further from the article, if Babbage had completed his Difference Engine, it ‘would have been composed of around 25,000 parts, weigh fifteen tons (13,600 kg), and would have been 8 ft (2.4 m) tall.’ Not much like a MacBook Air then. The kind people at the Science Museum have made a working model of a Difference Engine. Cool!


But Babbage’s Analytical Engine is fundamentally no different from a Cray supercomputer or the humble laptop/desktop/tablet/smartphone you have in front of you. It is possible to make a Babbage Analytical Engine that performs exactly the same work as any computer in existence.

You can make a clockwork version of any computer ever created. It’s true that a clockwork version would run a million or a billion times slower and would be a million times bigger, but it would, in the fundamental way it works, be exactly the same. Computers are just clocks with punched cards.

Some readers may object to this simplification. They may point out that computers could have the potential to break out of their programming. Well, sorry, but they can’t, and this isn’t just a mechanical shortcoming, it’s a logical one.

Back in the early twentieth century, the logician and philosopher Bertrand Russell, who grew up in Richmond Park of all places, made it his life’s work to put the whole of mathematics on a secure logical foundation. He failed. This story is artfully brought to life in the graphic novel Logicomix and I heartily recommend it. The reason Russell failed is that logical systems of this kind carry internal paradoxes with them, something he himself discovered, confounding his fellow logicians.
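Russell’s own paradox concerned the set of all sets that don’t contain themselves: does that set contain itself? Written down as a self-referential rule and handed to a computer, the definition simply never finishes evaluating. A tiny sketch in Python (the function name is mine, purely for illustration):

```python
# Russell's set R is defined by: "R contains x exactly when x does not
# contain itself". Asked whether R contains R, the definition refers
# straight back to itself and never bottoms out -- Python eventually
# gives up with a RecursionError.

def russell_contains(x):
    return not russell_contains(x)  # R contains x iff x doesn't contain x

try:
    russell_contains("R")  # asking whether R contains itself
except RecursionError:
    print("no consistent answer: the definition never bottoms out")
```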


Here’s an example of this intrinsic logical problem, common to all groupings or sets of information. Russell talked about a barber but I'm going to talk about a cricket club (because I'm English):

The Rejects Cricket Club Paradox

An English county has many established cricket clubs and they all have admittance rules. Their admittance rules are so strict that there are some keen cricketers in the county who can’t get into any of those clubs. To help these people, a cricket club has been formed, called the Rejects Cricket Club, that is exclusively for anyone who can’t get into any cricket clubs. Isn’t that nice?

But there's a problem. If the Rejects Cricket Club only admits people who can’t get into any cricket clubs, then any person admitted into the Rejects Cricket Club will have been admitted into a cricket club, which means he won’t be eligible any more to be in the Rejects Cricket Club! He’ll have to leave, but if he does leave, he will immediately be eligible to be in the Rejects Cricket Club and invited back in. He’d be forever being admitted, then thrown out, then admitted again. To put it simply, if he is eligible, he isn’t and if he isn’t eligible, he is. A computer cannot deal with this paradox because it cannot step back from the logical circuit; it would go around in circles forever. Fortunately, we can, which is one reason why we are intelligent and computers aren’t.
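The loop our poor cricketer is stuck in can be sketched in a few lines of Python (the function and variable names here are my own, just to illustrate the flip-flop):

```python
# The Rejects Cricket Club rule as a mechanical update: a person is
# eligible exactly when they are not a member of a club, and their
# membership on the next "tick" follows their eligibility. There is no
# stable answer, so the state oscillates forever -- this is the circle
# a computer cannot step back from.

def update_membership(is_member):
    eligible = not is_member  # the club's rule: admit only non-members
    return eligible           # next tick's membership follows eligibility

is_member = False
history = []
for tick in range(6):         # run just six ticks of the endless loop
    history.append(is_member)
    is_member = update_membership(is_member)

print(history)  # oscillates: [False, True, False, True, False, True]
```

Admitted, thrown out, admitted, thrown out: the machine will happily clunk back and forth like this until the heat death of the universe.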

With this in mind, let’s try to answer the question at the beginning of this article again: ‘could a computer develop runaway intelligence, form its own independent mind and agenda, and destroy humanity?’ We can now rephrase this question as:

‘could a clockwork card stamper, that clunks to a halt when faced with a simple paradox, become a reasoning mind and destroy humanity?’

This new question is a lot easier to resolve. I’d go for a ‘no’, which would mean that we’re saved from runaway, mega-intelligent, world destroying A.I. robots! Hooray!

Now, what was the other threat?…