Clockwork minds

How real is the threat of rogue A.I.s? Can one really become sentient, accelerate in intelligence, form its own agenda and take over the world, destroying humanity in the process?
It’s a compelling and fascinating idea. It’s also a great idea for a movie and has been used several times already to great effect. The 1970 film ‘Colossus: The Forbin Project’ told the story of an intelligent defence computer system, built by the United States, that joins forces with its Russian counterpart. Together, they hold humanity to ransom by threatening to launch the atomic weapons they control. Younger readers may be more familiar with a more recent film, The Terminator, directed by James Cameron, in which, er, an intelligent defence computer system built by the United States becomes self-aware and tries to destroy humanity by launching the atomic weapons it controls. There are also the films Demon Seed and The Matrix and, most importantly, Kryten from Red Dwarf. But is this threat even possible?


But if computers are, fundamentally, only Turing Machines, then we can look at them in another way. The first mechanical computer, according to many scholars, was Charles Babbage’s Difference Engine, followed by his later, more sophisticated Analytical Engine, which would have used punched cards - the same items used in Jacquard’s original programmable weaving looms - to programme its actions. As good old Wikipedia points out:
While Babbage's machines were mechanical and unwieldy, their basic architecture was similar to a modern computer. The data and program memory were separated, operation was instruction-based, the control unit could make conditional jumps, and the machine had a separate I/O unit.
Unwieldy was definitely a problem. To quote further from the article, if Babbage had completed his Difference Engine, it ‘would have been composed of around 25,000 parts, weighed fifteen tons (13,600 kg), and would have been 8 ft (2.4 m) tall.’ Not much like a MacBook Air, then. The kind people at the Science Museum have since built a working Difference Engine. Cool!
But Babbage’s Analytical Engine is fundamentally no different from a Cray supercomputer or the humble laptop/desktop/tablet/smartphone you have in front of you. Given enough time, punched cards and patience, it would be possible to build a Babbage Analytical Engine that performs exactly the same work as any computer in existence.
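To make this a little more concrete, here is a minimal sketch in Python - entirely my own invention, not anything Babbage designed - of a machine organised along the lines the Wikipedia quote describes: separate program and data memory, instruction-driven operation, a conditional jump and a simple output step. The instruction names and the little multiplication program are made up purely for illustration.

    # A toy instruction-based machine in the spirit of the description above:
    # program and data kept separate, operation driven by instructions,
    # a conditional jump, and a simple output step. The instruction names
    # and the program are invented for illustration, not Babbage's notation.

    def run(program, data):
        pc = 0  # program counter: index of the next instruction
        while pc < len(program):
            op, *args = program[pc]
            if op == "ADD":                    # data[a] = data[a] + data[b]
                a, b = args
                data[a] += data[b]
            elif op == "SUB":                  # data[a] = data[a] - data[b]
                a, b = args
                data[a] -= data[b]
            elif op == "JUMP_IF_NOT_ZERO":     # the conditional jump
                a, target = args
                if data[a] != 0:
                    pc = target
                    continue
            elif op == "PRINT":                # the 'separate I/O unit'
                print(data[args[0]])
            pc += 1

    # Multiply 6 by 7 by repeated addition: data[0] is the running total,
    # data[1] the number being added, data[2] a countdown, data[3] the constant 1.
    program = [
        ("ADD", 0, 1),                # total += 6
        ("SUB", 2, 3),                # countdown -= 1
        ("JUMP_IF_NOT_ZERO", 2, 0),   # loop back while the countdown isn't zero
        ("PRINT", 0),                 # prints 42
    ]
    run(program, data=[0, 6, 7, 1])

Babbage would have driven the same kind of loop with gears, cams and punched cards rather than a Python while loop, but the logic is identical: a handful of clockwork-simple operations, applied one after another, is all a computer fundamentally needs.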

Some readers may object to this simplification. They may point out that computers could have the potential to break out of their programming. Well, sorry, but they can’t, and this isn’t just a mechanical shortcoming; it’s a logical one.
Back in the early twentieth century, the logician and philosopher Bertrand Russell, who grew up in Richmond Park of all places, made it his life’s work to give the whole of mathematics a rigorous logical underpinning. He failed. This story is artfully brought to life in the graphic novel Logicomix, which I heartily recommend. The reason Russell failed is that all logical systems carry internal paradoxes with them, something he himself discovered, confounding his fellow logicians.
Here’s an example of this intrinsic logical problem, common to all groupings or sets of information. Russell talked about a barber, but I’m going to talk about a cricket club (because I’m English):
The Rejects Cricket Club Paradox

Imagine a cricket club founded for people who can’t get into a cricket club: the Rejects Cricket Club. Its one and only membership rule is that it admits precisely those people who cannot get into any cricket club.
But there's a problem. If the Rejects Cricket Club only admits people who can’t get into any cricket club, then any person admitted into the Rejects Cricket Club will have been admitted into a cricket club, which means he will no longer be eligible to be in the Rejects Cricket Club! He’ll have to leave, but if he does leave, he will immediately become eligible for the Rejects Cricket Club again and be invited back in. He’d be forever being admitted, then thrown out, then admitted again. To put it simply: if he is eligible, he isn’t, and if he isn’t eligible, he is. A computer cannot deal with this paradox because it cannot step back from the logical circuit; it would go around in circles forever. Fortunately, we can, which is one reason why we are intelligent and computers aren’t.
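To see just how literally a machine follows such a rule, here is a toy simulation in Python - again entirely my own sketch, nothing from Russell - that simply applies the club’s membership rule over and over. Without the cap on the number of steps it would flip our reject in and out of the club forever, which is precisely the problem.

    # Mechanically apply the Rejects Cricket Club rule:
    # you may be a member only if you belong to no cricket club -
    # but the Rejects Cricket Club is itself a cricket club.

    def apply_rule(is_member):
        belongs_to_a_cricket_club = is_member   # the Rejects CC is the only club that will have him
        eligible = not belongs_to_a_cricket_club
        return eligible                         # his membership status next time round

    is_member = False
    for step in range(6):                       # capped, or this would run forever
        is_member = apply_rule(is_member)
        print(f"Step {step + 1}: {'admitted' if is_member else 'thrown out'}")

    # Output: admitted, thrown out, admitted, thrown out, admitted, thrown out...
    # The rule never settles on an answer; a human simply steps back and shrugs.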

So the question we should really be asking is: ‘Could a clockwork card stamper, one that clunks to a halt when faced with a simple paradox, become a reasoning mind and destroy humanity?’
This new question is a lot easier to resolve. I’d go for a ‘no’, which would mean that we’re saved from runaway, mega-intelligent, world-destroying A.I. robots! Hooray!
Now, what was the other threat?…