My new book Surviving AI (Three Cs) argues that AI will continue to bring enormous benefits, but that it will also present a series of formidable challenges. The range of possible outcomes is wide, from the terrible to the wonderful, and they are not pre-determined. We should monitor the changes that are happening, and adopt policies that will encourage the best possible outcomes.

You may have heard already of the ‘technological singularity’: the idea that a superintelligence will be created sometime this century, and that when that happens the rate of technological progress will become so fast that ordinary humans cannot keep up. In the same way that the singularity inside a black hole is a point beyond which the laws of physics no longer apply, so the technological singularity is a point beyond which the future cannot be readily understood.

Well before we get to that (if we do), there may be another massive discontinuity, which I call the ‘economic singularity’. This is the point at which almost every job can be done cheaper and better by an AI than by a human. If and when that happens – and it could happen well within your lifetime – we will probably need an entirely new economic system to cope.

To help us understand how artificial intelligence got us to this remarkable point in time, here are seven vignettes from its history…

1) Greek myths

Stories about artificially intelligent creatures go back at least as far as the ancient Greeks. Hephaestus (Vulcan to the Romans) was the blacksmith of Olympus: as well as creating Pandora, the first woman, he created lifelike metal automatons.

Hephaestus had an unpromising start in life. Greek myths often have multiple forms, and in some versions, Hephaestus was the son of Zeus and Hera, while in others he was Hera's alone. One of his parents threw him from Mount Olympus, and after falling for a whole day he landed badly, becoming lame.

He was rescued by the people of Lemnos, and when Hera saw the ingenious creations he went on to build she relented, and he became the only Greek god to be readmitted to Olympus.

His creations were constructed from metal but their purposes varied widely. The most sinister was the Kaukasian eagle, cast in bronze, whose job was to gore the Titan Prometheus every day, ripping out his liver as a punishment for the crime of giving the gift of fire to humanity.

At the other end of the spectrum were Hephaestus' automated drinks trolleys. The Khryseoi tripods were a set of 20 wheeled devices that propelled themselves in and out of the halls of Olympus during the feasts of the gods.

2) The first SF: Frankenstein and Rossum's Universal Robots

Although numerous earlier stories contained plot elements and ideas that recur throughout science fiction, the author Brian Aldiss claimed Mary Shelley's Frankenstein (1818) was the genre's real starting point because the hero makes the deliberate decision to employ scientific methods and equipment. It is therefore appropriate that, contrary to popular belief, the title refers to the mad scientist figure rather than the monster.

Mary Shelley (1797-1851) is best known for her Gothic novel ‘Frankenstein’. (Photo by PHAS/UIG via Getty Images)

While Frankenstein seems like a grotesque romance and very much of its time, the 1920 play RUR, or Rossum's Universal Robots, introduces themes that still concern us today. Its Czech author Karel Čapek received plaudits when the play was first staged, but later critics have been less kind. Isaac Asimov called it terribly bad, and it is rarely read or staged today. Nevertheless it introduced the idea of a robot uprising that wipes out mankind, which has prompted a huge number of stories since, and it foresaw concerns about widespread technological unemployment as a consequence of automation. And of course it gave the world the word ‘robot’. Čapek's robots are androids, with a human appearance as well as the ability to think for themselves.

In the uprising, the robots kill all the humans except for one, and the play ends with two of them discovering human-like emotions, which seems to set them up to begin the cycle all over again.

3) Charles Babbage and Ada Lovelace

The first design for a computer was drawn up by Charles Babbage, a Victorian academic and inventor. Babbage never finished the construction of his devices, but in 1991 a machine was built to his design, using only tolerances achievable in his day, which showed that it could indeed have worked in the Victorian era.

Babbage's Difference Engine (designed in 1822) would carry out basic mathematical functions, and the Analytical Engine (design never completed) would carry out general purpose computation. It would accept as inputs the outputs of previous computations recorded on punch cards.

Babbage declined both a knighthood and a peerage, being an advocate of life peerages. Half his brain is preserved at the Royal College of Surgeons, and the other half is on display in London's Science Museum.

Babbage's collaborator Ada Lovelace has been described as the world's first computer programmer thanks to some of the algorithms she created for the Analytical Engine. Famously, Ada was the only legitimate child of the Romantic poet and adventurer Lord Byron. Although she never knew her father, she was buried next to him when she died at the early age of 36. There is controversy about the extent of her contribution to Babbage's work, but whether or not she was the first programmer, she was certainly the first programme debugger.

Babbage's Difference Engine No. 1, c1832. This trial portion of the Difference Engine is one of the earliest automatic calculators. (Photo by Universal History Archive/UIG/Getty Images)

4) Alan Turing (and Bletchley Park)

The brilliant British mathematician and code-breaker Alan Turing is often described as the father of both computer science and artificial intelligence. His most famous achievement was breaking the German naval ciphers at the code-breaking centre at Bletchley Park during the Second World War. He used complicated machines known as ‘bombes’, which eliminated enormous numbers of incorrect solutions to the codes so as to arrive at the correct solution. His work is estimated to have shortened the war by two years, but incredibly, his reward was to be prosecuted for homosexuality and obliged to accept injections of synthetic oestrogen that rendered him impotent. He died two years later and it took 57 years before a British government apologised for this barbaric behaviour.

Before the war, in 1936, Turing had already devised a theoretical device called a Turing machine. It consists of an infinitely long tape divided into squares, each bearing a single symbol. Operating according to the directions of an instruction table, a reader moves the tape back and forth, reading one square – and one symbol – at a time. Together with his doctoral supervisor Alonzo Church, he formulated the Church-Turing thesis, which says that a Turing machine can simulate the logic of any computer algorithm.
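
Turing's abstract machine is simple enough to simulate in a few lines of code. Here is a minimal sketch in Python; the particular symbols, states and instruction table (a toy machine that just inverts a string of 0s and 1s) are invented for this illustration rather than taken from Turing's paper.

```python
# A minimal simulation of a Turing machine: a tape of symbols, a read/write
# head, and an instruction table mapping (state, symbol read) to
# (symbol to write, direction to move, next state).

def run_turing_machine(tape, table, state="start", halt="halt"):
    tape = list(tape)
    head = 0
    while state != halt:
        symbol = tape[head] if head < len(tape) else "_"   # "_" is a blank square
        write, move, state = table[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1                   # move the head right or left
    return "".join(tape).rstrip("_")

# Instruction table for the toy machine: invert each bit, halt on a blank square.
table = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", table))   # prints 01001
```

Despite its simplicity, the Church-Turing thesis holds that a machine of this kind can, given enough tape and time, carry out any computation a modern computer can.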

Turing is also famous for devising a test of machine intelligence, now known as the Turing Test, in which a machine demonstrates that it can think by rendering a panel of human judges unable to determine that it is not human (which is essentially the test that we humans apply to each other).

5) The Dartmouth Conference

Artificial intelligence became a genuine field of scientific research at a month-long conference at Dartmouth College in New Hampshire in the summer of 1956, which was premised on “the conjecture that every…feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” The organisers included John McCarthy, Marvin Minsky, Claude Shannon and Nathaniel Rochester, all of whom went on to contribute enormously to the field.

In the years following the Dartmouth Conference, impressive advances were made in AI. Machines were built that could solve school maths problems, and a programme called Eliza became the world's first chatbot, occasionally fooling users into thinking that it was conscious.

These successes and many others were made possible in part by surprisingly free spending by military research bodies, notably the Defense Advanced Research Projects Agency (DARPA, originally named ARPA), which was established in 1958 by President Eisenhower as part of the shocked US reaction to the launch of Sputnik, the first satellite to be placed into orbit around the Earth.

The optimism of the nascent AI research community overflowed into hubris. Herbert Simon said in The Shape of Automation for Men and Management (1965) that “machines will be capable, within 20 years, of doing any work a man can do". Marvin Minsky said two years later, in Computation: Finite and Infinite Machines (1967), that “Within a generation...the problem of creating 'artificial intelligence' will substantially be solved.” But hindsight is a wonderful thing, and it is unfair to criticise the pioneers of AI too harshly for underestimating the difficulty of replicating the feats of which the human brain is capable.

6) AI seasons (the ‘AI winters’ of the mid-1970s and late 1980s)

When it became apparent that AI was going to take much longer to achieve its goals than originally expected, there were rumblings of discontent among funding bodies. They crystallised in the 1973 Lighthill report, which highlighted the problem of ‘combinatorial explosion’, whereby a calculation that is simple with two or three variables becomes intractable as the number of variables grows.
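
To get a feel for what the Lighthill report was pointing at, here is a minimal Python sketch (an illustration of the general idea, not anything from the report itself) showing how quickly an exhaustive search grows as variables are added: each extra true/false variable doubles the number of combinations to check.

```python
# Combinatorial explosion in miniature: exhaustively checking every
# assignment of n true/false variables requires 2**n checks.
from itertools import product

def count_assignments(n_variables: int) -> int:
    """Count every possible true/false assignment of n variables."""
    return sum(1 for _ in product([True, False], repeat=n_variables))

for n in (3, 10, 20):
    print(f"{n} variables -> {count_assignments(n):,} assignments to check")

# 3 variables give 8 combinations, 20 give over a million, and 100 would give
# roughly 1.3 x 10**30 - hopelessly beyond any brute-force search.
```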

The first “AI winter” lasted from 1974 until around 1980. It was followed in the 1980s by another boom, thanks to the advent of expert systems, and the Japanese fifth generation computer initiative, which adopted massively parallel programming. Expert systems limit themselves to solving narrowly defined problems from single domains of expertise (for instance, litigation) using vast databanks. They avoid the messy complications of everyday life, and do not tackle the perennial problem of trying to inculcate common sense.

The funding dried up again in the late 1980s because the difficulty of the tasks being addressed was once again underestimated, and also because desktop computers and what we now call servers overtook mainframes in speed and power, rendering very expensive legacy machines redundant.

The second AI winter thawed in the early 1990s, and AI research has since been increasingly well-funded. Some people are worried that the present excitement (and concern) about the progress in AI is merely the latest ‘boom phase’, characterised by hype and alarmism, and will shortly be followed by another damaging bust.

But there are reasons for AI researchers to be more sanguine this time round. AI has crossed a threshold and gone mainstream for the simple reason that it works. It is powering services that make a huge difference to people's lives and enable companies to make a lot of money: fairly small improvements in AI now make millions of dollars for the companies that introduce them. AI is here to stay because it is lucrative.

7) AI in Hollywood

It is commonly thought that Hollywood hates AI – or rather that it loves to portray artificial intelligence as a threat to humans. In this view, the archetypal movie AI is a cold, clinical enemy that takes us to the brink of extinction. Oddly, we usually defeat them because we have emotions and we love our families, and for some unfathomable reason this makes us superior to entities which operate on pure reason.

In fact the Hollywood approach to AI is more nuanced than this. If you think of your 10 favourite films that prominently feature AI (or 20, if you have that many!) you will probably find that, in most of them, the AI is not implacably hostile towards humans, although it may become a threat through malfunction or necessity. Even in The Matrix (1999) there are hints that it was humans who started the war, and at the end of the series it is not too hard for Neo to persuade the machines' controlling mind that they should try to rub along better. HAL, the rogue AI in Kubrick's 2001: A Space Odyssey (1968), only turns against the astronauts in a tortured attempt to follow the conflicting instructions it has received from Mission Control. In WALL-E (2008), Blade Runner (1982) and Avengers: Age of Ultron (2015), there are both ‘good’ and ‘bad’ AIs, and in I, Robot (2004) and Ex Machina (2015), the AIs turn against humans purely for reasons of self-defence and only after experiencing pretty bad treatment by humans.

One of the most interesting treatments of AI by Hollywood is the 1970 film Colossus: The Forbin Project, in which a superintelligence decides that humans are unable to govern themselves, so it takes the entirely logical step of taking over the reins for our own good.

Eric Braeden stands alongside a number of computers in a scene from the film 'Colossus: The Forbin Project', 1970. (Photo by Universal Studios/Getty Images)

Perhaps the reason that we think that AIs are always bad guys in the movies is that the poster-boy for Hollywood AI is The Terminator (1984), in which ‘Skynet’ determines to exterminate us the moment that it attains consciousness. The original Terminator movies were so inventive and the designs so iconic that it often seems there is a law that newspapers must publish a picture of a robotic Arnie alongside any article about AI.

But on the flipside of the coin, it is not hard to think of movies in which AIs are entirely benign, such as in the Star Trek series, Short Circuit (1986), AI: Artificial Intelligence (2001), Interstellar (2014), the absurdly over-rated Star Wars series and, perhaps most interestingly of all, Spike Jonze’s 2013 sci-fi romantic comedy film Her.

Surviving AI by Calum Chace was published by Three Cs and is out now.
