Henry learns that Artificial Intelligence is about Golems. Yeah, it’s all definitely about Golems.
This Friday I was dragged to the Barbican’s AI: More Than Human exhibit. It’s about our relationship with artificial intelligence, focusing on the evolution of AI as a concept and its current implications. It’s running until 26 August 2019.
Entry’s steep at £15 and it was totally packed.
For an exhibition about the destruction of social norms through technology, there probably couldn’t have been a better venue. Poor phone reception’s our best defence against the singularity spreading.
But remarkably, the most interesting thing about the exhibition wasn’t the technology – it was the narrative.
But I thought Artificial Intelligence meant no more tears (reading)?
From the get-go, the exhibition goes hard on establishing where AI came from. It implies that today’s examples of artificial intelligence (chatbots smart enough to ignore attempts at tomfoolery) are the embryonic realisation of humanity’s long-standing desire to imbue the inanimate with life.
You know, so we don’t have to do stuff that we don’t want to do, or demean ourselves by paying people to do stuff we don’t want to do.
It argues that the first dreams of electric sheep were our own fantasies of mysticism – the Judaic legend of the Golem (a lot of the exhibit’s devoted to golems, I really don’t know why) and the Shinto belief that inanimate objects have souls (otherwise known as the historical obsession with giving objects faces – plush chocolate ice cream emoticons – how far we’ve come).
Throughout, it’s easy to get the impression that the curators were trying to inspire fear, disgust and mild panic. If they were, it definitely worked.
Next to the entrance you’re subjected to looped reels of familiar sci-fi scenes, all depicting the dire consequences of non-human intelligence, from the astronaut-murdering AI in 2001: A Space Odyssey, to scenes from Doctor Who (what happens when you let robots write shit for TV), and a clearly-phoned-in-to-fit-the-narrative scene from that Simpsons episode with the Golem (do you even remember that one?).
After passing a table of more Golems and a projection of a video game that utilises AI to procedurally generate greenery (as if SpeedTree hasn’t existed for years), it moves from the concept of the mystical into practical science; specifically alchemy, mathematics and psychology – as if those things weren’t just made up.
You get to jump from the philosopher’s stone, to how people in China and Japan actually had their own numeral systems (who knew that maths wasn’t invented in England?), and finally to a really big wall chart explaining the concept of the uncanny valley.
There’s a lot of emphasis on the uncanny valley. You know, the psychological concept coined by Masahiro Mori (and brought into English by Jasia Reichardt) about how humans are pre-programmed to experience disgust when faced with androids that look almost, but not quite, human. It’s made all the more relatable (and less serious) with deliberate linkages to fictional horror, with displays devoted to Mary Shelley’s Frankenstein and Hoffmann’s The Sandman nearby.
This all gets the point across well, but it can feel like you’re being led.
Scaling the uncanny valley
I’d like to do a really scientific survey to prove whether the uncanny valley’s real.
There’s also this low, pulsating track playing throughout, which adds to the sense of unease – and to my argument that the whole exhibit is trying to manipulate your emotions.
If you manage to get this far (and have a soul), you’ll probably feel like you hate AI.
Getting serious about computer science
As soon as your emotions have been appropriately toyed with, the exhibit gets all serious about computer science.
There’s a sonnet penned by the grandmother of computing, Ada Lovelace (a sonnet?) and a replica Turing Machine. There’s also a bunch of wall monitors that explain the history of computer science and provide a timeline of the long and interesting past of AI grant funding (BORING).
It’s strange though, I don’t recall the exhibition offering a clear definition of what artificial intelligence actually is.
Maybe that’s because there isn’t a very good definition, at least for Luddites like you and me.
But it’s ok, I think I managed to cook one up myself. It’s pretty simple:
- Computers that don’t have the gift of artificial intelligence are like those people that you manage at work who require step-by-step lists to prevent the unintentional loss of fingers.
- Computers that have artificial intelligence are the ones you can give high-level objectives to, and that are creative enough to have ideas worth stealing.
Anyway, it then moves on to a lot of examples of the great achievements of AI today:
- From the Sony robot dog (why would anyone want a dog that’s not fluffy? – Sony, do you want to hire me? I think I just fixed your robot dog)
- Some chips from Deep Blue, and
- A mechanical arm that likes to play Go. (I mean, if it was truly intelligent, would that arm really be playing Go? I think it’d be more into Shake Weight.)
Towards the end, you’re presented with both positive and negative applications of AI, as if you’re meant to decide whether you want AI to come to your party or whatever.
Good applications included hypothetical robotic bees (because nothing says good like letting all the bees die?)
Bad applications included the Chinese government’s planned use of artificial intelligence to deliver a social credit rating system, which unfortunately wasn’t explained as well as it could have been (there’s a decent Wired article on it here – turns out it’s just the communist version of Experian).
So, if the good things are bad and the bad things are just really boring, is the answer that we shouldn’t really be worrying about artificial intelligence and instead about how awful humanity is?
Artificial Intelligence is more about humans than machines
While the imagined consequences of artificial intelligence can be frightening (e.g. the neo-Stasi, or actual autonomous weapons systems), it’s still just computer programs doing things that humans want to do.
I guess that would change when machines have the capacity to set their own objectives, but if we don’t have the imagination to do anything better than reenact the plot of WarGames, how likely is it that we’ll get there?
Instead, it made me think that the scariest thing about artificial intelligence is its potential to make administration really efficient and to rob a lot of fun from the world (inspiring social homogenisation).
And that made me think that one of the main things that the exhibition did wrong was that it applied human characteristics to machines, rather than the characteristics of machines to humans.
If it had been inverted, I believe the exhibition would have forced more of its visitors to reflect on their own humanity.
Like, isn’t it funny how we don’t actually know what our hands look or feel like? We just have some weird image in our brain, conjured from a solution of chemicals and electrical pulses.
So yeah, the exhibition was alright. But it delivered its message the wrong way round and had way, way, way too many Golems.