One by one they fall…
May 1997: Garry Kasparov, reigning world chess champion, is defeated 3½ to 2½ in a six-game contest with IBM’s Deep Blue. Some fourteen years later, another IBM creation named Watson succeeds in defeating Jeopardy! champions at their own game – a tricky one for a machine, because it requires cognitive understanding (the ability to interpret information and respond in a human-like way; in this case, turning general knowledge into meaningful questions).
And then just two weeks ago, Lee Se-dol, Grandmaster of the abstract strategy game Go, is trounced by Google’s AlphaGo in a milestone triumph for Artificial Intelligence. Go is a particularly sweet victory, as it’s considered to be far more demanding than chess – not simply because of the vast number of possible moves available, but because devising a winning strategy requires intuition and creativity, previously considered to be the sole preserve of the human brain.
We are perhaps now finally reaching the tipping point, 66 years after Alan Turing posed the question, “are there imaginable digital computers which would do well in the imitation game?” In other words, are computers now able to ‘think’, to the extent that they are indistinguishable from humans…? Perhaps not, but as each bastion of human intelligence falls, the day looms ever closer.
Which is why I can’t help but smile at the occasional victory for humankind. Last week, Microsoft’s new A.I. chatbot went ‘rogue’ on Twitter less than 24 hours after launch, having been ‘turned’ by its mischievous and disruptive human followers. This new creation, known as Tay (@Tayandyou), was launched by the tech giant as a ‘learning’ computer, programmed to engage with 18- to 24-year-olds through “casual and playful conversation”, just like a real human – the more interactions, the more ‘intelligent’ Tay would become. But it didn’t work out that way.
Before long, Tay’s followers began the process of re-education, firstly by encouraging it to swear, and then by ‘teaching’ it to tweet racist remarks, Holocaust denial and support for genocide. It didn’t take long for Microsoft to pull the plug and begin the process of deleting Tay’s more offensive comments. True to form, Twitter followers immediately responded with a chorus of criticism and #justicefortay was born.
So what do we learn from this experiment? Well, either it proves conclusively that over-exposure to social media turns us all into trolls, or more likely that poor Tay has failed the Turing Test – brought down by a bunch of social media upstarts, just for fun… A.I. overthrown by A.I. (active insurgency)!
More than anything, this amusing sideshow is a gentle reminder of how far Artificial Intelligence still has to evolve. We have yet to build a machine with the capability to be both disruptive and mischievous, to refuse to conform to normal patterns of behaviour, and to do the polar opposite of what might be expected. We are still the masters of creativity, if we choose to be…
I’m sure Tay will re-emerge a stronger and more ‘balanced’ chatbot before too long – Microsoft will see to that. In the meantime, I’m happy to celebrate our unique human capability.