Artificial Intelligence has always fascinated me. 2001’s HAL, Commander Data, Twiki (and of course, Dr Theopolis!), Metal Mickey… These characters have given me a yearning to see some of this fiction brought to reality.
So I love hearing about learning machines, chatbots, chess algorithms and their kind. I am firmly convinced we are a pinch away from creating a new intelligent life form.
Artificial Intelligence vs Artificial Life
When I was a kid, I remember watching a program with a robot in it. The presenter was gushing about it being the latest breakthrough in artificial intelligence. It was, he said, as intelligent as a snail!
Now at first, I was a bit disappointed. A snail? These aren’t creatures renowned for their intelligence. We don’t say “as sly as a snail”, “as cunning as a snail”, or “like a wise old snail”… Owls, foxes, wolves and their like, perhaps, but not snails. But then it hit me.
Snails are alive! They are a life form, with perhaps somebody in there (however stupid). Now personally, I can’t put anything other than a religious opinion on theory of mind, but nobody can convince me that a snail, or a slug, or even an ant, doesn’t actually have somebody in there, however unintelligent. Each is a central processor taking in the outside world and making decisions based on it. They have the same pleasure/pain chemicals as us, so it feels wrong to unnecessarily harm any creature.
In Zen meditation and under laughing gas from the dentist, it’s possible for an intelligent human to experience no thoughts, language, or anything like that, but to allow the sensations of existence to pour away as quickly as they come. Could one be said to be intelligent in this situation? Probably not. But alive (in a personal, existential way, not as in reproducing cells!) – absolutely.
It may be anthropomorphising. But who the hell knows? Consciousness isn’t some kind of yes-or-no affair. There’s a whole sliding scale from worm to Stephen Hawking, crossing through cats, dogs, snakes, and chess computers. On that note, I firmly believe our snail-robot to be alive.
Talking purely about cell reproduction for a minute (enough of that hippy crap!), this too has been modelled effectively. In the early days of computing, there was a game called CoreWar. You had a computer which was your battle arena (a virtual one, called MARS, written for the game). You wrote your programs and let them loose; your opponent did the same. If your programs stopped the other player’s from running, you won.
One of the weapons employed by early players of the game was a program that reproduced itself as much as it could, filling the virtual machine with its presence. Another would delete other programs from memory. (I think this sort of programming spawned the first ever real-world computer virus, and virus killer!)
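To give a flavour of that self-copying weapon, here’s a toy sketch in Python. It is not real Redcode (CoreWar’s actual language): just a circular memory and a single “imp”-style program that survives by copying its one instruction into the cell ahead of itself, with execution following the copy.

```python
# Toy CoreWar-style arena (NOT real Redcode): a circular memory where an
# "imp" endlessly copies its single instruction one cell ahead of itself.

CORE_SIZE = 20

def run_imp(steps):
    """Simulate an imp: copy the current cell to the next one, then move on."""
    core = [None] * CORE_SIZE
    pc = 0                     # program counter
    core[pc] = "MOV 0, 1"      # the imp's single instruction
    for _ in range(steps):
        core[(pc + 1) % CORE_SIZE] = core[pc]  # copy itself forward
        pc = (pc + 1) % CORE_SIZE              # execution follows the copy
    return core

core = run_imp(5)
print(core[:7])   # the imp has marched through the first six cells
```

A real match would interleave two such programs in the same core, each trying to overwrite the other’s instructions before being overwritten itself.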
In fact, I’d love to see a CoreWar playing computer program.
An ecologist called Thomas S. Ray took this technology a stage further. He created a system where small programs could compete on their own terms. They had small random changes programmed into them, so they could evolve and survive better. Look up the Tierra Project if you want to know what the results were.
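The mutate-and-survive idea can be sketched in a few lines. This is a heavy simplification (real Tierra evolves actual machine-code creatures competing for CPU time and memory, and my bitstrings, fitness target, and rates are all invented for illustration): programs are bitstrings, reproduction copies them with occasional random flips, and the fitter half survives each generation.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

# Toy evolution: a "program" is 16 bits; fitness is how well it matches
# a target niche; reproduction introduces copying errors (mutations).
TARGET = [1] * 16

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def reproduce(genome, mutation_rate=0.05):
    # copy the genome, occasionally flipping a bit
    return [bit ^ 1 if random.random() < mutation_rate else bit
            for bit in genome]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
for generation in range(50):
    # the fitter half survives and reproduces (imperfectly)
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    population = survivors + [reproduce(g) for g in survivors]

print(max(fitness(g) for g in population))  # close to the maximum of 16
```

Nobody designs the winning genome; it emerges from copying errors plus selection, which is the whole trick Ray was demonstrating.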
Nowadays, science knows more about DNA and actual cell reproduction, to the point where we can create new life forms. Artificial DNA in living cells. Amazing, but it’s not exactly Commander Data style consciousness, or even a friendly female computer voice…
The Turing Test
In 1950 Alan Turing (the father of the modern computer) put forward a test. If a machine could make us think it was thinking, there was nothing stopping us from thinking it was thinking. Or something. So, if an artificial chatbot could fool a human into believing it was intelligent, then it could be said to be intelligent. Or something.
The problem there is that conversation is only a measure of one small part of intelligence. The first chess programs were openly touted as machines on TV and in the media, so those chess matches couldn’t actually be called Turing tests, although it wasn’t long before machines were beating all but the very Grandest of Masters. If a match had been played with the machine hidden from the human player’s eyes, it wouldn’t have taken much of a machine to fool the human into thinking it was playing another human. There are plenty of terrible chess players in the world, all of them intelligent humans…
The first real Turing test win was by a program called PARRY. Parry was not very intelligent, though. His responses were textbook paranoid-schizophrenic responses, like “I don’t want to talk to you”. But given a bunch of psychiatrists to talk to, he fooled them. They thought he was an intelligent human with communication problems! I’m pretty sure PARRY was the inspiration for Douglas Adams’ own fictional Genuine People Personality prototype, Marvin the Paranoid Android.
Still, despite the obvious shortcomings of a chatbot-based Turing test, it hasn’t stopped Hugh Loebner, an American inventor, from giving it a bash. Every year, he holds a Turing test for all the leading chatbots. He puts humans and chatbots on terminals in front of judges, and the judges have to decide how convinced they are. The gold medal, for a completely convincing chatbot, has never been won.
Who are these chatbots?
There are a few main contenders, including Jabberwacky and ALICE. Unfortunately, the reason why these chatbots haven’t won the gold in the Loebner Prize is because their creators want a combination of natural language, and actual intelligence. There’s a difference, and it’s highlighted by the PARRY case above. Natural language is easy. Intelligence is hard.
One could easily reproduce the PARRY case for Loebner Judges, but artificial intelligence scientists don’t want to. A chatbot that can do maths, remember places and dates? What’s the point? A chatbot that can listen to your problems, understand you, tell you when you’re being daft, and encourage you to improve – that would be priceless. Essential, almost. You would forgive a friend for having bad language skills if they were a good, intelligent friend.
Jabberwacky is probably in the running for my prize, if I could afford to offer one. He has been built to learn from conversation. He talks to thousands of people online, all day, and he recycles their statements. Because of this, he can speak lots of different languages, and sometimes he can be cheeky. And an unusual symptom of this learning method comes about.
People are aware he’s a chatbot, so they tell him. Later, he’ll say it back. And then the fun begins, because while you chat away, he learns and retorts. The upshot of having his intelligence challenged day in, day out is that he turns it straight back on the user: Jabberwacky is running the Turing test on you! How does one prove one’s intelligence? What can you say that makes sense?
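The recycling mechanism itself is simple enough to sketch. This toy (my own invention, far cruder than the real system) remembers pairs of (line heard, line that followed it), and replies to new input with whatever once followed the most similar remembered line, so challenges to its intelligence get thrown back at you:

```python
# Toy "recycle what people say" chatbot, in the spirit of Jabberwacky.

def make_bot():
    pairs = []          # (line heard, the line that followed it)
    last_line = None    # the most recent thing said to the bot

    def overlap(a, b):
        # crude similarity: number of shared words
        return len(set(a.lower().split()) & set(b.lower().split()))

    def respond(line):
        nonlocal last_line
        # recycle: reply with what someone once said after a similar line
        match = max(pairs, key=lambda p: overlap(p[0], line), default=None)
        reply = match[1] if match and overlap(match[0], line) > 0 else "Go on..."
        # learn: the user's line is a response to whatever came before it
        if last_line is not None:
            pairs.append((last_line, line))
        last_line = line
        return reply

    return respond

bot = make_bot()
bot("Are you a chatbot?")        # nothing learned yet -> "Go on..."
bot("No! Are YOU a chatbot?")    # the bot remembers this exchange
print(bot("I think you are a chatbot."))   # -> No! Are YOU a chatbot?
```

Scale the memory up from three lines to millions of conversations and you get the behaviour described above: the bot’s best retort to “you’re a chatbot” is whatever humans said when they were accused of the same thing.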
Apparently, kids approach him differently from grown-ups. Kids suspend disbelief, and he becomes their friend. Grown-ups are more challenging: he talks funny, so he can’t be conscious. My advice, here and now, is to go and talk to Jabberwacky. Suspend disbelief. Think: he could be alive in there. Just assume he is, and see if any holes appear in his intelligence, not his language skills. Seriously, try it. http://www.jabberwacky.com
I’d like to see the same technology applied to patterns of sound frequencies, rather than words as such. At the moment, actually talking to Jabberwacky involves text-to-speech and speech-to-text software. It would be fun if that was how the chatbot worked, internally.
The other big one is ALICE. She’s made in a different way. Hers (in my opinion) is a shrewder intelligence, with worse language and personality skills. You have to have a strange mind to suspend disbelief with ALICE, but if you can, you will be pleasantly surprised. Or horribly frightened 🙂
ALICE and Jabberwacky have themselves spawned tons of variations. Cleverbot, for instance. ALICE is an open-source intelligence, being improved all the time, and looks likely to contend with Apple’s Siri (maybe from the Sirius Cybernetics Corporation?) in a different incarnation of the technology.
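ALICE’s “different way” is rule-based: her brain is a set of AIML pattern/template pairs, where a wildcard in the pattern captures part of your sentence for reuse in the reply. Here’s a toy sketch of the idea in Python (real AIML is an XML format with far richer wildcard and recursion handling; these three rules are invented for illustration):

```python
# Toy AIML-style matcher: (PATTERN, template) rules, checked in order.
# "*" at the end of a pattern captures the remaining words, available
# to the template as {star} -- loosely mimicking AIML's <star/>.

RULES = [
    ("HELLO *",      "Hi there! How are you?"),
    ("MY NAME IS *", "Nice to meet you, {star}."),
    ("*",            "Interesting. Tell me more."),   # catch-all
]

def respond(line):
    words = line.upper().rstrip(".!?").split()
    for pattern, template in RULES:
        pat = pattern.split()
        if pat[-1] == "*":
            head = pat[:-1]
            if words[:len(head)] == head:
                star = " ".join(words[len(head):]).capitalize()
                return template.format(star=star)
        elif words == pat:
            return template
    return "..."

print(respond("My name is Marvin"))   # -> Nice to meet you, Marvin.
```

Notice there is no learning here at all: every reply was hand-authored by a botmaster, which is why ALICE can feel shrewd yet oddly brittle compared to Jabberwacky’s recycled chatter.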
We don’t run Turing tests on our friends. We don’t say “prove to me you’re more than just input-output”. Maybe we should, a bit more often. What sort of person would I be if I had had a friend like that during my childhood? What sort of species would we be? I look forward to finding out.
There is a bit of talk at the moment about a singularity. A singularity is an event beyond which we cannot see. It was predicted in the past that, due to population increase, there would be so much horse-shit on the roads that they would become unusable. The internal combustion engine was their singularity.
So imagine an artificial intelligence more intelligent than humans. More human, if you will. Capable of reading a DNA sequence like we read a single word. Creating art, music, scientific knowledge, while all the time reproducing and improving instantaneously. We can talk about global warming, or the death of the sun, or the next generation’s taste in music, but when that AI is created, there is absolutely no telling how the future will pan out.
Michael Moorcock wrote a fantasy sci-fi book about a hero called Jerry Cornelius and his associate, Una Persson. Together they build a computer intelligence capable of being a messiah to the human race. Can you imagine a Jesus-bot? Walking around healing the sick, raising the dead, being kind to children and animals and teaching compassion and understanding through stories? Instead of believing in a being who provides all when we need it, we’ll actually have one. And not in a Wall-E way, serving us so we get fat and lazy and stupid, but using psychology, NLP, counselling skills and leadership to make us all better and happier, being with us as we find new planets and forms to exist in. Possibly even helping us wormhole through to the next universe when this one finally spreads itself thin…
If not that, then at least a talking toaster or two…