Artificial Intelligence is Hype

Praise for AI is creeping into more and more TED Talks, YouTube videos and news feeds with each passing week.

The question is, or should be, how much of the hype surrounding artificial intelligence is warranted?

“For 60 years scientists have been announcing that the great AI breakthrough is just around the corner.  All of a sudden many tech journalists and tech business leaders appear convinced that, finally, AI has come into its own.”(1)

“We’ve all been seeing hype and excitement around artificial intelligence, big data, machine learning and deep learning. There’s also a lot of confusion about what they really mean and what’s actually possible today. These terms are used arbitrarily and sometimes interchangeably, which further perpetuates confusion.

So, let’s break down these terms and offer some perspective.

Artificial Intelligence

Artificial Intelligence is a branch of computer science that deals with algorithms inspired by various facets of natural intelligence. It includes performing tasks that normally require human intelligence, such as visual perception, speech recognition, problem solving and language translation. Artificial intelligence can be seen in many everyday products, from intelligent personal assistants in your smartphone to the Xbox 360 Kinect camera, which lets you interact with games through body movement. There are also well-known examples of AI that are more experimental, from the self-aware Super Mario to the widely discussed driverless car. Other less commonly discussed examples include the ability to sift through millions of images to pull together notable insights.

Big Data

Big Data is an important part of AI and is defined as data sets so large that they cannot be analyzed, searched or interpreted using traditional data processing methods. As a result, they have to be analyzed computationally to reveal patterns, trends, and associations. This computational analysis has, for instance, helped businesses improve customer experience and their bottom line by better understanding human behavior and interactions. Many retailers now rely heavily on Big Data to adjust pricing in near-real time for millions of items, based on demand and inventory. However, processing Big Data to make predictions or decisions like this often requires the use of Machine Learning techniques.

Machine Learning

Machine Learning is a form of artificial intelligence involving algorithms that can learn from data. Such algorithms operate by building a model from inputs and using that model to make predictions or decisions, rather than following only explicitly programmed instructions. Many basic decisions can be handled this way; Nest’s learning thermostat is one example. Machine Learning is widely used in spam detection, credit card fraud detection, and product recommendation systems such as those of Netflix or Amazon.
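A minimal sketch of that idea, a spam filter whose behavior comes entirely from example data rather than hand-written rules, fits in a few lines of Python (the training messages, labels, and word-overlap scoring scheme below are invented purely for illustration):

```python
from collections import Counter

# Toy training data: the model is built from examples, not hand-coded rules.
training = [
    ("win cash prize now", "spam"),
    ("free offer click now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the team", "ham"),
]

# "Training": count how often each word appears under each label.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in training:
    counts[label].update(text.split())

def classify(text):
    """Predict the label whose training-set words overlap most with the input."""
    scores = {label: sum(c[w] for w in text.split()) for label, c in counts.items()}
    return max(scores, key=scores.get)

print(classify("claim your free prize"))   # → spam
print(classify("agenda for the meeting"))  # → ham
```

Change the training examples and the predictions change with them; no line of the code ever states what spam looks like, which is the distinction being drawn above.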

Deep Learning

Deep Learning is a class of machine learning techniques that construct numerous layers of abstraction to map inputs to classifications more accurately. The abstractions made by Deep Learning methods are often described as human-like, and the big breakthrough in this field has been the scale of abstraction that can now be achieved, which in recent years has produced leaps in computer vision and speech recognition accuracy. Deep Learning is inspired by a simplified model of the way neural networks are thought to operate in the brain.
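To make the “layers of abstraction” concrete, here is a toy two-layer network in plain Python. The weights are made up for illustration; in real deep learning systems, millions of them are learned from data rather than written by hand:

```python
import math

def layer(inputs, weights):
    """One layer: each unit takes a weighted sum of all inputs,
    squashed through a nonlinearity (the classic sigmoid)."""
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

hidden_weights = [[2.0, -1.0], [-1.5, 2.5]]   # layer 1: low-level features
output_weights = [[3.0, -3.0]]                # layer 2: final decision

def network(x):
    # Stack the layers: each one re-describes the previous layer's output.
    return layer(layer(x, hidden_weights), output_weights)[0]

print(network([1.0, 0.0]))  # score above 0.5: "class A"
print(network([0.0, 1.0]))  # score below 0.5: "class B"
```

Each layer re-describes the previous layer’s output, which is the sense in which depth builds abstraction on abstraction.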

No doubt AI is in a hype cycle these days. Recent breakthroughs in Distributed AI and Deep Learning, paired with the ever-increasing need for deriving value from huge stashes of data being collected in every industry, have helped renew interest in AI.”(5)

Human levels of understanding? Really?

How much of an AI breakthrough has humanity actually achieved, as opposed to wishful thinking?

“Gary Marcus, a psychology professor at New York University, who writes about artificial intelligence for the New Yorker, was the first to burst the balloon. He told Geektime that while the coalescence of parallel computation and big data has led to some exciting results, so-called ‘deeper algorithms’ aren’t really that much different from two decades ago.

In fact, several experts concurred that doing neat things with statistics and big data (which accounts for many of the recent AI “breakthroughs”) is no substitute for understanding how the human brain actually works.

“Current models of intelligence are still extremely far away from anything resembling human intelligence,” philosopher and scientist Douglas Hofstadter told Geektime.

But why is everyone so excited about computer systems like IBM’s Watson, which beat the best human players on Jeopardy! and has more recently been diagnosing disease?

“Watson doesn’t understand anything at all,” said Hofstadter.  “It is just good at grammatical parsing and then searching for text strings in a very large database.”

Similarly, Google Translate understands nothing whatsoever of the sentences that it converts from one language to another, “which is why it often makes horrendous messes of them,” said Hofstadter.”(1)

“In narrow domains like chess, computers are getting exponentially better.

But in some other domains, like strong artificial intelligence, general artificial intelligence, there’s  been almost no progress.

Not many people are fooled into thinking that Siri is an example of general artificial intelligence.

We were promised Rosie the robot and got Roomba, which wanders the room and tries not to bump into anything.

AI actually nearly died in 1973. In Britain, the Lighthill Report, compiled by James Lighthill for the British Science Research Council, evaluated the academic research in the field of artificial intelligence.

The report said that artificial intelligence worked only in narrow domains, was unlikely to scale up, and would have limited applications. It led, essentially, to the end of funding for British AI research. The period that followed was called the first AI winter.

Current systems are still narrow. You have chess computers that can’t do anything else, driverless cars that can’t do anything else. There are language translators that are really good at translating languages but not perfect, often having problems with syntax, and they can’t actually answer questions about what they’re translating.

What you end up having in AI is a community of idiot savants: special-purpose programs that do one thing but aren’t general.

Watson is probably the most impressive in some ways, but as with most artificial intelligence systems that actually work, there’s a hidden restriction that makes the problem easier than it looks. When you look at Watson, you think it knows everything and can look anything up really quickly, but it turns out that 95% of the Jeopardy! questions it’s trying to answer are the titles of Wikipedia pages. It’s basically searching Wikipedia pages; it seems like a general intelligence, but it isn’t one.

IBM is still struggling to figure out what to do with it.

Your average teenager can pick up a new video game after an hour or two of practice or learn plenty of other skills.

The closest we have to that in AI is the company DeepMind, which Google bought in 2014. It’s a system that can do general-purpose learning of a limited sort. It’s actually better than humans at a few video games.

We’re still a long way from machines that can master a wide range of tasks, understand something like Wikipedia, and learn for themselves.

We were promised human level intelligence and what we got were things like ‘key word’ searches. Anyone who’s done searches on Google has run into the limitations of this level of processing.

The trouble with Big Data is that it’s all correlation and no causation:

You can always go and find correlations, but just finding correlations, which is what Big Data does, if it’s done in an unsophisticated way, doesn’t necessarily give you the right answer.

It’s important to realize that children don’t just care about correlations. They want to know why things are correlated:

Children are asking questions. Big Data is just collecting data.

AI’s roots were in trying to understand human intelligence. Hardly anybody talks about human intelligence anymore. They just talk about getting a big database and running a particular function on it.”(2)
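The correlation trap described above is easy to demonstrate: scan enough columns of pure noise against a target and a “strong” correlation turns up by chance. The sketch below uses only the Python standard library; every number in it is random, so there is no causation anywhere for the correlation to reflect:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def corr(xs, ys):
    """Pearson correlation of two equal-length lists (pure stdlib)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 200 variables of pure noise, 10 observations each: nothing causes anything.
data = [[random.random() for _ in range(10)] for _ in range(200)]
target = [random.random() for _ in range(10)]

best = max(data, key=lambda xs: abs(corr(xs, target)))
print(f"strongest 'correlation' found in pure noise: {corr(best, target):.2f}")
```

An unsophisticated Big Data pipeline running the same scan over real measurements would happily report that “finding” as an insight.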

“In Marcus’ view, the only route to true machine intelligence is to begin with a better understanding of human intelligence. Big data will only get you so far, because it’s all correlation and no causation.

When children begin learning about the world, they don’t need big data. That’s because their brains are understanding why one thing causes another. That process only requires “small data,” says Marcus. “They don’t need exabytes of data to learn a lot.”

“My 22-month old is already more sophisticated than the best robots in the world in digging through bad toys and finding something new.”

Marcus offers several examples of aspects of human intelligence that we need to understand better if we want to build intelligent machines. For instance, a human being who looks at the following picture will be able to guess what happens next:

No machine can.”(1)

Below is an image of a goose on a lake; there’s a detail in it that looks like a car:

“If you take a Deep Learning algorithm and have it look at a picture like this, it might produce what’s called a false alarm. It might say it sees a duck, and it sees a car there too. You, as a human being, know that there’s not a car in the lake. So, if you have common sense, you use that as part of your analysis of the image and you don’t usually get fooled.

Try doing a search on the following: ‘Which is closer, Paris or Saturn?’, and see what you get for search results. Any child should be able to answer that question, but with most search engines you will just get various links to information about Saturn and some links to information about Paris.

Natural Language

There’s a kind of sentence called a generic, such as ‘triangles have three sides’. What is meant by generic here is that triangles have three sides in general.

But it can be looser than that. For example, one can say ‘dogs have four legs’. Most dogs have four legs, but not all of them do; most people have seen three-legged dogs.

The point here is that you can make sense of that statement. You can read an encyclopedia and make inferences about it.

How about ‘ducks lay eggs’? Well, this isn’t even true of most ducks, since half of all ducks are male and don’t lay eggs, and some ducks are too young or too old or have a disorder and don’t lay eggs. So maybe only 30% of ducks actually lay eggs, but you understand it, you get it, you can think about it. We can make inferences even though we don’t have a statistically reliable truth.

Children are able to understand this but machines aren’t.

The field of AI is hyped up to be further along than it actually is. There’s been little progress on making genuinely smart machines. Statistics and Big Data, as popular as they are, are never going to get us all the way there by themselves.

The only route to true machine intelligence is going to begin with a better understanding of human intelligence.”(2)

The Ideology of AI

“If computers don’t actually even think in the human sense, then why do the media and high-tech business leaders seem so eager to jump the gun? Why would they have us believe that robots are about to surpass us?

Perhaps many of us actually want computers to be smarter than humans because it’s an appealing fantasy. If robots are at parity with humans, then we can define down what it means to be human — we’re just an obsolete computer program — and all the vexing, painful questions like why do we suffer, why do we die, how should we live? become irrelevant.

It also justifies a world in which we put algorithms on a pedestal and believe they will solve all our problems. Jaron Lanier compares it to a religion:

“In the history of organized religion,” he said, “it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.”

“That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else…contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, ‘Well, but they’re helping the AI, it’s not us, they’re helping the AI.’ The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.”(1)

The Mythic Singularity

“Why is religious language so pervasive in AI and transhumanist circles?

The odd thing about the anti-clericalism in the AI community is that religious language runs wild in its ranks, and in how the media reports on it. There are AI ‘oracles’ and technology ‘evangelists’ of a future that’s yet to come, plus plenty of loose talk about angels, gods and the apocalypse.

Ray Kurzweil, an executive at Google, is regularly anointed a ‘prophet’ by the media – sometimes as a prophet of a coming wave of ‘superintelligence’ (a sapience surpassing any human’s capability); sometimes as a ‘prophet of doom’ (thanks to his pronouncements about the dire prospects for humanity); and often as a soothsayer of the ‘singularity’ (when humans will merge with machines, and as a consequence live forever).

The tech folk who also invoke these metaphors and tropes operate in overtly and almost exclusively secular spaces, where rationality is routinely pitched against religion. But believers in a ‘transhuman’ future – in which AI will allow us to transcend the human condition once and for all – draw constantly on prophetic and end-of-days narratives to understand what they’re striving for.

From its inception, the technological singularity has represented a mix of otherworldly hopes and fears. The modern concept has its origin in 1965, when Gordon Moore, later the co-founder of Intel, observed that the number of transistors you could fit on a microchip was doubling roughly every 12 months. This became known as Moore’s Law: the prediction that computing power would grow exponentially until at least the early 2020s, when transistors would become so small that quantum interference would likely become an issue.
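The arithmetic behind that observation is simple compounding: doubling every 12 months multiplies the count by 2^n after n years. A quick sketch (the starting transistor count is purely illustrative, not a real chip’s figure):

```python
# Doubling every 12 months, per the 1965 observation quoted above:
# after n years the count has grown by a factor of 2 ** n.
start_year = 1965
start_count = 64  # illustrative starting count, not historical data

for year in (1975, 1985, 2005):
    n = year - start_year
    factor = 2 ** n
    print(f"{year}: {n} doublings, x{factor:,} -> {start_count * factor:,} transistors")
```

Ten doublings give a factor of 1,024; forty give roughly a trillion, which is why the trend cannot continue indefinitely on the same physics.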

‘Singularitarians’ have picked up this thinking and run with it. In Speculations Concerning the First Ultraintelligent Machine (1965), the British mathematician and cryptologist I J Good offered this influential description of humanity’s technological inflection point:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

These meditations are shot through with excitement but also the very old anxiety about humans’ impending obsolescence. Kurzweil has said that Moore’s Law expresses a universal ‘Law of Accelerating Returns’ as nature moves towards greater and greater order. He predicts that computers will first reach the level of human intelligence, before rapidly surpassing it in a recursive, self-improving spiral.

When the singularity is conceived as an entity or being, the questions circle around what it would mean to communicate with a non-human creature that is omniscient, omnipotent, possibly even omnibenevolent. This is a problem that religious believers have struggled with for centuries, as they quested towards the mind of God.

In the 13th century, Thomas Aquinas argued for the importance of a passionate search for a relationship and shaped it into a Christian prayer: ‘Grant me, O Lord my God, a mind to know you, a heart to seek you, wisdom to find you …’ Now, in online forums, rationalist ‘singularitarians’ debate what such a being would want and how it would go about getting it, sometimes driving themselves into a state of existential distress at the answers they find.

A god-like being of infinite knowing (the singularity); an escape of the flesh and this limited world (uploading our minds); a moment of transfiguration or ‘end of days’ (the singularity as a moment of rapture); prophets (even if they work for Google); demons and hell (even if it’s an eternal computer simulation of suffering), and evangelists who wear smart suits (just like the religious ones do). Consciously and unconsciously, religious ideas are at work in the narratives of those discussing, planning, and hoping for a future shaped by AI.

The stories and forms that religion takes are still driving the aspirations we have for AI. What lies behind this strange confluence of narratives? The likeliest explanation is that when we try to describe the ineffable – the singularity, the future itself – even the most secular among us are forced to reach for a familiar metaphysical lexicon. When trying to think about interacting with another intelligence, when summoning that intelligence, and when trying to imagine the future that such an intelligence might foreshadow, we fall back on old cultural habits. The prospect of creating an AI invites us to ask about the purpose and meaning of being human: what a human is for in a world where we are not the only workers, not the only thinkers, not the only conscious agents shaping our destiny.”(4)

Superior Intelligence and Rogue AI

“In Aristotle’s book, The Politics, he explains: ‘[T]hat some should rule and others be ruled is a thing not only necessary, but expedient; from the hour of their birth, some are marked out for subjection, others for rule.’ What marks the ruler is their possession of ‘the rational element’. Educated men have this the most, and should therefore naturally rule over women – and also those men ‘whose business is to use their body’ and who therefore ‘are by nature slaves’. Lower down the ladder still are non-human animals, who are so witless as to be ‘better off when they are ruled by man’.

So at the dawn of Western philosophy, we have intelligence identified with the European, educated, male human. It becomes an argument for his right to dominate women, the lower classes, uncivilized peoples and non-human animals.

Needless to say, more than 2,000 years later, the train of thought that these men set in motion has yet to be derailed.

The late Australian philosopher and conservationist Val Plumwood has argued that the giants of Greek philosophy set up a series of linked dualisms that continue to inform our thought. Opposing categories such as intelligent/stupid, rational/emotional and mind/body are linked, implicitly or explicitly, to others such as male/female, civilised/primitive, and human/animal. These dualisms aren’t value-neutral, but fall within a broader dualism, as Aristotle makes clear: that of dominant/subordinate or master/slave. Together, they make relationships of domination, such as patriarchy or slavery, appear to be part of the natural order of things.

According to Kant, the reasoning being – today, we’d say the intelligent being – has infinite worth or dignity, whereas the unreasoning or unintelligent one has none. His arguments are more sophisticated, but essentially he arrives at the same conclusion as Aristotle: there are natural masters and natural slaves, and intelligence is what distinguishes them.

This line of thinking was extended to become a core part of the logic of colonialism. The argument ran like this: non-white peoples were less intelligent; they were therefore unqualified to rule over themselves and their lands. It was therefore perfectly legitimate – even a duty, ‘the white man’s burden’ – to destroy their cultures and take their territory. In addition, because intelligence defined humanity, by virtue of being less intelligent, these peoples were less human. They therefore did not enjoy full moral standing – and so it was perfectly fine to kill or enslave them.

So when we reflect upon how the idea of intelligence has been used to justify privilege and domination throughout more than 2,000 years of history, is it any wonder that the imminent prospect of super-smart robots fills us with dread?

From 2001: A Space Odyssey to the Terminator films, writers have fantasized about machines rising up against us. Now we can see why. If we’re used to believing that the top spots in society should go to the brainiest, then of course we should expect to be made redundant by bigger-brained robots and sent to the bottom of the heap. If we’ve absorbed the idea that the more intelligent can colonize the less intelligent as of right, then it’s natural that we’d fear enslavement by our super-smart creations. If we justify our own positions of power and prosperity by virtue of our intellect, it’s understandable that we see superior AI as an existential threat.

This narrative of privilege might explain why, as the New York-based scholar and technologist Kate Crawford has noted, the fear of rogue AI seems predominant among Western white men. Other groups have endured a long history of domination by self-appointed superiors, and are still fighting against real oppressors. White men, on the other hand, are used to being at the top of the pecking order. They have most to lose if new entities arrive that excel in exactly those areas that have been used to justify male superiority.

I don’t mean to suggest that all our anxiety about rogue AI is unfounded. There are real risks associated with the use of advanced AI (as well as immense potential benefits). But being oppressed by robots in the way that, say, Australia’s indigenous people have been oppressed by European colonists is not number one on the list.

We would do better to worry about what humans might do with AI, rather than what it might do by itself. We humans are far more likely to deploy intelligent systems against each other, or to become over-reliant on them. As in the fable of the sorcerer’s apprentice, if AIs do cause harm, it’s more likely to be because we give them well-meaning but ill-thought-through goals – not because they wish to conquer us. Natural stupidity, rather than artificial intelligence, remains the greatest risk.”(3)

Consumers Don’t Want It

“2016 and 2017 saw “AI” being deployed on consumers experimentally, tentatively, and the signs are already there for anyone who cares to see. It hasn’t been a great success.

The most hyped manifestation of better language processing is chatbots. Chatbots are the new UX, many, including Microsoft and Facebook, hope. Oren Etzioni at Paul Allen’s Institute predicts it will become a “trillion dollar industry”. But he also admits, “my 4 YO is far smarter than any AI program I ever met”.

Hmmm, thanks Oren. So what you’re saying is that we must now get used to chatting with someone dumber than a four-year-old, just because they can make software act dumber than a four-year-old.

Put it this way. How many times have you rung a call center recently and wished that you’d spoken to someone even more thick, or constrained by processes even more incapable of resolving the dispute, than the minimum-wage offshore staffer you actually spoke with? When the chatbots come, as you close the [X] on another fantastically unproductive hour wasted, will you cheerfully console yourself with the thought: “That was terrible, but at least MegaCorp will make higher margins this year! They’re at the cutting edge of AI!”?

In a healthy and competitive services marketplace, bad service means lost business. The early adopters of AI chatbots will discover this the hard way. There may be no later adopters once the early adopters have become internet memes for terrible service.

The other area where apparently impressive feats of “AI” were unleashed upon the public was subtler. Unbidden, unwanted AI “help” is starting to pop out at us. Google scans your personal photos and later, if you have an Android phone, pops up “helpful” reminders of where you have been. People almost universally find this creepy. We could call this the “Clippy the Paperclip” problem, after the intrusive Office Assistant that only wanted to help. Clippy is going to haunt AI in 2017. This is actually going to be worse than anybody inside the AI cult quite realizes.

The successful web services so far are based on an economic exchange. The internet giants slurp your data, and give you free stuff. We haven’t thought closely about what this data is worth. For the consumer, however, these unsought AI intrusions merely draw attention to how intrusive the data slurp really is. It could wreck everything. Has nobody thought of that?

AI Is a Make Believe World Populated By Mad People

The AI hype so far has relied on a collusion between two groups of people: a supply side and a demand side. The technology industry, the forecasting industry and researchers provide a limitless supply of post-human hype.

The demand comes from the media and political classes, now unable or unwilling to engage in politics with the masses, who indulge in wild fantasies about humans being replaced by robots. This reflects a displacement activity: the professions are already surrendering autonomy in their work to technocratic managerialism. They’ve made robots out of themselves, and now fear being replaced by robots.

There’s a cultural gulf between AI’s promoters and the public that Asperger’s alone can’t explain. There’s no polite way to express this, but AI belongs to California’s inglorious tradition of generating cults, and incubating cult-like thinking. Most people can name a few from the hippy or post-hippy years – EST, or the Family, or the Symbionese Liberation Army – but actually, Californians have been at it longer than anyone realizes.

Today, that spirit lives on in Silicon Valley, where creepy billionaire nerds like Mark Zuckerberg and Elon Musk can fulfil their desires to “play God and be amazed by magic”, the two big things they miss from childhood. Look at Zuckerberg’s house, for example. What these people want is not what you or I want. I’d be wary of them running an after-school club.”(6)

Should We Be Afraid of AI?

“Suppose you enter a dark room in an unknown building. You might panic about monsters that could be lurking in the dark. Or you could just turn on the light, to avoid bumping into furniture. The dark room is the future of artificial intelligence (AI). Unfortunately, many people believe that, as we step into the room, we might run into some evil, ultra-intelligent machines. This is an old fear. It dates to the 1960s, when Irving John Good, a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing, made the following observation:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.

Once ultraintelligent machines become a reality, they might not be docile at all but behave like Terminator: enslave humanity as a sub-species, ignore its rights, and pursue their own ends, regardless of the effects on human lives.

If this sounds incredible, you might wish to reconsider. Fast-forward half a century to now, and the amazing developments in our digital technologies have led many people to believe that Good’s ‘intelligence explosion’ is a serious risk, and the end of our species might be near, if we’re not careful. This is Stephen Hawking in 2014:

The development of full artificial intelligence could spell the end of the human race.

Last year, Bill Gates was of the same view:

I am in the camp that is concerned about superintelligence. First the machines will do a lot of jobs for us and not be superintelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this, and don’t understand why some people are not concerned.

And what had Musk, Tesla’s CEO, said?

We should be very careful about artificial intelligence. If I were to guess what our biggest existential threat is, it’s probably that… Increasingly, scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, yeah, he’s sure he can control the demon. Didn’t work out.

The reality is more trivial. This March, Microsoft introduced Tay – an AI-based chat robot – to Twitter. They had to remove it only 16 hours later. It was supposed to become increasingly smarter as it interacted with humans. Instead, it quickly became an evil Hitler-loving, Holocaust-denying, incestual-sex-promoting, ‘Bush did 9/11’-proclaiming chatterbox. Why? Because it worked no better than kitchen paper, absorbing and being shaped by the nasty messages sent to it. Microsoft apologised.

This is the state of AI today. After so much talking about the risks of ultraintelligent machines, it is time to turn on the light, stop worrying about sci-fi scenarios, and start focusing on AI’s actual challenges, in order to avoid making painful and costly mistakes in the design and use of our smart technologies.

The current debate about AI is a dichotomy between those who believe in true AI and those who do not. Yes, the real thing, not Siri in your iPhone, Roomba in your living room, or Nest in your kitchen. Think instead of the false Maria in Metropolis (1927); HAL 9000 in 2001: A Space Odyssey (1968), on which Good was one of the consultants; C-3PO in Star Wars (1977); Rachael in Blade Runner (1982); Data in Star Trek: The Next Generation (1987); Agent Smith in The Matrix (1999) or the disembodied Samantha in Her (2013). You’ve got the picture. Believers in true AI and in Good’s ‘intelligence explosion’ belong to the Church of Singularitarians. For lack of a better term, disbelievers will be referred to as members of the Church of AItheists. Let’s have a look at both faiths and see why both are mistaken. And meanwhile, remember: good philosophy is almost always in the boring middle.

Singularitarians believe in three dogmas. First, that the creation of some form of artificial ultraintelligence is likely in the foreseeable future. This turning point is known as a technological singularity, hence the name. Both the nature of such a superintelligence and the exact timeframe of its arrival are left unspecified, although Singularitarians tend to prefer futures that are conveniently close-enough-to-worry-about but far-enough-not-to-be-around-to-be-proved-wrong.

Second, humanity runs a major risk of being dominated by such ultraintelligence. Third, a primary responsibility of the current generation is to ensure that the Singularity either does not happen or, if it does, that it is benign and will benefit humanity. This has all the elements of a Manichean view of the world: Good fighting Evil, apocalyptic overtones, the urgency of ‘we must do something now or it will be too late’, an eschatological perspective of human salvation, and an appeal to fears and ignorance.

Put all this in a context where people are rightly worried about the impact of idiotic digital technologies on their lives, especially in the job market and in cyberwars, and where mass media daily report new gizmos and unprecedented computer-driven disasters, and you have a recipe for mass distraction: a digital opiate for the masses.

Like all faith-based views, Singularitarianism is irrefutable because, in the end, it is unconstrained by reason and evidence. It is also implausible, since there is no reason to believe that anything resembling intelligent (let alone ultraintelligent) machines will emerge from our current and foreseeable understanding of computer science and digital technologies. Let me explain.

Sometimes, Singularitarianism is presented conditionally. This is shrewd, because the then does follow from the if, and not merely in an ex falso quodlibet sense: if some kind of ultraintelligence were to appear, then we would be in deep trouble.

Absolutely. But this also holds true for the following conditional: if the Four Horsemen of the Apocalypse were to appear, then we would be in even deeper trouble.

At other times, Singularitarianism relies on a very weak sense of possibility: some form of artificial ultraintelligence could develop, couldn’t it? Yes it could. But this ‘could’ is mere logical possibility – as far as we know, there is no contradiction in assuming the development of artificial ultraintelligence. Yet this is a trick, blurring the immense difference between ‘I could be sick tomorrow’ when I am already feeling unwell, and ‘I could be a butterfly that dreams it’s a human being.’

There is no contradiction in assuming that a dead relative you’ve never heard of has left you $10 million. That could happen. So? Contradictions, like happily married bachelors, aren’t possible states of affairs, but non-contradictions, like extra-terrestrial agents living among us so well-hidden that we never discovered them, can still be dismissed as utterly crazy. In other words, the ‘could’ is not the ‘could happen’ of an earthquake, but the ‘it isn’t true that it couldn’t happen’ of thinking that you are the first immortal human. Correct, but not a reason to start acting as if you will live forever. Unless, that is, someone provides evidence to the contrary, and shows that there is something in our current and foreseeable understanding of computer science that should lead us to suspect that the emergence of artificial ultraintelligence is truly plausible.

Here Singularitarians mix faith and facts, often moved, I believe, by a sincere sense of apocalyptic urgency. They start talking about job losses, digital systems at risk, unmanned drones gone awry and other real and worrisome issues about computational technologies that are coming to dominate human life, from education to employment, from entertainment to conflicts. From this, they jump to being seriously worried about their inability to control their next Honda Civic because it will have a mind of its own. How some nasty ultraintelligent AI will ever evolve autonomously from the computational skills required to park in a tight spot remains unclear. The truth is that climbing on top of a tree is not a small step towards the Moon; it is the end of the journey. What we are going to see are increasingly smart machines able to perform more of the tasks that we currently perform ourselves.

If all other arguments fail, Singularitarians are fond of throwing in some maths. A favorite reference is Moore’s Law. This is the empirical claim that, in the development of digital computers, the number of transistors on integrated circuits doubles approximately every two years. The outcome has so far been more computational power for less. But things are changing. Technical difficulties in nanotechnology present serious manufacturing challenges. There is, after all, a limit to how small things can get before they simply melt. Moore’s Law no longer holds. Just because something grows exponentially for some time, does not mean that it will continue to do so forever.
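To see why extrapolating such doubling cannot go on forever, it helps to run the arithmetic. The following is a back-of-the-envelope sketch, not a claim about any specific chip; the starting count of 2,300 transistors is the Intel 4004 of 1971, used purely as an illustrative baseline:

```python
def transistors_after(years, start=2300, doubling_period=2):
    """Project transistor counts under an idealized Moore's Law:
    the count doubles every `doubling_period` years."""
    return start * 2 ** (years // doubling_period)

# Forty years of uninterrupted doubling turns thousands into billions:
print(transistors_after(40))  # 2300 * 2**20 = 2411724800
```

Exponential growth looks unstoppable on paper, which is exactly the trap: a few more decades of the same doubling would demand features smaller than individual atoms, so the curve must flatten long before then.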

Singularitarianism is irresponsibly distracting. It is a rich-world preoccupation, likely to worry people in leisured societies, who seem to forget about real evils oppressing humanity and our planet.

Deeply irritated by those who worship the wrong digital gods, and by their unfulfilled Singularitarian prophecies, disbelievers – AItheists – make it their mission to prove once and for all that any kind of faith in true AI is totally wrong. AI is just computers, computers are just Turing Machines, Turing Machines are merely syntactic engines, and syntactic engines cannot think, cannot know, cannot be conscious. End of story.

AItheists’ faith is as misplaced as the Singularitarians’. Both Churches have plenty of followers in California, where Hollywood sci-fi films, wonderful research universities such as Berkeley, and some of the world’s most important digital companies flourish side by side. This might not be accidental. When there is big money involved, people easily get confused. For example, Google has been buying AI tech companies as if there were no tomorrow, so surely Google must know something about the real chances of developing a computer that can think, that we, outside ‘The Circle’, are missing? Eric Schmidt, Google’s executive chairman, fuelled this view when he told the Aspen Institute in 2013: ‘Many people in AI believe that we’re close to [a computer passing the Turing Test] within the next five years.’

The Turing test is a way to check whether AI is getting any closer to human intelligence. You ask questions of two agents in another room; one is human, the other artificial; if you cannot tell which is which from their answers, then the machine passes the test. It is a crude test. Think of the driving test: if Alice does not pass it, she is not a safe driver; but even if she does, she might still be an unsafe driver. The Turing test provides a necessary but insufficient condition for a form of intelligence. This is a really low bar. And yet, no AI has ever got over it. More importantly, all programs keep failing in the same way, using tricks developed in the 1960s.

Both Singularitarians and AItheists are mistaken. As Turing clearly stated in the 1950 article that introduced his test, the question ‘Can a machine think?’ is ‘too meaningless to deserve discussion’. This holds true, no matter which of the two Churches you belong to. Yet both Churches continue this pointless debate, suffocating any dissenting voice of reason.

True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work. This means that we should not lose sleep over the possible appearance of some ultraintelligence. What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions. The point is not that our machines are conscious, or intelligent, or able to know something as we do. They are not. There are plenty of well-known results that indicate the limits of computation, so-called undecidable problems for which it can be proved that it is impossible to construct an algorithm that always leads to a correct yes-or-no answer.
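The classic example of such an undecidable problem is the halting problem, and the diagonal argument behind it can be sketched in a few lines of code. The decider names below are hypothetical; the point is that no candidate decider, however clever, can judge every program correctly:

```python
def make_paradox(halts):
    """Given any claimed halting decider `halts(f) -> bool`,
    build a program that the decider must misjudge."""
    def paradox():
        if halts(paradox):
            while True:          # decider said "halts": loop forever
                pass
        return "halted"          # decider said "loops": halt immediately
    return paradox

# A candidate decider that answers "loops forever" for every program:
def always_says_loops(f):
    return False

p = make_paradox(always_says_loops)
print(always_says_loops(p))  # False: the decider predicts p never halts...
print(p())                   # 'halted': ...yet p halts immediately.
```

Whatever answer the decider gives about `paradox`, the program does the opposite, so every candidate decider is refuted by some program built from it. This is a hard mathematical limit on all Turing-equivalent machines, not an engineering shortfall.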

We know, for example, that our computational machines satisfy the Curry-Howard correspondence, which shows that proof systems in logic on the one hand and models of computation on the other are structurally the same kind of objects, so any logical limit applies to computers as well. Plenty of machines can do amazing things, including playing checkers, chess, Go, and the quiz show Jeopardy! better than us. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.

Quantum computers are constrained by the same limits, the limits of what can be computed (so-called computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine. The point is that our smart technologies – also thanks to the enormous amount of available data and some very sophisticated programming – are increasingly able to deal with more tasks better than we do, including predicting our behaviors. So we are not the only agents able to perform tasks successfully.

These are ordinary artifacts that outperform us in ever more tasks, despite being no cleverer than a toaster. Their abilities are humbling and make us reevaluate human exceptionality and our special role in the Universe, a role that remains unique. We thought we were smart because we could play chess. Now a phone plays better than a Grandmaster. We thought we were free because we could buy whatever we wished. Now our spending patterns are predicted by devices as thick as a plank.

What’s the difference? The same as between you and the dishwasher when washing the dishes. What’s the consequence? That any apocalyptic vision of AI can be disregarded.

The success of our technologies depends largely on the fact that, while we were speculating about the possibility of ultraintelligence, we increasingly enveloped the world in so many devices, sensors, applications and data that it became an IT-friendly environment, where technologies can replace us without having any understanding, mental states, intentions, interpretations, emotional states, semantic skills, consciousness, self-awareness or flexible intelligence. Memory (as in algorithms and immense datasets) outperforms intelligence when landing an aircraft, finding the fastest route from home to the office, or discovering the best price for your next fridge.

Digital technologies can do more and more things better than us, by processing increasing amounts of data and improving their performance by analyzing their own output as input for the next operations. AlphaGo, the computer program developed by Google DeepMind, won at the board game Go against the world’s best player because it could draw on a database of around 30 million moves and play thousands of games against itself, ‘learning’ how to improve its performance. It is like a two-knife system that can sharpen itself.
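This output-as-next-input loop is an old and entirely mundane idea, not a spark of mind. As a minimal, non-AI illustration (chosen only as an analogy for feedback-driven refinement), Newton’s iteration for √2 feeds each estimate back in to produce a better one:

```python
def refine(x):
    """One feedback step: the current estimate of sqrt(2)
    becomes the input that produces a better estimate (Newton's method)."""
    return (x + 2 / x) / 2

estimate = 1.0
for _ in range(6):
    estimate = refine(estimate)   # output becomes the next input

print(abs(estimate - 2 ** 0.5) < 1e-12)  # True: rapid convergence
```

Nothing here understands square roots; the procedure improves purely because its own output is recycled as input, which is the same structural trick behind self-play training, scaled up enormously.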

We are and shall remain, for any foreseeable future, the problem, not our technology.

So we should concentrate on the real challenges:

We should make AI environment-friendly. We need the smartest technologies we can build to tackle the concrete evils oppressing humanity and our planet, from environmental disasters to financial crises, from crime, terrorism and war, to famine, poverty, ignorance, inequality and appalling living standards.

We should make AI human-friendly. It should be used to treat people always as ends, never as mere means, to paraphrase Immanuel Kant.

We should make AI’s stupidity work for human intelligence. Millions of jobs will be disrupted, eliminated and created; the benefits of this should be shared by all, and the costs borne by society.

We should make AI’s predictive power work for freedom and autonomy. Marketing products, influencing behaviors, nudging people or fighting crime and terrorism should never undermine human dignity.

And finally, we should make AI make us more human. The serious risk is that we might misuse our smart technologies, to the detriment of most of humanity and the whole planet. Winston Churchill said that ‘we shape our buildings and afterwards our buildings shape us’. This applies to the infosphere and its smart technologies as well.

Singularitarians and AItheists will continue their diatribes about the possibility or impossibility of true AI. We need to be tolerant. But we do not have to engage. As Virgil suggests in Dante’s Inferno: ‘Speak not of them, but look, and pass them by.’ For the world needs some good philosophy, and we need to take care of more pressing problems.”(7)


(1) You’ve read the hype, now read the truth

(2) Web Summit 2014 Day One – Gary Marcus

(3) Intelligence: a history

(4) fAIth

(5) Myth Busting Artificial Intelligence

(6) ‘Artificial Intelligence’ was 2016’s fake news

(7) Should we be afraid of AI?
