
Will A.I. like Ultron spell the end of humanity?

If you want an idea of what our future might look like, you could go see Marvel’s new blockbuster superhero movie Avengers: Age of Ultron.

Recently Steve Wozniak, co-founder of Apple, said in no uncertain terms that computers would take over humanity. “No question,” is how he put it. “Will we be gods? Will we be the family pets?” he asked. “I don’t know about that.” But according to Wozniak, the day when artificial intelligence is in control is coming. Hopefully we’ll do better than the family pet.

The central conflict in Avengers: Age of Ultron is an accidental side effect of the quest for artificial intelligence, a side effect on the level Wozniak is talking about. Ultron is a sentient android with a long and complicated history that could only come from living in comic books. In Joss Whedon’s new film, however, Ultron (voiced by the incomparable-when-evil James Spader) is an A.I. brought to life by the actions of billionaire genius Tony Stark. Stark wants to create a “shield around the world” to protect humanity from an ever-increasing universe of threats. Unfortunately, once Ultron achieves sentience, he takes his prime directive (to achieve peace in our time) to hyperlogical, human-life-ending conclusions.

Wozniak (and Whedon) isn’t alone in this most dire of opinions regarding A.I. The world-famous physicist Stephen Hawking told the BBC last year that “the development of full artificial intelligence could spell the end of the human race.” Hawking and hundreds of other scientists at a recent A.I. conference signed an open letter warning of the existential dangers of A.I. Elon Musk, too, feels the heat. The billionaire engineer, inventor, and owner of Tesla Motors recently donated $10 million to the Future of Life Institute, a non-profit group working to “mitigate the existential risks to humanity” inherent in developing A.I.

In fact, there are a number of groups working to slow the progress of A.I. through regulation, mitigation, or an outright ban on killer robots. Yes, really: the Campaign to Stop Killer Robots was formed to oppose the development of autonomous weapons, a cause that cuts right to the heart of Stephen Hawking’s “end of humanity” warnings. Meanwhile, Cambridge University has established the Centre for the Study of Existential Risk, and Oxford the Future of Humanity Institute, both of which explore the species-level risks posed by technology and artificial intelligence.

Fritz Lang’s Metropolis (1927)

These aren’t sci-fi fan groups or fringe thinkers forming these organizations. Smart, prominent people are committing their time to this threat. The Campaign to Stop Killer Robots is a joint effort of five international NGOs, including Human Rights Watch. Actors Alan Alda and Morgan Freeman sit on the scientific advisory board of the Future of Life Institute, seats they share with Musk and Hawking. Hawking also advises the Centre for the Study of Existential Risk. The CSER posits that rapid technological development might lead to the loss, perhaps accidental, of “direct, short-term control” over “circumstances essential to our survival.” In other words, we might make something that takes control of our future. That would be a big oops, especially if killer robots are among the dangers.

All of which raises two questions. First: what exactly are Hawking and Wozniak and Tony Stark talking about when they say “artificial intelligence”? We know smartphones, smart watches, and smart thermostats. Our TVs and appliances have gone smart; smart cars are starting to show up on streets. At what point do these machines go from smart to intelligent? Second: if this intelligence threat is legitimate, and yet so many scientists are pursuing it anyway, what benefits do they see that could possibly outweigh such risks?

What We Talk About When We Talk About A.I.

Most of us probably have some mental image of what people are talking about when they’re talking about risks from A.I.: robots. Robots are everywhere in popular culture. Portrayals of A.I. in sci-fi are often complex and reach far beyond simple evil killer robots. A.I. has provided humans emotional fulfillment, as in Spike Jonze’s Her or Disney’s Big Hero 6. A.I. characters have been our friends, our enemies, even our slaves, as in the classic Blade Runner.

But, for the most part, sci-fi A.I. ends up looking a lot like Ultron: destructive and evil, if carrying a kind of seductive logic. Age of Ultron is just the latest in a long line of evil robot movies, dating back to the origins of cinema with Fritz Lang’s Metropolis, through the comic horror of The Stepford Wives, to the annihilating machine armies of The Matrix.

Does the prevalence of the homicidal, or even genocidal, robot theme lend weight to Hawking’s point?

In movies, the potential realities of A.I. reside somewhere in the far-off future. But the Campaign to Stop Killer Robots, the Future of Life Institute and the rest are already worrying about this threat. So what do we mean when we talk about artificial intelligence in the real world?

Part of why that question is hard to answer is that our sense of what makes machines intelligent keeps changing. Brian Coulombe, a director at x.ai, a personal-assistant A.I. program (like Spike Jonze’s A.I. creation in the 2013 Oscar-winning film Her, only more practical and less chatty), told me that the target keeps moving. Alan Turing, who kicked off the study of thinking machines in the 1950s, would surely look at a computer like Watson, which played the best Jeopardy players ever and wiped the floor with them, and consider that computer intelligent. Watson understood the game of Jeopardy and could process questions, find answers, and even provide them in the form of a question.

Surely Turing would be wowed by Watson. Today, though, Watson is regarded not as the achievement of A.I. but as just a big, fast search engine: input keywords in a question, output data. Is Watson intelligent or just a fancy computer? This is the moving target Coulombe means: once, A.I. was winning chess matches. Now, what? Jeopardy? Writing poetry? Reproducing? “Often we reach a goal, then we decide that wasn’t the actual target,” Coulombe told me over e-mail. So how do we define actual artificial intelligence?

I asked that question of Dr. Anita Raja, the Associate Dean of Research and Graduate Programs and Professor of Computer Science at the Albert Nerken School of Engineering at the Cooper Union. She said this: “A.I. is the scientific study of mechanisms underlying intelligent behavior, including perception, thinking and learning in computational terms.”

Dr. Raja’s is the best definition I’ve found; it lays out both the subject of study and the reason it makes folks nervous: what makes up intelligence (perception, thinking, learning) and how we apply that intelligence to computers. A.I. is not a thing, like Ultron the killer robot from Avengers, but the application of intelligence to a non-intelligent world.

Though scientists have not made much progress in replicating human consciousness, Raja says, those risks are real—among them, the development of “super intelligence” where machine intelligence surpasses that of humans.

Risks and Benefits

Like stem cell research, biotech, or nuclear physics, artificial intelligence is full of complex ethical questions that need addressing. Responsible conduct and foresight are crucial when engaging in such risky areas of research. But as Dr. Raja said, the fields with the most potential for human benefit tend to come with the greatest moral and ethical concerns.

Take geoengineering. Intentionally altering the planet’s chemical or physical environment could backfire in tremendous and unexpected ways. It carries great risk and should not be undertaken without first exhausting all other options. But should such a time come, according to proponents of geoengineering, we’ll need new technologies to address climate change, because the risk of unmitigated climate change is even greater.

“The existence of risk does not mean that one ceases to pursue science that has the potential for unprecedented benefits to humanity,” Raja said. She made clear that the ethical considerations (or doomsday scenarios, depending on your point of view) involved in A.I. are important to the field. “The study of A.I. safety has been ongoing for a long time and continues in the mainstream…These discussions will ensure that when the appropriate time comes” there will be sufficient safeguards in place for people’s safety and privacy.

So what does Dean Raja think are some of the greatest social benefits offered by A.I. research? For one, climate change. Many of the people I interacted with in A.I. fields talked about the relationship between A.I. research and nature, weather, and climate. “Significant inroads in machine learning, image classification, planning and coordination, software agent teams to solve problems…represent a bottom-up route to A.I.,” she said. Such inroads “have the potential to help with gathering and analysis of data to attain goals such as disease prediction, identification and prevention; disaster rescue; climate change,” and many of the other risks that face humanity in the 21st century.

Such research into A.I. is well underway. The American Meteorological Society holds an annual conference on “Artificial and Computational Intelligence, and its Applications to the Environmental Sciences,” now in its 13th year.

A 2007 National Science Foundation grant to the Ecoinformatics Collaboratory at the University of Vermont, Earth Economics, and Conservation International led to the creation of Artificial Intelligence for Ecosystem Services (ARIES), a web-based methodology that combines A.I. and modeling to improve the speed and effectiveness of environmental decision-making. ARIES allows users to “discover, understand, and quantify environmental assets,” and serves to improve water quality, storm and flood responses, and carbon storage and sequestration.

It’s not just the climate that A.I. is working on. Modernizing Medicine, a healthcare company, is bringing the benefits of A.I. research to medicine with its creation, schEMA. schEMA lives in IBM Watson’s digital ecosystem (the same Watson that dominated at Jeopardy); doctors can ask questions of schEMA, which searches the vast resources of Watson for clinically useful content. Not just finding stuff, but finding the right stuff.

It’s not all goodness and social justice. The military and defense applications of A.I., especially as it pertains to drone technology, are real, and cause for great concern. Those campaigns to stop killer robots are serious in their endeavor.

But let’s be honest. There is no shortage of existential risks facing our planet, including climate change, resource depletion, and disease, all of which A.I. might help us solve. Perhaps it’s time for a paradigm shift in the public imagination when it comes to how we perceive the advance of A.I. Not to erase the threat it poses (we should never ignore the risks that accompany science) but to complicate it, or (dare I say it) humanize it beyond the scope of sci-fi movies. Because maybe Ultron won’t try to kill us. Maybe he’ll try to save us.

Follow The Stake on Twitter and Facebook
