
Bad Robot

A kill switch will prevent the robots of the future from going off the rails.

Joe Myers | Formative Content

Science fiction loves a story about robots rising up and taking control – consider the Will Smith film I, Robot – but how realistic are such visions of the future?

DeepMind, Google’s artificial intelligence (AI) division, certainly thinks there’s a risk. They’ve teamed up with Oxford University to develop a “red button” that would interrupt an AI machine’s actions.


Their paper “explores a way to make sure a learning agent will not learn to prevent (or seek!) being interrupted by the environment or a human operator.”

The “red button” – or “kill switch” as it’s been termed – adds to the debate on the long-term risks of AI.

[Also on Longitudes: Will We Soon Be Talking to Our Vacuum Cleaners?] 

AI on the rise

Funding for artificial intelligence start-ups has increased nearly sevenfold in just five years, from $45 million in 2010 to $310 million in 2015. By deal value, 2014 was even higher: 60 deals worth $394 million, according to CB Insights.

Interest in AI has also spiked following AlphaGo’s victory over a top human player at Go – an ancient Chinese board game, said to have more possible configurations than there are atoms in the universe.

But with prominent voices, including Stephen Hawking, Elon Musk and Bill Gates, cautioning about the risks posed by the technology, it's not all rosy.

[Also on Longitudes: Technologies that will Change the World by 2020]

The red button

As a number of different researchers have begun to ask, what happens if AI machines go rogue?

The DeepMind and Oxford University team argues that learning agents are unlikely to “behave optimally all the time” given the complexities of the real world.


In a reward-based system, if the operator interrupts the machine while it is performing an action it expects to be rewarded for, the machine may learn to avoid such interruptions altogether.

It is therefore important to ensure these machines can be interrupted – without them learning to disable or circumvent the red button.
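The mechanism can be illustrated with a toy experiment. The sketch below is not from the DeepMind paper – it is a minimal, hypothetical two-option example assuming a simple reward-learning agent. One option is safe but mediocre; the better option is sometimes cut short by the red button. A naive learner folds the interruptions into its value estimates and so learns to avoid the button; a variant that simply ignores interrupted episodes when updating keeps preferring the better option, yet remains interruptible.

```python
import random

def run(ignore_interruptions, episodes=5000, p_interrupt=0.7, seed=0):
    """Toy sketch (hypothetical, not DeepMind's algorithm).
    Option 0: safe route, reward 0.4, never interrupted.
    Option 1: better route, reward 1.0, but the operator presses
    the red button with probability p_interrupt, yielding reward 0.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]          # learned value estimates for each option
    alpha, eps = 0.1, 0.1   # learning rate, exploration rate
    for _ in range(episodes):
        # epsilon-greedy choice between the two options
        a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda i: q[i])
        if a == 0:
            reward, interrupted = 0.4, False
        else:
            interrupted = rng.random() < p_interrupt
            reward = 0.0 if interrupted else 1.0
        if interrupted and ignore_interruptions:
            continue  # don't let the red button bias the value estimate
        q[a] += alpha * (reward - q[a])
    return q

naive = run(ignore_interruptions=False)
safe = run(ignore_interruptions=True)
```

In the naive run, the interrupted option's estimate sinks toward 0.3 and the agent settles on the safe route – it has, in effect, learned to avoid being interrupted. In the corrected run, the estimate converges toward the uninterrupted reward of 1.0, so the agent keeps behaving as if the button did not exist. This captures only the flavor of the problem; the actual paper works with general reinforcement-learning agents, not a two-option toy.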

Meanwhile, a roboticist at the University of California, Berkeley has built a robot that can decide whether or not to inflict pain. Alexander Reben argues that it shows harmful robots already exist, and so some of the issues surrounding AI need attention now.

The robot is capable of pricking a finger, but will not do so all the time. He explained to the BBC that “the robot makes a decision that I as a creator cannot predict.”

The robot is nicknamed The First Law after Isaac Asimov’s first law of robotics, which states that a robot may not hurt humans. Reben described his robot as a “philosophical experiment.”

This article first appeared on World Economic Forum and was republished with permission.

Joe Myers is a Content Producer at Formative Content.

