Robots Will Be More Useful If They Are Made to Lack Confidence


By New Scientist

June 8, 2017


Not so cocky now, are you?

Credit: Jason Lee/Reuters

In an attempt to combat fake news, University of California, Berkeley researchers are developing an artificial intelligence (AI) system that seeks and accepts human oversight.

Rather than promoting every article it thinks users want to see, an AI algorithm that is uncertain of its own abilities would be more likely to defer to a human’s better judgement.

To explore the idea of a computer’s “self-confidence,” the researchers designed a mathematical model of human-robot interaction called the “off-switch game.” In this theoretical game, a robot with an off switch is given a task to do; a human is free to press the robot’s off switch at any time, but the robot can also choose to disable its switch so the human cannot turn it off.

The researchers are studying what degree of “confidence” to give a robot so it will allow the human to flip the off switch when necessary.
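The incentive at the heart of the off-switch game can be sketched numerically. The following is an illustrative simplification, not the Berkeley group’s actual model: assume the robot’s belief about the true utility u of its task is Gaussian with mean mu and standard deviation sigma (sigma standing in for the robot’s lack of “confidence”), and assume the human knows the true u and will press the off switch whenever u is negative. Under those assumptions, we can compare the robot’s expected payoff from disabling the switch versus leaving it enabled.

```python
from math import erf, exp, pi, sqrt

def norm_cdf(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def norm_pdf(z):
    """Standard normal PDF."""
    return exp(-z * z / 2.0) / sqrt(2.0 * pi)

def expected_value_acting(mu, sigma):
    """Robot disables the switch and acts unilaterally: it receives
    whatever the true utility turns out to be, so under its Gaussian
    belief the expectation is simply mu."""
    return mu

def expected_value_deferring(mu, sigma):
    """Robot leaves the switch enabled: the human (assumed to know the
    true utility u) lets the task proceed when u > 0 and shuts the robot
    off otherwise, so the robot collects E[u * 1(u > 0)] under its belief.
    For u ~ N(mu, sigma) this has the closed form
    mu * Phi(mu/sigma) + sigma * phi(mu/sigma)."""
    if sigma == 0.0:
        # A perfectly "confident" robot: no uncertainty left to resolve.
        return max(mu, 0.0)
    z = mu / sigma
    return mu * norm_cdf(z) + sigma * norm_pdf(z)

if __name__ == "__main__":
    for mu, sigma in [(-1.0, 1.0), (1.0, 1.0), (1.0, 0.0)]:
        act = expected_value_acting(mu, sigma)
        defer = expected_value_deferring(mu, sigma)
        print(f"mu={mu:+.1f} sigma={sigma:.1f}: act={act:+.3f} defer={defer:+.3f}")
```

In this toy setup, as long as the robot is genuinely uncertain (sigma > 0) the expected payoff of deferring is at least that of acting unilaterally, since the human filters out the bad outcomes; as sigma shrinks to zero, that advantage vanishes, mirroring the article’s point that an overconfident system has no incentive to let itself be switched off.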

From New Scientist

Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA


 


from Communications of the ACM: Artificial Intelligence http://ift.tt/2sHkTDF

