By New Scientist
June 8, 2017
Researchers at the University of California, Berkeley are developing an artificial intelligence system that seeks and accepts human oversight, in an attempt to combat fake news.
Credit: Jason Lee/Reuters
Rather than promoting every article it thinks users want to see, an AI algorithm that is more uncertain of its own abilities would be more likely to defer to a human's better judgement.
To explore the idea of a computer's "self-confidence," the researchers designed a mathematical model of a human-robot interaction called the "off-switch game." In this theoretical game, a robot with an off switch is given a task to perform; a human is then free to press the robot's off switch at any time, but the robot can also choose to disable its switch so the human cannot turn it off.
The researchers are studying what degree of “confidence” to give a robot so it will allow the human to flip the off switch when necessary.
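The intuition behind the off-switch game can be sketched as a toy expected-utility calculation. This is an illustrative simplification, not the researchers' actual model: the Gaussian prior over the action's utility and the simple payoff structure are assumptions made here for the example. The key point is that when the robot is uncertain about whether its action helps the human, letting the human keep the off switch filters out the bad outcomes.

```python
import random

def expected_value_of_deferring(utility_samples):
    # If the robot defers, a rational human lets it act only when the
    # action's utility u is positive; otherwise the human presses the
    # off switch, yielding payoff 0. Expected payoff is E[max(u, 0)].
    return sum(max(u, 0.0) for u in utility_samples) / len(utility_samples)

def expected_value_of_acting(utility_samples):
    # Acting unilaterally (or disabling the switch) yields u regardless
    # of its sign. Expected payoff is simply E[u].
    return sum(utility_samples) / len(utility_samples)

random.seed(0)
# The robot is uncertain about the action's true utility to the human;
# a zero-mean Gaussian prior is assumed purely for illustration.
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

v_defer = expected_value_of_deferring(samples)
v_act = expected_value_of_acting(samples)
print(f"defer: {v_defer:.3f}  act: {v_act:.3f}")
```

Under this toy prior, deferring always does at least as well as acting, because the human vetoes exactly the negative-utility cases. A robot that was certain its action helped (a prior concentrated above zero) would gain nothing from deferring, which mirrors the "confidence" trade-off the researchers are studying.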
From New Scientist
View Full Article
Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA
from Communications of the ACM: Artificial Intelligence http://ift.tt/2sHkTDF