Computer says no: why making AIs fair, accountable and transparent is crucial

In October, American teachers prevailed in a lawsuit against their school district over a computer program that assessed their performance.

The system rated teachers in Houston by comparing their students’ test scores against state averages. Those with high ratings won praise and even bonuses. Those who fared poorly faced the sack.
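
The EVAAS algorithm itself is secret, but the broad “value added” idea can be sketched in a few lines of Python. The function and figures below are entirely hypothetical; they only illustrate the general approach of comparing students’ score gains with a state-wide average, not the proprietary model.

```python
# Toy illustration only: the real EVAAS model is proprietary, so this sketch
# just shows the general "value added" idea of comparing student score gains
# with a state-wide average. All numbers here are hypothetical.
from statistics import mean

def value_added_score(student_gains, state_average_gain):
    """Average of each student's score gain minus the state average gain.

    Positive values suggest the students improved more than the state norm;
    negative values suggest they improved less.
    """
    return mean(g - state_average_gain for g in student_gains)

# Hypothetical example: one teacher's students gained 4, 7 and 2 points,
# against a state-wide average gain of 5 points.
print(value_added_score([4, 7, 2], 5))   # -0.67: below the state average
```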

The program did not please everyone. Some teachers felt that the system marked them down without good reason. But they had no way of checking if the program was fair or faulty: the company that built the software, the SAS Institute, regards its algorithm as a trade secret and would not disclose its workings.

The teachers took their case to court and a federal judge ruled that use of the EVAAS (Educational Value Added Assessment System) program may violate their civil rights. In settling the case, the school district paid the teachers’ fees and agreed to stop using the software.

The law has treated others differently. When Wisconsin police arrested Eric Loomis in 2013 for driving a car used in a shooting, he was handed a hefty prison term in part because a computer algorithm known as Compas judged him at high risk of re-offending. Loomis challenged the sentence because he was unable to check the program. His argument was rejected by the Wisconsin supreme court.

Q&A

How do machines learn?


A central goal of the field of artificial intelligence is for machines to be able to learn how to perform tasks and make decisions independently, rather than being explicitly programmed with inflexible rules. There are different ways of achieving this in practice, but some of the most striking recent advances, such as AlphaGo, have used a strategy called reinforcement learning. Typically the machine will have a goal, such as translating a sentence from English to French, and a massive dataset to train on. It starts off just making a stab at the task – in the translation example it would start by producing garbled nonsense and comparing its attempts against existing translations. The program is then “rewarded” with a score when it is successful. After each iteration of the task it improves, and after a vast number of reruns such programs can match and even exceed the level of human translators. Getting machines to learn less well-defined tasks, or ones for which no digital datasets exist, is a future goal that would require a more general form of intelligence, akin to common sense.
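
As a rough illustration of that trial-and-reward loop, the Python sketch below “learns” a reference string by keeping random changes that raise its score. It is a toy hill-climbing exercise, not a real translation model or reinforcement-learning system, and every value in it is made up.

```python
# A minimal sketch of the trial-and-reward loop described above, not any real
# translation system. The "model" just guesses characters and keeps mutations
# that raise its reward (here: matches against an existing reference).
import random

random.seed(0)
TARGET = "le chat est noir"          # hypothetical reference translation
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def reward(attempt):
    """Score an attempt: how many characters match the reference."""
    return sum(a == t for a, t in zip(attempt, TARGET))

# Start with garbled nonsense of the right length.
attempt = [random.choice(ALPHABET) for _ in TARGET]

for step in range(20000):
    i = random.randrange(len(attempt))
    candidate = attempt.copy()
    candidate[i] = random.choice(ALPHABET)
    # Keep the mutation only if it does not lower the reward.
    if reward(candidate) >= reward(attempt):
        attempt = candidate
    if reward(attempt) == len(TARGET):
        break

print("".join(attempt), "after", step + 1, "steps")
```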


The arrival of artificial intelligence has raised concerns over computerised decisions to a new high. Powerful AIs are proliferating in society, through banks, legal firms and businesses, into the National Health Service and government. It is not their popularity that is problematic; it is whether they are fair and can be held to account.

Researchers have documented a long list of AIs that make bad decisions either because of coding mistakes or biases ingrained in the data they trained on.

Bad AIs have flagged the innocent as terrorists, sent sick patients home from hospital, lost people their jobs and car licences, had people kicked off the electoral register, and chased the wrong men for child support bills. They have discriminated on the basis of names, addresses, gender and skin colour.

Bad intentions are not needed to make bad AI. A company might use an AI to search CVs for good job applicants after training it on information about people who rose to the top of the firm. If the culture at the business is healthy, the AI might well spot promising candidates, but if not, it might suggest people for interview who think nothing of trampling on their colleagues for a promotion.
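
A toy example makes the point. The “traits” and promotion records in the Python sketch below are invented and the scoring rule is deliberately crude, but they show how a screener trained on a firm’s history will reward whatever that history rewarded.

```python
# Toy sketch of how biased training data produces a biased screener, not any
# real recruiting system. Each past employee has two made-up traits; labels
# record who was promoted in a firm that historically rewarded ruthlessness.

# (collaborative, ruthless) -> promoted?  Entirely hypothetical data.
history = [
    ((0.9, 0.1), 0), ((0.8, 0.2), 0), ((0.7, 0.3), 0),
    ((0.2, 0.9), 1), ((0.3, 0.8), 1), ((0.1, 0.7), 1),
]

def feature_weight(index):
    """Crude learned weight: the average trait value among the promoted minus
    the average among the rest. Whatever the firm rewarded scores highest."""
    promoted = [x[index] for x, y in history if y == 1]
    others = [x[index] for x, y in history if y == 0]
    return sum(promoted) / len(promoted) - sum(others) / len(others)

print("collaborative weight:", round(feature_weight(0), 2))  # negative
print("ruthless weight:     ", round(feature_weight(1), 2))  # positive
```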

Opening the black box

How to make AIs fair, accountable and transparent is now one of the most crucial areas of AI research. Most AIs are made by private companies who do not let outsiders see how they work. Moreover, many AIs employ such complex neural networks that even their designers cannot explain how they arrive at answers. The decisions are delivered from a “black box” and must essentially be taken on trust. That may not matter if the AI is recommending the next series of Game of Thrones. But the stakes are higher if the AI is driving a car, diagnosing illness, or holding sway over a person’s job or prison sentence.

Last month, the AI Now Institute at New York University, which researches the social impact of AI, urged public agencies responsible for criminal justice, healthcare, welfare and education to ban black box AIs because their decisions cannot be explained.

“We can’t accept systems in high stakes domains that aren’t accountable to the public,” Kate Crawford, a co-founder of the institute, told the Guardian. The report said AIs should pass pre-release trials and be monitored “in the wild” so that biases and other faults are swiftly corrected.

Tech firms know that coming regulations and public pressure may demand AIs that can explain their decisions, but developers want to understand them too. Klaus-Robert Müller, professor of machine learning at the Technical University of Berlin, has trained an AI to diagnose breast cancer using a variety of medical data. It is not good enough for the AI to simply spit out a diagnosis, he says. “It’s absolutely mandatory for the individual patient to know what the heck is going on.”

To understand how their AI reached decisions, Müller and his team developed an inspection program known as Layerwise Relevance Propagation, or LRP. It can take an AI’s decision and work backwards through the program’s neural network to reveal how a decision was made.

In a simple test, Müller’s team used LRP to work out how two top-performing AIs recognised horses in a vast library of images used by computer vision scientists. While one AI focused rightly on the animal’s features, the other based its decision wholly on a bunch of pixels at the bottom left corner of each horse image. The pixels turned out to contain a copyright tag for the horse pictures. The AI worked perfectly for entirely spurious reasons. “This is why opening the black box is important,” says Müller. “We have to make sure we get the right answers for the right reasons.”
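
The sketch below gives a flavour of how such relevance propagation works, using the so-called epsilon rule on a tiny hand-built network. It assumes numpy, invents its own weights and inputs, and is not Müller’s code; it simply shows an output score being redistributed backwards to the inputs that contributed to it, which is how a spurious patch of pixels would show up.

```python
# A minimal sketch of an LRP-style epsilon rule on a tiny hand-built ReLU
# network; hypothetical weights and inputs, not Müller's actual software.
import numpy as np

def lrp_epsilon(activations, weights, relevance, eps=1e-6):
    """Propagate relevance from a layer's outputs back to its inputs.

    activations: input vector a to the layer
    weights:     weight matrix W (inputs x outputs), layer output z = a @ W
    relevance:   relevance assigned to the layer's outputs
    """
    z = activations @ weights                       # forward contributions
    z = z + eps * np.sign(z)                        # stabiliser
    s = relevance / z                               # share per output unit
    return activations * (weights @ s)              # redistribute to inputs

# Tiny hypothetical network: 4 "pixels" -> 3 hidden ReLU units -> 1 score.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(3, 1))

x = np.array([0.2, 0.9, 0.1, 0.8])                  # made-up input "image"
h = np.maximum(0, x @ W1)                           # hidden activations
score = h @ W2                                      # network output

# Work backwards: output relevance -> hidden layer -> input pixels.
r_hidden = lrp_epsilon(h, W2, score.ravel())
r_input = lrp_epsilon(x, W1, r_hidden)
print("relevance per input pixel:", np.round(r_input, 3))
```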

In many cases, the AI black box need not be opened. Sandra Wachter, a lawyer and researcher in data ethics and algorithms at the Oxford Internet Institute and the Alan Turing Institute, worked with her colleagues Brent Mittelstadt and Chris Russell to develop another approach. Instead of exposing the full inner workings of an AI, it figures out what it would take to change the AI’s decision.


Some experts are calling for a European AI watchdog to police the technology, and certify its safety in critical arenas such as driverless cars (pictured), law and health. Photograph: Elijah Nouvelage/Reuters

Suppose an AI turns down a mortgage applicant. Wachter’s method might reveal that the loan was denied because the person’s income was £30,000, but would have been approved if it was £45,000. It allows the decision to be challenged and informs the person what needs to change to get the loan.
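
In code, the idea can be sketched as a search over a black-box model. The lending rule, the £500 search step and the threshold below are all invented for illustration, and this is not Wachter and colleagues’ actual method; the point is that the model never has to be opened, only queried.

```python
# A toy counterfactual search in the spirit described above. The "model" is a
# hypothetical stand-in we can only query, not a real lender's system.

def black_box_approves(income, existing_debt=5_000):
    """Stand-in for an opaque lending model we can only query."""
    return income - 0.5 * existing_debt >= 42_500

def income_counterfactual(income, step=500, limit=200_000):
    """Smallest income (searched in £500 steps) at which the loan flips."""
    candidate = income
    while candidate <= limit:
        if black_box_approves(candidate):
            return candidate
        candidate += step
    return None

applicant_income = 30_000
print(black_box_approves(applicant_income))        # False: loan refused
print(income_counterfactual(applicant_income))     # 45000: would be approved
```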

For some researchers, the time to start regulating AI has arrived. “We have seen too many slip-ups, and AI is too powerful not to have government be part of the solution,” said Craig Fagan, policy director at Tim Berners-Lee’s Web Foundation. “It’s asking companies to take on a lot of responsibility to manage such rapid economic, political and social transformation and not have some government oversight.”

Joanna Bryson, an AI researcher at the University of Bath, thinks AI companies might be regulated like architects who learn to work with city planners, certification schemes and licences to make buildings safe. “People die and governments change because of stuff that happens with software. It’s got to be more regulated,” she said.

Europe is ahead of other parts of the world in drawing up regulations to protect people from badly made or used AI. Next May, the General Data Protection Regulation (GDPR) comes into force in Britain and across the continent. At first glance, it gives people the right to know when companies are making automated decisions of any importance about them. It also mentions a right to explanation and the right to challenge automated decisions.

In practice, the GDPR is far weaker than these rights suggest. The right to be informed applies before decisions are made, not after the fact. And decisions can only be challenged when they are completely automated from start to finish, and the outcome of the decision has legal or other similarly significant effects. The obligation vanishes if there is the slightest form of human involvement.

For example, a bank could have an employee rubber-stamp loans approved by an AI, while a law firm might have a PA invite people for interview after their CVs were ranked by an AI. “As soon as you put a human in the loop, the safeguards no longer apply,” said Wachter. The right to explanation appears even weaker. In early drafts, the European Parliament proposed to make the right legally binding, but it was demoted during later negotiations to no more than a guideline – and police and other crime-fighting organisations are exempt even from this.

Along with Luciano Floridi and Brent Mittelstadt at the Oxford Internet Institute, Wachter has called for a European AI watchdog to police the technology. The body would need powers to send independent investigators into organisations to scrutinise their AIs and extract meaningful explanations. To keep people safe, AIs could be certified for use in critical arenas such as medicine, criminal justice and driverless cars. “If we’re deploying them in critical infrastructure, we need to be sure they meet safety standards,” Wachter said.

“We need transparency as far as it is achievable, but above all we need to have a mechanism to redress whatever goes wrong, some kind of ombudsman,” said Floridi. “It’s only the government that can do that.”

