When we don’t know much about a new technology, we talk in generalisations. Those generalisations are often also extreme: the utopian drives of those who are developing it on one hand, and the dystopian visions that help society look before it leaps on the other.
These tensions hold for machine learning, the set of techniques that enables much of what we currently think of as Artificial Intelligence. But, as the Royal Society's recently published report Machine learning: the power and promise of computers that learn by example showed, we are already at the point where we can do better than generalisations. Give members of the public the opportunity to interrogate the "experts" and explore the future, and they come up with nuanced expectations in which context is everything.
The Society's report was informed by structured public dialogue, carried out over six days in four locations around the UK, with participants from mixed socio-economic backgrounds. Quantitative research showed that only 9% of people have heard of machine learning, even though most of us use it regularly through applications such as text prediction or customer recommendation services. The public dialogue gave people the opportunity to discuss the science with leading academics. The conversations were seeded with realistic near-term scenarios from contexts such as GPs' surgeries and schools.
The results showed common themes, but they also revealed how, when it came to balancing potential risks and benefits, people gave very different weightings depending on what was at stake.
Participants talked about potential advantages such as objectivity and accuracy: better an expert, well-tested diagnostic system than a human doctor unable to keep up with the latest literature and over-tired on the day. They raised the benefits of genuine efficiency in public services: systems that might relieve pressure on front-line workers such as police officers or teachers. Even in time-limited discussions, participants often came up with ideas as to how machine learning could enhance rather than simply replace existing tasks or jobs. And they saw the potential for machine learning to address large-scale societal challenges such as climate change.
At the same time, they were concerned about the depersonalisation of key services. The tired human doctor would still be essential to any conversation with the patient about the meaning of an important diagnosis (and some were sceptical about the likelihood of accurate diagnostic systems for mental illnesses, at least in the near term). They discussed whether the use of machine learning systems to augment experiences they currently enjoyed – from driving to writing poetry – might make these experiences less personal or ‘human’.
They wanted to know the real limits of systems’ abilities to account for the full range of human characteristics. They wanted to avoid being stereotyped and having their choices of goods, services or news narrowed. They were also concerned about harm to individuals, with some suggestion that the physical embodiment of machine learning systems, as opposed to their use in classification or diagnosis, heightened the nature of those concerns.
Context determined how people weighed the relative significance of the benefits and harms, and the practical implications. In a qualitative assessment exercise, for example, they were highly positive about healthcare applications, considering these to be in the "high social value, low social risk" category, while rating shopper recommendation systems (other than for financial services such as insurance and loans) and art as lower social value and higher social risk. When discussing the implications of potential physical harm from embodied systems such as autonomous vehicles or social care robots in the home, they considered a range of potential levels of assurance. To be confident, they wanted evidence that the machine learning system was more accurate or safer than a human performing an equivalent function. In high-stakes cases they saw an ongoing need for a "human in the loop", either taking key decisions with the machine's help or in an oversight role.
The potential implications of machine learning for jobs came up spontaneously and frequently. Participants were quick to draw comparisons with previous industrial transitions such as the factory assembly line. Our dialogues only scratched the surface of this debate, which goes beyond technology and policy to affect business models, organisational structures, and notions of ownership and rights.
Other parts of the project highlighted different aspects of these transitions. A workshop with leaders in the professions began to explore current practices of continuous professional development and the changing nature of the ‘secular and sacred’ (technical and ethical) elements of professional practice. Workshops with different industrial sectors showed how difficult it is for smaller businesses to identify when and how they might be able to create value from machine learning, and how difficult it is to find the best advice in an area where demand for experts far outstrips supply. A workshop with leaders in the legal profession began to explore the extent to which machine learning might significantly disrupt business models. Chatbots are already carrying out basic legal advisory tasks such as dealing with parking fines; machine learning systems are able to replace junior staff in searching legal texts; and machine learning is now used by major firms to inform strategy by, for example, predicting counterparty reactions in major corporate cases.
Claire Craig is Director of Policy at the Royal Society. Jessica Montgomery is a Senior Policy Adviser at the Royal Society.
from Artificial intelligence (AI) | The Guardian http://ift.tt/2qJ2Vmp