Don’t worry about opaque algorithms; you already don’t know what anything is doing, or why

Machine learning algorithms are opaque, difficult to audit, unconstrained by ethics, and there’s always the possibility they’ll do the unthinkable when facing the unexpected. But that’s true of most of our society’s code base, and, in a way, machine learning algorithms are its most secure part, because we haven’t yet talked ourselves into a false sense of security about them.

There’s a technical side to this argument: contemporary software is so complex, and the pressures under which it’s developed so strong, that it’s materially impossible to make sure it’ll always behave the way you want it to. Your phone isn’t supposed to freeze while you’re making a call, and your webcam shouldn’t send real-time surveillance to some guy in Ukraine, and yet here we are.

But that’s not the biggest problem. Yes, some Toyota vehicles decided on their own to accelerate at inconvenient times because their software systems were mind-bogglingly and unnecessarily complex, but nobody outside the company knew that, because access to the code was so legally restricted that even after the crashes it had to be inspected by an outside expert under conditions usually reserved for high-level intelligence briefings.

And there was the hidden code in VW engines designed to fool emissions tests, the programs Uber uses to track you even while it says it isn’t, and Facebook’s convenient tools to help advertisers target the emotionally vulnerable.

The point is, the main problem right now isn’t what a self-driving car _might_ do when it has to make a complex ethical choice guided by ultimately unknowable algorithms, but what the car is doing at every other moment, reflecting ethical choices guided by corporate executives who might be unknowable in a philosophical, existential sense, but are worryingly familiar in an empirical one. You don’t know most of what your phone is doing at any given time, not to mention your other devices; it can be illegal to try to figure it out, and it can be illegal, if not impossible, to change it even if you did.

And a phone is a thing you hold in your hand and can, at least in theory, put in a drawer somewhere if you want to have a discreet chat with a Russian diplomat. Even more serious are all the hidden bits of software running in the background, like the ones that can automatically flag you as a national security risk, or that are constantly weighing whether you should be allowed to turn on your tractor. Even if the organization that developed or runs the software did its job uncommonly well and knows what it’s doing down to the last bit, you don’t, and most likely never will.

This situation, perhaps first and certainly most forcefully argued against by Richard Stallman, is endemic to our society, and absolutely independent of the otherwise world-changing Open Source movement. Very little of the code in our lives is running in something resembling a personal computer, after all, and even when it does, it mostly works by connecting to remote infrastructures whose key algorithms are jealously guarded business secrets. Emphasis on secret, with a hidden subtext of especially from users.

So let’s not get too focused on the fact that we don’t really understand how a given neural network works. It might suddenly decide to accelerate your car, but “old fashioned” code could too, and as a matter of fact did, and in any case there’s very little practical difference between not knowing what something is doing because it’s a cognitively opaque piece of code, and not knowing what something is doing because the company controlling the thing you bought doesn’t want you to know, and has the law on its side, up to and including jail, if you try to find out.

Going forward, our approach to software as users, and, increasingly, as citizens, cannot but be empirical paranoia. Just assume everything around you is potentially doing everything it’s physically capable of (noting that being remotely connected to huge amounts of computational power makes even simple hardware far more powerful than you’d think), and if any of that is something you don’t find acceptable, take external steps to prevent it, above and beyond toggling a dubiously effective setting somewhere. Recent experience shows that FOIA requests, lawsuits, and the occasional whistleblower might be more important for adding transparency to our technological infrastructure than your choice of operating system or clicking a “do not track” checkbox.

from Ethical Technology http://ift.tt/2qh0Ees

Google’s AlphaGo Defeats Chinese Go Master in Win for AI

By The New York Times

May 24, 2017


Ke Jie, the world’s top Go player, scratches his head during his match on Tuesday against the AlphaGo artificial intelligence software. Credit: China Stringer Network, via Reuters

Google DeepMind’s AlphaGo program on Tuesday beat a Chinese world champion in the first of three Go games held this week, in what is being hailed as a victory for artificial intelligence (AI).


The human player, Ke Jie, notes the program has improved rapidly since its 2016 defeat of a South Korean Go player. “AlphaGo is like a different player this year compared to last year,” Ke says.


DeepMind co-founder Demis Hassabis says AlphaGo uses methods in which it learns experientially from playing a large number of games. For the new contest, Hassabis notes the program adopted a strategy that enables it to learn more by playing games against itself.
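
The abstract only names the idea, but learning from self-play can be illustrated on a much smaller game. Below is a minimal sketch, assuming nothing about AlphaGo’s actual architecture (which combines deep networks with tree search): one shared value table for the game of Nim improves as the program plays both sides of every game.

```python
import random

# Toy self-play learner for Nim (take 1 or 2 stones; whoever takes the
# last stone wins). One shared table gets training signal from both
# sides of every game -- a vastly simplified stand-in for AlphaGo's
# self-play training, not its actual algorithm.
random.seed(0)
N = 10
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in (1, 2) if a <= s}

def pick(s, eps=0.2):
    """Epsilon-greedy move choice from state s."""
    moves = [a for a in (1, 2) if a <= s]
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda a: Q[(s, a)])

for _ in range(20_000):
    s, history = N, []
    while s > 0:
        a = pick(s)
        history.append((s, a))
        s -= a
    reward = 1.0                      # the player who moved last won
    for (s, a) in reversed(history):  # alternate the sign back up the game
        Q[(s, a)] += 0.1 * (reward - Q[(s, a)])
        reward = -reward

# Optimal play is to leave a multiple of 3; from 10 stones, take 1.
print(max((1, 2), key=lambda a: Q[(10, a)]))  # expected: 1
```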


“Last year it was still quite humanlike when it played,” Ke says. “But this year, it became like a god of Go.”


Researchers say similar AI methods could be used to perform many tasks, such as improving basic scientific research.



From The New York Times

View Full Article




Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA


What is the closest representation, practice, activity, or theory pertaining to artificial intelligence before/without the use of computers?

Half-assed examples I can think of:

  • Schizophrenia. You have "AI" in your head, completely artificial personalities (fabricated by your brain) that are going haywire.
  • Alan Turing writing a chess algorithm for a computer and exclusively following it while playing against someone to test it out.
  • Beyblades (you "let it rip" then watch a "fight" play out, without any further interaction).
  • Playing a game like Candyland or War, one based entirely on luck, in which the outcome is the same whether a second player takes the actions or you take them on their behalf.
  • Raising a baby.

submitted by /u/Zephandrypus

from Artificial Intelligence http://ift.tt/2qWErDD

Next-gen computing: Memristor chips that see patterns over pixels

Inspired by how mammals see, a new “memristor” computer circuit prototype at the University of Michigan has the potential to process complex data, such as images and video, orders of magnitude faster and with much less power than today’s most advanced systems.

Faster image processing could have big implications for autonomous systems such as self-driving cars, says Wei Lu, U-M professor of electrical engineering and computer science. Lu is lead author of a paper on the work published in the current issue of Nature Nanotechnology.

Lu’s next-generation computer components use pattern recognition to shortcut the energy-intensive process conventional systems use to dissect images. In this new work, he and his colleagues demonstrate an algorithm that relies on a technique called “sparse coding” to coax their 32-by-32 array of memristors to efficiently analyze and recreate several photos.

Memristors are electrical resistors with memory — advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. In a conventional computer, logic and memory functions are located at different parts of the circuit.

“The tasks we ask of today’s computers have grown in complexity,” Lu said. “In this ‘big data’ era, computers require costly, constant and slow communications between their processor and memory to retrieve large amounts of data. This makes them large, expensive and power-hungry.”

But like neural networks in a biological brain, networks of memristors can perform many operations at the same time, without having to move data around. As a result, they could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning. Memristors are good candidates for deep neural networks, a branch of machine learning, which trains computers to execute processes without being explicitly programmed to do so.
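
The article doesn’t give circuit details, but the standard reason memristor arrays compute “without moving data” is that a crossbar evaluates a whole matrix-vector product in one analog step: each device’s conductance is a stored weight, and by Ohm’s and Kirchhoff’s laws the column currents are the weighted sums. A minimal simulation sketch (the conductance and voltage ranges here are assumptions, not values from the paper):

```python
import numpy as np

# Simulated memristor crossbar: conductances G are the stored weights;
# applying row voltages v yields column currents I[j] = sum_i v[i]*G[i,j]
# all at once -- the computation happens where the data is stored.
rng = np.random.default_rng(0)

rows, cols = 32, 32                        # same size as the paper's array
G = rng.uniform(1e-6, 1e-4, (rows, cols))  # device conductances (siemens, assumed)
v = rng.uniform(0.0, 0.5, rows)            # input voltages (volts, assumed)

I = v @ G                                  # one parallel multiply-accumulate per column
print(I.shape)                             # (32,)
```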

“We need our next-generation electronics to be able to quickly process complex data in a dynamic environment. You can’t just write a program to do that. Sometimes you don’t even have a pre-defined task,” Lu said. “To make our systems smarter, we need to find ways for them to process a lot of data more efficiently. Our approach to accomplish that is inspired by neuroscience.”

A mammal’s brain is able to generate sweeping, split-second impressions of what the eyes take in, partly because it can quickly recognize different arrangements of shapes. Humans do this using only a limited number of neurons that become active, Lu says. Both neuroscientists and computer scientists call the process “sparse coding.”

“When we take a look at a chair we will recognize it because its characteristics correspond to our stored mental picture of a chair,” Lu said. “Although not all chairs are the same and some may differ from a mental prototype that serves as a standard, each chair retains some of the key characteristics necessary for easy recognition. Basically, the object is correctly recognized the moment it is properly classified — when ‘stored’ in the appropriate category in our heads.”

Similarly, Lu’s electronic system is designed to detect the patterns very efficiently — and to use as few features as possible to describe the original input.

In our brains, different neurons recognize different patterns, Lu says.

“When we see an image, the neurons that recognize it will become more active,” he said. “The neurons will also compete with each other to naturally create an efficient representation. We’re implementing this approach in our electronic system.”

The researchers trained their system to learn a “dictionary” of images. After training on a set of grayscale image patterns, their memristor network was able to reconstruct images of famous paintings, photos and other test patterns.
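
The article doesn’t write out the math, but sparse coding is conventionally posed as finding a coefficient vector a with few nonzero entries (the few “active neurons”) that reconstructs an input x from a dictionary D, for example by minimizing 0.5*||x - D a||^2 + lam*||a||_1. Here is a minimal software sketch using ISTA, one standard solver for that objective (not necessarily what the memristor hardware implements):

```python
import numpy as np

def ista(x, D, lam=0.1, steps=300):
    """Minimize 0.5*||x - D @ a||**2 + lam*||a||_1 by
    iterative soft-thresholding (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        a -= D.T @ (D @ a - x) / L  # gradient step on reconstruction error
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # sparsify
    return a

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 128))  # 64-pixel patches, 128 dictionary atoms
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms
x = 2.0 * D[:, 5] - 1.5 * D[:, 40]  # a patch built from just two atoms

a = ista(x, D)
print(np.count_nonzero(np.abs(a) > 1e-3))  # few coefficients are active
print(np.linalg.norm(x - D @ a))           # yet the patch is reconstructed well
```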

If their system can be scaled up, they expect to be able to process and analyze video in real time in a compact system that can be directly integrated with sensors or cameras.

Story Source:

Materials provided by University of Michigan. Note: Content may be edited for style and length.

from Artificial Intelligence News — ScienceDaily http://ift.tt/2rU5QFJ

New AI Mimics Any Voice in a Matter of Minutes


The story starts out like a bad joke: Obama, Clinton and Trump walk into a bar, where they applaud a new startup based in Montreal, Canada, called Lyrebird.

Skeptical? Here’s a recording.

http://ift.tt/2rQzo7P

Sound clip credit: Lyrebird

If the scenario seems too bizarre to be real, you’re right—it’s not. The entire recording was generated by a new AI with the ability to mimic natural conversation, at a rate much faster than any previous speech synthesizer.

Announced last week, Lyrebird’s program analyzes a single minute of voice recording and extracts a person’s “speech DNA” using machine learning. From there, it adds an extra layer of emotion or special intonation, until it nails a person’s voice, tone and accent—be it Obama, Trump or even you.

While Lyrebird still retains a slight but noticeable robotic buzz characteristic of machine-generated speech, add some smartly placed background noise to cover up the distortion, and the recordings could pass as genuine to unsuspecting ears.

Creeped out? You’re not alone. In an era where Photoshopped images run wild and fake news swarms social media, a program that can make anyone say anything seems like a catalyst for more trouble.

Yet people are jumping on. According to Alexandre de Brébisson, a founder of the company and current PhD student at the University of Montreal, their website scored 100,000 visits on launch day, and the team has attracted the attention of “several famous investors.”

What’s the draw?

Synthetic speech

While machine-fabricated speech sounds like something straight out of a Black Mirror episode, speech synthesizers—like all technologies—aren’t inherently malicious.

For people with speech disabilities or paralysis, these programs give them a voice to communicate. For the blind, they provide a way to tap into the vast text-based resources on paper or online. AI-based personal assistants like Siri and Cortana rely on speech synthesizers to create a more natural interface with users, while audiobook companies may one day utilize the technology to automatically and cheaply generate products.

“We want to improve human-computer interfaces and create completely new applications for speech synthesis,” explains de Brébisson to Singularity Hub.

Lyrebird is only the latest push in a long line of research towards natural-sounding speech synthesizers.

The core goal of these programs is to transform text into speech in real time. It’s a two-pronged problem: for one, the AI needs to “understand” the different components of the text; for another, it has to generate appropriate sounds for the input text in a non-cringe-inducing way.

Analyzing text may seem like a strange way to tackle speech, but much of our intonation for words, phrases and sentences is based on what the sentence says. For example, questions usually end with a rising pitch, and words like “read” are pronounced differently depending on their tense.

But of the two, generating the audio output is arguably the harder task. Older synthesizers rely on algorithms to produce individual sounds, resulting in the characteristic robotic voice.

These days, synthesizers generally start with a massive database of audio recordings by actual human beings, splicing together voice segments smoothly into new sentences. While the output sounds less robotic, for every new voice—switching from female to male, for example—the software needs a new dataset of voice snippets to draw upon.

Because the voice databases need to contain every possible word the device uses to communicate with its user (often in different intonations), they’re a huge pain to construct. And if there’s a word not in the database, the device stumbles.
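
A toy sketch of that splice-from-a-database approach is below; the word-level “recordings” are fabricated arrays, and real systems store sub-word units in many intonations:

```python
import numpy as np

SAMPLE_RATE = 16_000

def fake_snippet(seed, seconds=0.3):
    """Stand-in for a recorded human voice snippet."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-1.0, 1.0, int(SAMPLE_RATE * seconds))

# The voice database: every word the device may ever need to say.
voice_db = {"hello": fake_snippet(1), "world": fake_snippet(2)}

def synthesize(text):
    chunks = []
    for word in text.lower().split():
        if word not in voice_db:
            # Where a concatenative system stumbles: no recording, no sound.
            raise KeyError(f"no recording for {word!r}")
        chunks.append(voice_db[word])
    return np.concatenate(chunks)

audio = synthesize("hello world")
print(audio.shape)  # (9600,) -- two 0.3 s snippets spliced together
```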

Voice DNA

Lyrebird’s system takes a different approach.

By listening to voice recordings the AI learns the pronunciation of letters, phonemes and words. Like someone learning a new language, Lyrebird then uses its learned examples to extrapolate new words and sentences—even ones it’s never encountered before—and layers emotions such as anger, sympathy or stress on top.

At its core, Lyrebird is a multi-layer artificial neural network, a type of software that loosely mimics the human brain. Like their biological counterparts, artificial networks “learn” through example, tweaking the connections between each “neuron” until the network generates the correct output. Think of it as tuning a guitar.
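
A minimal sketch of that “learn through example, tweak the connections” loop, using a toy linear network (nothing like Lyrebird’s actual model, just the bare mechanism):

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.standard_normal((4, 2)) * 0.1      # the "connections"
x = rng.standard_normal((100, 4))          # training examples
target = x @ np.array([[1., -2.],          # outputs the network should produce
                       [0., 1.],
                       [3., 0.],
                       [-1., 1.]])

for _ in range(500):
    pred = x @ W
    grad = x.T @ (pred - target) / len(x)  # how each connection affects the error
    W -= 0.1 * grad                        # the "tweak"

print(np.abs(x @ W - target).max())        # near zero: outputs now match
```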

Similar to other deep learning technologies, the initial training requires hours of voice recordings and many iterations. But once trained on one person’s voice, the AI can produce a passable mimic of another voice at thousands of sentences per second—using just a single minute of a new recording.

That’s because different voices share a lot of similar information that is already “stored” within the artificial network, explains de Brébisson. So it doesn’t need many new examples to pick up on the intricacies of another person’s speaking voice—his or her voice “DNA,” so to speak.
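
Lyrebird hasn’t published its method, so the following is only a sketch of one common way shared information plus a short sample can yield a new voice: freeze the shared model learned from many speakers and fit only a tiny per-speaker “voice DNA” vector to the one-minute recording. Every name, shape and number here is a made-up stand-in.

```python
import numpy as np

rng = np.random.default_rng(3)
EMB, FEAT = 8, 32

shared = rng.standard_normal((EMB, FEAT))  # frozen model, "trained" on many voices
true_voice = rng.standard_normal(EMB)      # the new speaker's unknown embedding
# Sixty noisy one-second frames: a stand-in for one minute of recording.
frames = true_voice @ shared + 0.05 * rng.standard_normal((60, FEAT))

v = np.zeros(EMB)                          # only this small vector gets trained
for _ in range(300):
    err = v @ shared - frames              # residual on every frame
    v -= 0.01 * (err @ shared.T).mean(axis=0)

print(np.linalg.norm(v - true_voice))      # small: the embedding was recovered
```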

Although the generated recordings still have an uncanny valley quality, de Brébisson stresses that it’ll likely go away with more training examples.

“Sometimes we can hear a little bit of noise in our samples, it’s because we trained our models on real-world data and the model is learning the background noise or microphone noise,” he says, adding that the company is working hard to remove these artifacts.

Adding little “extra” sounds like lip smacking or taking a breath could also add to the veracity of machine speak.

These “flaws” actually carry meaning and are picked up by the listener, says speech researcher Dr. Timo Baumann at Carnegie Mellon University, who is not involved with Lyrebird.

But both de Brébisson and Baumann agree the remaining hurdles are modest: machines will be able to convincingly copy a human voice in real time in just a few years, they say.

Fake speech

De Brébisson acknowledges that mimicking someone else’s voice can be highly problematic.

Fake news is the least of it. AI-generated voice recordings could be used for impersonation, raising security and privacy concerns. Voice-based security systems would no longer be safe.

While Lyrebird is working on a “voice print” that will easily tell apart originals and generated recordings, it’s unreasonable to expect people to look for such a mark in every recording they come across.

Then there are slightly less obvious concerns. Baumann points out that humans instinctively trust sources with a voice, especially if it’s endowed with emotion. Compared to an obvious synthetic voice, Lyrebird is much easier to connect with, like talking to an understanding friend. While these systems could help calm people down during a long wait on the phone, for example, they’re also great tools for social engineering.

People would be more likely to divulge personal information or buy things the AI recommends, says Baumann.

In a brief statement on their website, Lyrebird acknowledges these ethical concerns, but also stresses that ignoring the technology isn’t the way to go—rather, education and awareness are key, much like when Photoshop first came into the social consciousness.

“We hope that everyone will soon be aware that such technology exists and that copying the voice of someone else is possible,” they write, adding that “by releasing our technology publicly and making it available to anyone, we want to ensure that there will be no such risks.”

Lyrebird’s optimism is no reason to completely discount the risks. Without doubt, fake audio clips are coming, and left unchecked, they could wreak havoc. But although people are still adapting to fake images, fake news and other fabricated information that warps our reality, the discussion about alternative facts has entered the societal mainstream, and forces have begun pushing back.

Like the delicate voice-mimicking songbird it’s named after, Lyrebird is a wonder—one that we’ll have to handle with thought and care.

Image Credit: Shutterstock

from Singularity Hub http://ift.tt/2rhpu28

When Artificial Intelligence Botches Your Medical Diagnosis, Who’s to Blame?

By Quartz

May 24, 2017


Artificial intelligence is not just creeping into our personal lives and workplaces — it’s also beginning to appear in the doctor’s office. The prospect of being diagnosed by an AI might feel foreign and impersonal at first, but what if you were told that a robot physician was more likely to give you a correct diagnosis?


AI raises profound questions regarding medical responsibility. Usually when something goes wrong, it is a fairly straightforward matter to determine blame. A misdiagnosis, for instance, would likely be the responsibility of the presiding physician. A faulty machine or medical device that harms a patient would likely see the manufacturer or operator held to account. What would this mean for an AI?


From Quartz

View Full Article


Georgia Professor Uses Artificial Intelligence To Teach Class

May 24, 2017


Artist's impression of an artificial intelligence-based teacher. Credit: thefuturesagency.com

Imagine your teacher was a computer. A computer science professor at Georgia Tech is using artificial intelligence — or AI — to help him answer some of his students’ frequently asked questions.


Tasnim Shamma from member station WABE in Atlanta has the story.




From WBUR

View Full Article