Next-gen computing: Memristor chips that see patterns over pixels

Inspired by how mammals see, a new “memristor” computer circuit prototype at the University of Michigan has the potential to process complex data, such as images and video, orders of magnitude faster and with much less power than today’s most advanced systems.

Faster image processing could have big implications for autonomous systems such as self-driving cars, says Wei Lu, U-M professor of electrical engineering and computer science. Lu is lead author of a paper on the work published in the current issue of Nature Nanotechnology.

Lu’s next-generation computer components use pattern recognition to shortcut the energy-intensive process conventional systems use to dissect images. In this new work, he and his colleagues demonstrate an algorithm that relies on a technique called “sparse coding” to coax their 32-by-32 array of memristors to efficiently analyze and recreate several photos.

Memristors are electrical resistors with memory — advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. In a conventional computer, logic and memory functions are located at different parts of the circuit.
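A minimal sketch may make that concrete. In the toy simulation below (my own illustration, not code from the paper), each device’s conductance is a stored matrix entry, and reading the array with a set of row voltages performs an entire matrix-vector multiplication in a single analog step, with no data shuttled between memory and processor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Conductances of a 32x32 memristor array: each G[i, j] is a stored value.
G = rng.uniform(1e-6, 1e-4, size=(32, 32))   # siemens
V = rng.uniform(0.0, 0.2, size=32)           # read voltages applied to the rows

# Ohm's law gives each device's current V[i] * G[i, j]; Kirchhoff's law sums
# those currents down each column, so the column readout is a full
# matrix-vector product computed in place:
I = G.T @ V

# The same result computed the conventional way, entry by entry:
I_check = np.array([sum(V[i] * G[i, j] for i in range(32)) for j in range(32)])
assert np.allclose(I, I_check)
```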

“The tasks we ask of today’s computers have grown in complexity,” Lu said. “In this ‘big data’ era, computers require costly, constant and slow communications between their processor and memory to retrieve large amounts of data. This makes them large, expensive and power-hungry.”

But like neural networks in a biological brain, networks of memristors can perform many operations at the same time, without having to move data around. As a result, they could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning. Memristors are good candidates for deep neural networks, the branch of machine learning that trains computers to execute processes without being explicitly programmed to do so.

“We need our next-generation electronics to be able to quickly process complex data in a dynamic environment. You can’t just write a program to do that. Sometimes you don’t even have a pre-defined task,” Lu said. “To make our systems smarter, we need to find ways for them to process a lot of data more efficiently. Our approach to accomplish that is inspired by neuroscience.”

A mammal’s brain is able to generate sweeping, split-second impressions of what the eyes take in. One reason is that it can quickly recognize different arrangements of shapes. Humans do this using only a limited number of neurons that become active, Lu says. Both neuroscientists and computer scientists call the process “sparse coding.”

“When we take a look at a chair we will recognize it because its characteristics correspond to our stored mental picture of a chair,” Lu said. “Although not all chairs are the same and some may differ from a mental prototype that serves as a standard, each chair retains some of the key characteristics necessary for easy recognition. Basically, the object is correctly recognized the moment it is properly classified — when ‘stored’ in the appropriate category in our heads.”

Similarly, Lu’s electronic system is designed to detect the patterns very efficiently — and to use as few features as possible to describe the original input.

In our brains, different neurons recognize different patterns, Lu says.

“When we see an image, the neurons that recognize it will become more active,” he said. “The neurons will also compete with each other to naturally create an efficient representation. We’re implementing this approach in our electronic system.”

The researchers trained their system to learn a “dictionary” of images. After training on a set of grayscale image patterns, their memristor network was able to reconstruct images of famous paintings, photos and other test patterns.
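The paper’s exact circuit dynamics aren’t reproduced here, but a standard software formulation of sparse coding (iterative shrinkage-thresholding, assumed purely for illustration) captures the competition Lu describes: neurons whose dictionary features explain the input grow more active, while the shared reconstruction error silences the rest.

```python
import numpy as np

rng = np.random.default_rng(1)

# A learned "dictionary" D: each unit-norm column is one stored feature,
# and each entry of the code a is one neuron's activity. The goal is
# x ~ D @ a using as few active neurons as possible.
n_pixels, n_features = 64, 128                    # e.g. 8x8 patches, 2x overcomplete
D = rng.standard_normal((n_pixels, n_features))
D /= np.linalg.norm(D, axis=0)

# A test patch genuinely built from 5 dictionary features:
idx = rng.choice(n_features, size=5, replace=False)
x = D[:, idx] @ rng.uniform(0.5, 1.5, size=5)

a = np.zeros(n_features)                          # all neurons start silent
lam, eta = 0.1, 0.1                               # sparsity penalty, step size
for _ in range(200):
    residual = x - D @ a                          # what active neurons fail to explain
    a = a + eta * (D.T @ residual)                # neurons matching the residual grow
    a = np.sign(a) * np.maximum(np.abs(a) - eta * lam, 0.0)  # weak ones go silent

print(f"active neurons: {np.count_nonzero(a)} / {n_features}")
print(f"reconstruction error: {np.linalg.norm(x - D @ a):.3f}")
```

On the memristor array, the dictionary would live in the device conductances, so the two matrix products inside the loop come essentially for free.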

If their system can be scaled up, they expect to be able to process and analyze video in real time in a compact system that can be directly integrated with sensors or cameras.

Story Source:

Materials provided by University of Michigan. Note: Content may be edited for style and length.

from Artificial Intelligence News — ScienceDaily http://ift.tt/2rU5QFJ

New AI Mimics Any Voice in a Matter of Minutes


The story starts out like a bad joke: Obama, Clinton and Trump walk into a bar, where they applaud a new startup based in Montreal, Canada, called Lyrebird.

Skeptical? Here’s a recording.

http://ift.tt/2rQzo7P

Sound clip credit: Lyrebird

If the scenario seems too bizarre to be real, you’re right—it’s not. The entire recording was generated by a new AI with the ability to mimic natural conversation at a rate much faster than any previous speech synthesizer.

Announced last week, Lyrebird’s program analyzes a single minute of voice recording and extracts a person’s “speech DNA” using machine learning. From there, it adds an extra layer of emotion or special intonation, until it nails a person’s voice, tone and accent—be it Obama, Trump or even you.

Lyrebird’s output still retains a slight but noticeable robotic buzz characteristic of machine-generated speech, but add some smartly placed background noise to cover up the distortion, and the recordings could pass as genuine to unsuspecting ears.

Creeped out? You’re not alone. In an era where Photoshopped images run wild and fake news swarms social media, a program that can make anyone say anything seems like a catalyst for more trouble.

Yet people are jumping on. According to Alexandre de Brébisson, a founder of the company and current PhD student at the University of Montreal, their website scored 100,000 visits on launch day, and the team has attracted the attention of “several famous investors.”

What’s the draw?

Synthetic speech

While machine-fabricated speech sounds like something straight out of a Black Mirror episode, speech synthesizers—like all technologies—aren’t inherently malicious.

For people with speech disabilities or paralysis, these programs give them a voice to communicate. For the blind, they provide a way to tap into the vast text-based resources on paper or online. AI-based personal assistants like Siri and Cortana rely on speech synthesizers to create a more natural interface with users, while audiobook companies may one day utilize the technology to automatically and cheaply generate products.

“We want to improve human-computer interfaces and create completely new applications for speech synthesis,” explains de Brébisson to Singularity Hub.

Lyrebird is only the latest push in a long line of research towards natural-sounding speech synthesizers.

The core goal of these programs is to transform text into speech in real time. It’s a two-pronged problem: for one, the AI needs to “understand” the different components of the text; for another, it has to generate appropriate sounds for the input text in a non-cringe-inducing way.

Analyzing text may seem like a strange way to tackle speech, but much of our intonation for words, phrases and sentences is based on what the sentence says. For example, questions usually end with a rising pitch, and words like “read” are pronounced differently depending on their tense.
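As a toy sketch of that text-analysis half (my own crude heuristics, far simpler than any production front end), a synthesizer might inspect a sentence like this:

```python
def prosody(sentence: str) -> dict:
    """Crude text-analysis hints for a synthesizer (toy heuristics only)."""
    hints = {"pitch_contour": "falling"}            # default declarative contour
    if sentence.rstrip().endswith("?"):
        hints["pitch_contour"] = "rising"           # questions usually rise
    words = sentence.lower().split()
    if "read" in words:
        # Naive tense cue: a preceding have/has/had marks the past participle.
        before = words[: words.index("read")]
        past = any(w in {"have", "has", "had"} for w in before)
        hints["read_pronunciation"] = "/rɛd/" if past else "/riːd/"
    return hints

print(prosody("Have you read the report?"))
# {'pitch_contour': 'rising', 'read_pronunciation': '/rɛd/'}
```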

But of the two, generating the audio output is arguably the harder task. Older synthesizers rely on algorithms to produce individual sounds, resulting in the characteristic robotic voice.

These days, synthesizers generally start with a massive database of audio recordings by actual human beings, splicing together voice segments smoothly into new sentences. While the output sounds less robotic, for every new voice—switching from female to male, for example—the software needs a new dataset of voice snippets to draw upon.
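Here is a minimal sketch of that splicing step, with sine waves standing in for the recorded snippets a real voice database would hold:

```python
import numpy as np

RATE = 16_000  # samples per second

def fake_unit(freq: float, dur: float = 0.15) -> np.ndarray:
    """Stand-in for a recorded voice snippet (a real database stores speech)."""
    t = np.linspace(0.0, dur, int(RATE * dur), endpoint=False)
    return np.sin(2 * np.pi * freq * t)

# One database per voice: every unit the device might ever need to say.
voice_db = {"HH": fake_unit(300), "EH": fake_unit(440),
            "L": fake_unit(350), "OW": fake_unit(520)}

def splice(units, db, fade=0.01):
    """Concatenate unit waveforms, crossfading to smooth the joins."""
    n = int(RATE * fade)
    ramp = np.linspace(0.0, 1.0, n)
    out = db[units[0]].copy()
    for name in units[1:]:                 # a missing unit raises KeyError:
        nxt = db[name]                     # the device "stumbles"
        out[-n:] = out[-n:] * (1 - ramp) + nxt[:n] * ramp
        out = np.concatenate([out, nxt[n:]])
    return out

audio = splice(["HH", "EH", "L", "OW"], voice_db)  # a crude "hello"
print(f"{audio.size / RATE:.2f} seconds of spliced audio")
```

Because the output is stitched from one speaker’s recordings, swapping voices means rebuilding the entire database, which is the pain point described next.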

Because the voice databases need to contain every possible word the device uses to communicate with its user (often in different intonations), they’re a huge pain to construct. And if there’s a word not in the database, the device stumbles.

Voice DNA

Lyrebird’s system takes a different approach.

By listening to voice recordings the AI learns the pronunciation of letters, phonemes and words. Like someone learning a new language, Lyrebird then uses its learned examples to extrapolate new words and sentences—even ones it’s never encountered before—and layer on emotions such as anger, sympathy or stress.

At its core, Lyrebird is a multi-layer artificial neural network, a type of software that loosely mimics the human brain. Like their biological counterparts, artificial networks “learn” through example, tweaking the connections between each “neuron” until the network generates the correct output. Think of it as tuning a guitar.
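A toy version of that learn-by-example loop (generic gradient descent on a made-up task, not Lyrebird’s unpublished architecture): the network guesses, measures its error, and nudges every connection slightly in the direction that reduces it.

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy task: 200 examples the network must learn to classify.
X = rng.standard_normal((200, 8))
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]

# Two layers of "neurons" with randomly initialized connections:
W1, b1 = 0.5 * rng.standard_normal((8, 16)), np.zeros(16)
W2, b2 = 0.5 * rng.standard_normal((16, 1)), np.zeros(1)
lr = 0.5

for step in range(2000):
    h = np.tanh(X @ W1 + b1)                     # hidden neurons respond
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # the network's current guess
    grad_out = (p - y) / len(X)                  # how wrong each guess is
    grad_h = (grad_out @ W2.T) * (1.0 - h**2)    # error traced back a layer
    # Tweak every connection slightly to reduce the error:
    W2 -= lr * h.T @ grad_out;  b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h;    b1 -= lr * grad_h.sum(axis=0)

print(f"training accuracy: {((p > 0.5) == y).mean():.0%}")
```

Each pass through the loop is one round of tuning the guitar: a small correction to every string at once.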

Similar to other deep learning technologies, the initial training requires hours of voice recordings and many iterations. But once trained on one person’s voice, the AI can produce a passable mimic of another voice at thousands of sentences per second—using just a single minute of a new recording.

That’s because different voices share a lot of similar information that is already “stored” within the artificial network, explains de Brébisson. So it doesn’t need many new examples to pick up on the intricacies of another person’s speaking voice—his or her voice “DNA,” so to speak.
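One common way to realize that, assumed here since Lyrebird hasn’t published its method, is to train the shared weights once and then freeze them, fitting only a small per-speaker vector to the minute of new audio:

```python
import numpy as np

rng = np.random.default_rng(7)

# Frozen weights learned once, from many hours of many voices:
W_shared = rng.standard_normal((32, 4))

def synthesize(features, speaker_vec):
    """Toy model: a voice is a small vector steering the shared weights."""
    return features @ (W_shared @ speaker_vec)

# One "minute" of a new speaker: 60 feature frames plus the matching audio.
target = rng.standard_normal(4)                   # the unknown voice "DNA"
feats = rng.standard_normal((60, 32))
wav = synthesize(feats, target) + 0.01 * rng.standard_normal(60)

# Fit only the 4 speaker parameters; the shared weights never change.
A = feats @ W_shared
speaker_vec, *_ = np.linalg.lstsq(A, wav, rcond=None)
print(np.round(speaker_vec - target, 3))          # near zero: voice captured
```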

Although the generated recordings still have an uncanny valley quality, de Brébisson stresses that it’ll likely go away with more training examples.

“Sometimes we can hear a little bit of noise in our samples, it’s because we trained our models on real-world data and the model is learning the background noise or microphone noise,” he says, adding that the company is working hard to remove these artifacts.

Adding little “extra” sounds like lip smacking or a sharp intake of breath could also add to the veracity of machine speech.

These “flaws” actually carry meaning and are picked up by the listener, says speech researcher Dr. Timo Baumann at Carnegie Mellon University, who is not involved with Lyrebird.

But both de Brébisson and Baumann agree that these hurdles are surmountable. Machines will be able to convincingly copy a human voice in real time in just a few years, they say.

Fake speech

De Brébisson acknowledges that mimicking someone else’s voice can be highly problematic.

Fake news is the least of it. AI-generated voice recordings could be used for impersonation, raising security and privacy concerns. Voice-based security systems would no longer be safe.

While Lyrebird is working on a “voice print” that will easily tell apart originals and generated recordings, it’s unreasonable to expect people to look for such a mark in every recording they come across.

Then there are slightly less obvious concerns. Baumann points out that humans instinctively trust sources with a voice, especially if it’s endowed with emotion. Compared to an obvious synthetic voice, Lyrebird is much easier to connect with, like talking to an understanding friend. While these systems could help calm people down during a long wait on the phone, for example, they’re also great tools for social engineering.

People would be more likely to divulge personal information or buy things the AI recommends, says Baumann.

In a brief statement on their website, Lyrebird acknowledges these ethical concerns, but also stresses that ignoring the technology isn’t the way to go—rather, education and awareness are key, much like when Photoshop first came into the social consciousness.

“We hope that everyone will soon be aware that such technology exists and that copying the voice of someone else is possible,” they write, adding that “by releasing our technology publicly and making it available to anyone, we want to ensure that there will be no such risks.”

Lyrebird’s view may be too optimistic: the risks can’t simply be discounted. Without doubt, fake audio clips are coming, and left unchecked, they could wreak havoc. But although people are still adapting to fake images, fake news and other distortions that warp our reality, the discussion about alternative facts has entered the societal mainstream, and forces have begun pushing back.

Like the delicate voice-mimicking songbird it’s named after, Lyrebird is a wonder—one that we’ll have to handle with thought and care.

Image Credit: Shutterstock

from Singularity Hub http://ift.tt/2rhpu28

When Artificial Intelligence Botches Your Medical Diagnosis, Who’s to Blame?

By Quartz

May 24, 2017


Artificial intelligence is not just creeping into our personal lives and workplaces — it’s also beginning to appear in the doctor’s office. The prospect of being diagnosed by an AI might feel foreign and impersonal at first, but what if you were told that a robot physician was more likely to give you a correct diagnosis?


AI raises profound questions regarding medical responsibility. Usually when something goes wrong, it is a fairly straightforward matter to determine blame. A misdiagnosis, for instance, would likely be the responsibility of the presiding physician. A faulty machine or medical device that harms a patient would likely see the manufacturer or operator held to account. What would this mean for an AI?


From Quartz

View Full Article


Georgia Professor Uses Artificial Intelligence To Teach Class

May 24, 2017


Artist’s impression of an artificial intelligence-based teacher: a computer science professor at the Georgia Institute of Technology is using artificial intelligence to interact with students. Image credit: thefuturesagency.com

Imagine your teacher were a computer. A computer science professor at Georgia Tech is using artificial intelligence — or AI — to help him answer some of his students’ frequently asked questions.


Tasnim Shamma from member station WABE in Atlanta has the story.


 


From WBUR

View Full Article


 


Tawny Crazy Ant is terrorizing Mississippi’s Gulf Coast – WJTV


JACKSON, Mississippi (WJTV) – An insect that can cause major damage inside your home is terrorizing the Gulf Coast and entomologists are trying to keep it from spreading to the rest of Mississippi. The “tawny crazy ant” has not been around for long

from ants – Google News http://ift.tt/2qVtbte

Seaweed is being used to kill invasive ants – ZME Science


When summer comes, the ants invade houses. Particularly bad are the Argentine ants, an invasive species which has spread all around the world. So what to do if your house gets swarmed? Spraying and laying commercial pesticides aren’t ideal because …

from ants – Google News http://ift.tt/2rTI1Og

Ants can give you a buggy spring – The Newark Advocate


Ants are part of the Formicidae family of insects. As part of the ant’s defenses, formic acid is injected through a sting or sprayed on an attacker. Formic acid is useful in many ways and small amounts of it are produced in humans as the chemical

from ants – Google News http://ift.tt/2qVvdtp

China censored Google’s AlphaGo match against world’s best Go player

DeepMind’s board game-playing AI, AlphaGo, may well have won its first game against the Go world number one, Ke Jie, from China – but most Chinese viewers could not watch the match live.

The Chinese government had issued a censorship notice to broadcasters and online publishers, warning them against livestreaming Tuesday’s game, according to China Digital Times, a site that regularly posts such notices in the name of transparency.

“Regarding the go match between Ke Jie and AlphaGo, no website, without exception, may carry a livestream,” the notice read. “If one has been announced in advance, please immediately withdraw it.” The ban did not just cover video footage: outlets were banned from covering the match live in any way, including text commentary, social media, or push notifications.

It appears the government was concerned that 19-year-old Ke, who lost the first of three scheduled games by a razor-thin half-point margin, might suffer a more damaging defeat that would hurt the national pride of a state that holds Go close to its heart.

After the game Ke said AlphaGo had become too strong for humans. “I feel like his game is more and more like the ‘Go god,’” he said. “Really, it is brilliant.”

The ban underscores the esteem in which Go is held across east Asia, where it has been played in more or less unmodified form for over 2,000 years. Invented in China in 500 BC, it was considered one of the four arts a scholarly Chinese gentleman should master, along with playing the guqin, calligraphy and painting.

Go was formalised in Japan, where the game arrived in the 7th century. The country developed a system of Go houses, for training and supporting players, and for hundreds of years the houses would compete in the annual castle games for the privilege of playing in the shogun’s presence.

In Korea, where the game arrived in the 5th century, high-level Go players are celebrities in their own right. DeepMind’s first high-profile victory came against Lee Sedol, the Roger Federer of the game, when the AI won four of five matches in a series covered by media from across the region.

Despite the ban, several Chinese streaming sites such as bilibili.com offered versions of the game for viewers to watch live, replicating it move by move on their own boards. None had actual shots from the event, however. According to business news site Quartz, one Shanghai-based livestreaming site had sent staff to the venue, only to withdraw them on Friday after receiving the ban.

DeepMind is streaming all three games of the match live on YouTube. But the video site is blocked in China, along with the rest of Google’s services.

from Artificial intelligence (AI) | The Guardian http://ift.tt/2qjxXNj

Let the Masses Live In VR–A Socioeconomic Exploration

As technology continues its exponential gains and catapults us into the realms of augmented and virtual reality, a chasm is forming between those looking to live unplugged as they cultivate a life of digital detox and those anxiously awaiting the moment they can live the majority of their lives exploring alternate realities from the comfort of their own homes.

from Ethical Technology http://ift.tt/2rSsTQX