Don’t worry about opaque algorithms; you already don’t know what anything is doing, or why

Machine learning algorithms are opaque, difficult to audit, unconstrained by ethics, and there’s always the possibility they’ll do the unthinkable when facing the unexpected. But that’s true of most of our society’s code base, and, in a way, they are its most secure part, because we haven’t yet talked ourselves into a false sense of security about them.

There’s a technical side to this argument: contemporary software is so complex, and the pressures under which it’s developed so strong, that it’s materially impossible to make sure it’ll always behave the way you want it to. Your phone isn’t supposed to freeze while you’re making a call, and your webcam shouldn’t send real-time surveillance to some guy in Ukraine, and yet here we are.

But that’s not the biggest problem. Yes, some Toyota vehicles decided on their own to accelerate at inconvenient times because their software systems were mind-bogglingly and unnecessarily complex, but nobody outside the company knew how complex they were, because it was so legally difficult to get access to the code that even after the crashes it had to be inspected by an outside expert under conditions usually reserved for high-level intelligence briefings.

And there was the hidden code in VW engines designed to fool emissions tests, the programs Uber uses to track you even while it says it doesn’t, and Facebook’s convenient tools to help advertisers target the emotionally vulnerable.

The point is, the main problem right now isn’t what a self-driving car _might_ do when it has to make a complex ethical choice guided by ultimately unknowable algorithms, but what the car is doing at every other moment, reflecting ethical choices made by corporate executives that might be unknowable in a philosophical, existential sense, but are worryingly familiar in an empirical one. You don’t know most of what your phone is doing at any given time, not to mention your other devices; it can be illegal to try to figure it out, and it can be illegal, if not impossible, to change it even if you did.

And a phone is a thing you hold in your hand and can, at least in theory, put in a drawer somewhere if you want to have a discreet chat with a Russian diplomat. Even more serious are all the hidden bits of software running in the background, like the ones that can automatically flag you as a national security risk, or that are constantly weighing whether you should be allowed to turn on your tractor. Even if the organization that developed or runs the software did its job uncommonly well and knows what it’s doing down to the last bit, you don’t, and most likely never will.

This situation, perhaps first and certainly most forcefully argued against by Richard Stallman, is endemic to our society, and absolutely independent of the otherwise world-changing Open Source movement. Very little of the code in our lives runs on something resembling a personal computer, after all, and even when it does, it mostly works by connecting to remote infrastructures whose key algorithms are jealously guarded business secrets. Emphasis on secret, with a hidden subtext of especially from users.

So let’s not get too focused on the fact that we don’t really understand how a given neural network works. It might suddenly decide to accelerate your car, but “old-fashioned” code could too, and as a matter of fact did; and in any case there’s very little practical difference between not knowing what something is doing because it’s a cognitively opaque piece of code, and not knowing what something is doing because the company controlling the thing you bought doesn’t want you to know, and has the law on its side if it decides to send you to jail for trying to find out.

Going forward, our approach to software as users, and, increasingly, as citizens, cannot but be one of empirical paranoia. Just assume everything around you is potentially doing everything it’s physically capable of (noting that being remotely connected to huge amounts of computational power makes even simple hardware far more powerful than you’d think), and if any of that is something you don’t find acceptable, take external steps to prevent it, above and beyond toggling a dubiously effective setting somewhere. Recent experience shows that FOIA requests, lawsuits, and the occasional whistleblower might be more important for adding transparency to our technological infrastructure than your choice of operating system or clicking a “do not track” checkbox.

from Ethical Technology http://ift.tt/2qh0Ees

New AI Mimics Any Voice in a Matter of Minutes

http://ift.tt/2rQzo7P

The story starts out like a bad joke: Obama, Clinton and Trump walk into a bar, where they applaud a new startup called Lyrebird, based in Montreal, Canada.

Skeptical? Here’s a recording.

http://ift.tt/2rQzo7P

Sound clip credit: Lyrebird

If the scenario seems too bizarre to be real, you’re right—it’s not. The entire recording was generated by a new AI with the ability to mimic natural conversation, at a rate much faster than any previous speech synthesizer.

Announced last week, Lyrebird’s program analyzes a single minute of voice recording and extracts a person’s “speech DNA” using machine learning. From there, it adds an extra layer of emotion or special intonation, until it nails a person’s voice, tone and accent—be it Obama, Trump or even you.

While Lyrebird still retains the slight but noticeable robotic buzz characteristic of machine-generated speech, add some smartly placed background noise to cover up the distortion, and the recordings could pass as genuine to unsuspecting ears.

Creeped out? You’re not alone. In an era where Photoshopped images run wild and fake news swarms social media, a program that can make anyone say anything seems like a catalyst for more trouble.

Yet people are jumping on. According to Alexandre de Brébisson, a founder of the company and current PhD student at the University of Montreal, their website scored 100,000 visits on launch day, and the team has attracted the attention of “several famous investors.”

What’s the draw?

Synthetic speech

While machine-fabricated speech sounds like something straight out of a Black Mirror episode, speech synthesizers—like all technologies—aren’t inherently malicious.

For people with speech disabilities or paralysis, these programs give them a voice to communicate. For the blind, they provide a way to tap into the vast text-based resources on paper or online. AI-based personal assistants like Siri and Cortana rely on speech synthesizers to create a more natural interface with users, while audiobook companies may one day utilize the technology to automatically and cheaply generate products.

“We want to improve human-computer interfaces and create completely new applications for speech synthesis,” explains de Brébisson to Singularity Hub.

Lyrebird is only the latest push in a long line of research towards natural-sounding speech synthesizers.

The core goal of these programs is to transform text into speech in real time. It’s a two-pronged problem: for one, the AI needs to “understand” the different components of the text; for another, it has to generate appropriate sounds for that text in a way that isn’t cringe-inducing.

Analyzing text may seem like a strange way to tackle speech, but much of our intonation for words, phrases and sentences is based on what the sentence says. For example, questions usually end with a rising pitch, and words like “read” are pronounced differently depending on their tense.
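To make that concrete, here is a minimal, purely illustrative sketch of the kind of text analysis a synthesizer’s front-end performs. The rules and function names are invented for this example and are not Lyrebird’s actual pipeline.

```python
# Minimal, illustrative sketch of a TTS text front-end (not Lyrebird's code):
# it tags sentence-level intonation and disambiguates the homograph "read".

def analyze(sentence: str) -> dict:
    """Return toy prosody and pronunciation hints for one sentence."""
    tokens = sentence.rstrip("?.! ").split()
    hints = {
        # Questions usually end with a rising pitch contour.
        "final_pitch": "rising" if sentence.strip().endswith("?") else "falling",
        "pronunciations": {},
    }
    for i, word in enumerate(tokens):
        if word.lower() == "read":
            # Crude tense heuristic: a preceding "have/has/had" suggests the
            # past-participle pronunciation /red/ rather than /reed/.
            past = any(t.lower() in {"have", "has", "had"} for t in tokens[:i])
            hints["pronunciations"][i] = "R EH1 D" if past else "R IY1 D"
    return hints

print(analyze("Have you read the report?"))
# {'final_pitch': 'rising', 'pronunciations': {2: 'R EH1 D'}}
```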

But of the two, generating the audio output is arguably the harder task. Older synthesizers rely on algorithms to produce individual sounds, resulting in the characteristic robotic voice.

These days, synthesizers generally start with a massive database of audio recordings by actual human beings, splicing together voice segments smoothly into new sentences. While the output sounds less robotic, for every new voice—switching from female to male, for example—the software needs a new dataset of voice snippets to draw upon.

Because the voice databases need to contain every possible word the device uses to communicate with its user (often in different intonations), they’re a huge pain to construct. And if there’s a word not in the database, the device stumbles.
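A toy sketch of this database-splicing approach shows both how it works and why it stumbles. The snippets and function below are placeholders, not any real product’s code.

```python
# Toy illustration of the concatenative ("database-splicing") approach:
# sentences are stitched together from prerecorded per-word snippets,
# and the system fails on any word that was never recorded.

AUDIO_DB = {
    # word -> recorded waveform (placeholder sample values here)
    "your": [0.1, 0.2],
    "call": [0.3, 0.1],
    "is": [0.0, 0.2],
    "important": [0.4, 0.3],
}

def synthesize(sentence):
    samples = []
    for word in sentence.lower().split():
        snippet = AUDIO_DB.get(word)
        if snippet is None:
            # Out-of-vocabulary word: a real device would stumble or fall back
            # to a robotic letter-by-letter spell-out.
            raise KeyError(f"no recording for {word!r}; the database must cover every word")
        samples.extend(snippet)  # naive splice; real systems smooth the joins
    return samples

print(synthesize("your call is important"))  # works: every word is in the database
# synthesize("your call is imperative")      # raises KeyError: word never recorded
```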

Voice DNA

Lyrebird’s system takes a different approach.

By listening to voice recordings, the AI learns the pronunciation of letters, phonemes and words. Like someone learning a new language, Lyrebird then uses these learned examples to extrapolate new words and sentences—even ones it has never encountered before—and to layer emotions such as anger, sympathy or stress on top.

At its core, Lyrebird is a multi-layer artificial neural network, a type of software that loosely mimics the human brain. Like their biological counterparts, artificial networks “learn” through example, tweaking the connections between each “neuron” until the network generates the correct output. Think of it as tuning a guitar.
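For readers who want to see what that “tweaking” looks like, here is a deliberately tiny sketch of a single connection weight being nudged by gradient descent. It is a generic illustration of how neural networks learn, not Lyrebird’s actual model.

```python
# A deliberately tiny illustration of "tweaking the connections": a single
# weight is nudged by gradient descent until the network's output is correct.

def train(examples, weight=0.0, learning_rate=0.1, epochs=100):
    """Fit y ≈ weight * x by repeatedly nudging the weight to reduce error."""
    for _ in range(epochs):
        for x, target in examples:
            prediction = weight * x
            error = prediction - target
            # The gradient of the squared error w.r.t. the weight is 2 * error * x;
            # step against it. This is the "tuning the guitar" part.
            weight -= learning_rate * 2 * error * x
    return weight

print(train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]))  # converges to ~2.0
```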

Similar to other deep learning technologies, the initial training requires hours of voice recordings and many iterations. But once trained on one person’s voice, the AI can produce a passable mimic of another voice at thousands of sentences per second—using just a single minute of a new recording.

That’s because different voices share a lot of similar information that is already “stored” within the artificial network, explains de Brébisson. So it doesn’t need many new examples to pick up on the intricacies of another person’s speaking voice—his or her voice “DNA,” so to speak.
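One plausible way to picture this is a single synthesis network shared by all speakers and conditioned on a compact per-speaker embedding, the “voice DNA.” The sketch below is hypothetical from end to end; Lyrebird has not published its implementation, and every name and shape here is invented for illustration.

```python
# Hypothetical sketch of the "voice DNA" idea: one synthesis network shared by
# all speakers, conditioned on a compact per-speaker embedding extracted from a
# short recording. All names, shapes and internals are stand-ins.

import numpy as np

EMBEDDING_DIM = 16  # the compact per-speaker "speech DNA"

def extract_speaker_embedding(short_recording: np.ndarray) -> np.ndarray:
    """Stand-in for a trained encoder mapping ~1 minute of audio to a vector."""
    seed = int(abs(short_recording).sum() * 1e6) % (2**32)
    return np.random.default_rng(seed).normal(size=EMBEDDING_DIM)

def synthesize(text: str, speaker_embedding: np.ndarray) -> np.ndarray:
    """Stand-in for the shared decoder: text + speaker vector -> waveform.
    Knowledge of pronunciation and prosody lives in weights shared across
    speakers, which is why only a minute of new audio is needed to adapt."""
    length = 100 * len(text)
    ramp = np.linspace(0.0, 1.0, length)
    return np.tanh(np.outer(ramp, speaker_embedding).sum(axis=1))

new_voice = extract_speaker_embedding(np.random.rand(16000 * 60))  # ~1 min at 16 kHz
waveform = synthesize("Hello from a brand new voice.", new_voice)
print(waveform.shape)  # one (fake) sample per synthesized time step
```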

Although the generated recordings still have an uncanny valley quality, de Brébisson stresses that it’ll likely go away with more training examples.

“Sometimes we can hear a little bit of noise in our samples, it’s because we trained our models on real-world data and the model is learning the background noise or microphone noise,” he says, adding that the company is working hard to remove these artifacts.

Adding little “extra” sounds like lip smacking or the intake of a breath could also add to the veracity of machine speech.

These “flaws” actually carry meaning and are picked up by the listener, says speech researcher Dr. Timo Baumann at Carnegie Mellon University, who is not involved with Lyrebird.

But both de Brébisson and Baumann agree that the remaining hurdles are minor: machines will be able to convincingly copy a human voice in real time in just a few years, they say.

Fake speech

De Brébisson acknowledges that mimicking someone else’s voice can be highly problematic.

Fake news is the least of it. AI-generated voice recordings could be used for impersonation, raising security and privacy concerns. Voice-based security systems would no longer be safe.

While Lyrebird is working on a “voice print” that will easily distinguish original recordings from generated ones, it’s unreasonable to expect people to look for such a mark in every recording they come across.

Then there are slightly less obvious concerns. Baumann points out that humans instinctively trust sources with a voice, especially if it’s endowed with emotion. Compared to an obvious synthetic voice, Lyrebird is much easier to connect with, like talking to an understanding friend. While these systems could help calm people down during a long wait on the phone, for example, they’re also great tools for social engineering.

People would be more likely to divulge personal information or buy things the AI recommends, says Baumann.

In a brief statement on their website, Lyrebird acknowledges these ethical concerns, but also stresses that ignoring the technology isn’t the way to go—rather, education and awareness are key, much as when Photoshop first entered the social consciousness.

“We hope that everyone will soon be aware that such technology exists and that copying the voice of someone else is possible,” they write, adding that “by releasing our technology publicly and making it available to anyone, we want to ensure that there will be no such risks.”

Lyrebird may be too optimistic here; the risks can’t be completely discounted. Without doubt, fake audio clips are coming, and left unchecked, they could wreak havoc. But although people are still adapting to fake images, fake news and other distorted information that warps our reality, the discussion about alternative facts has entered the societal mainstream, and forces have begun pushing back.

Like the delicate voice-mimicking songbird it’s named after, Lyrebird is a wonder—one that we’ll have to handle with thought and care.

Image Credit: Shutterstock

from Singularity Hub http://ift.tt/2rhpu28

Let the Masses Live In VR–A Socioeconomic Exploration

As technology continues its exponential gains and catapults us into the realms of augmented and virtual reality, a chasm is forming between those who are looking to live unplugged as they cultivate a life of digital detoxing and those who are anxiously awaiting the moment they can live a majority of their life exploring alternate realities from the comfort of their own home.

from Ethical Technology http://ift.tt/2rSsTQX

International Roadmap for Devices and Systems (IRDS)

http://ift.tt/2ryNj55

This initiative focuses on an International Roadmap for Devices and Systems (IRDS) through the work of roadmap teams closely aligned with the advancement of the devices and systems industries. Led by an international roadmap committee (IRC), International Focus Teams (IFTs) will collaborate in the development of a roadmap, and engage with other segments of the IEEE, such as Rebooting Computing, and related industry communities, in complementary activities to help ensure alignment and consensus across a range of stakeholders, such as

  • Academia
  • Consortia
  • Industry
  • National laboratories

IEEE, the world’s largest technical professional organization dedicated to advancing technology for humanity, through the IEEE Standards Association (IEEE-SA) Industry Connections (IC) program, supports the IRDS to ensure alignment and consensus across a range of stakeholders to identify trends and develop the roadmap for all of the related technologies in the computer industry.

The IRDS is sponsored by the IEEE Rebooting Computing (IEEE RC) Initiative in consultation and support from many IEEE Operating Units and Partner organizations including:
  • CASS – Circuits and Systems Society
  • CEDA – Council on Electronic Design Automation
  • CPMT – Components, Packaging and Manufacturing Technology Society
  • CS – Computer Society
  • CSC – Council on Superconductivity
  • EDS – Electron Devices Society
  • MAG – Magnetics Society
  • NTC – Nanotechnology Council
  • RS – Reliability Society
  • SSCS – Solid-State Circuits Society
  • SRC – Semiconductor Research Corporation

IEEE Standards Association

A quote from the IEEE president, in The Institute:

“A major new initiative, the International Roadmap for Devices and Systems will ensure alignment and consensus across a range of stakeholders to identify trends and develop the road map for all related technologies in the computer industry. With the launch of the IRDS program, IEEE is taking the lead in building a comprehensive, end-to-end view of the computing ecosystem, including devices, components, systems, architecture, and software.” – Barry Shoop, 2 December 2016, The Institute

IRDS Mission

Identify the roadmap of the electronics industry, from devices to systems and from systems to devices.

Who We Are
Our roadmap executive committee comprises leaders from five regions of the world: Europe, Korea, Japan, Taiwan, and the U.S.A.

The IRDS roadmap IFTs are:

  • Application Benchmarking
  • System and Architecture
  • More Moore
  • Beyond CMOS
  • Heterogeneous Integration (including Systems, Devices, and Packaging)
  • Outside System Connectivity
  • Factory Integration
  • Lithography
  • Metrology
  • Emerging Research Materials
  • Environment, Health, and Safety
  • Yield
  • Test and Test Equipment
  • RF and AMS

Our Subject Matter Experts include industry leaders from these communities:

  • Device and Systems Manufacturers
  • Researchers
  • Suppliers
  • Vendors
  • Computer Manufacturers
  • Process and Device Modeling Companies
  • MEMS and NEMS Manufacturers
  • Semiconductor Process Equipment Manufacturers
  • Integrated Circuit CAD Vendors

Method

  • Each of the IFTs will assess present status and future evolution of the ecosystem in its specific field of expertise and produce a 15-year roadmap
  • This includes: evolution, key challenges, major roadblocks and possible solutions
  • Integration of all the IFTs roadmaps will produce a “big picture overview”
  • This IRDS, in cooperation with RCI, will
    • Identify the technology needs and the key enablers, potential solutions, and areas of innovations in order to resolve challenges and meet the 15-year targets for the industries enabled by the IRDS.
    • Identify any potential cooperation with organizations interested in demonstrating possible solutions

Deliverables

Exploration of this activity includes:

  • Identifying key trends related to devices, systems, and all related technologies by generating a roadmap with a 15-year horizon
  • Determining generic devices and systems needs, challenges, potential solutions, and opportunities for innovation.
  • Encouraging related activities world-wide through collaborative events such as related IEEE conferences and roadmap workshops
  • Visit our Reports page for 2016 white papers.

Policy and Procedures

Interested in participating in one of the Focus Teams? Please contact irds_info@ieee.org.

IRDS Contact: irds_info@ieee.org.

Join our IRDS COLLABRATEC Community!

—Event Producer

from KurzweilAI http://ift.tt/2qUNS6t

IEEE International Conference on Rebooting Computing (ICRC)

http://ift.tt/2qf9Kbu

Rebooting Computing Week begins with a meeting of the International Roadmap for Devices and Systems (IRDS, Nov. 6-7), continues with ICRC 2017 (Nov. 8-9), and ends with the first Industry Summit on the Future of Computing (Nov. 10). Register now!

The goal of the 2nd IEEE International Conference on Rebooting Computing is to discover and foster novel methodologies to reinvent computing technology, including new materials and physics, devices and circuits, system and network architectures, and algorithms and software. The conference seeks input from a broad technical community, with emphasis on all aspects of the computing stack.

—Event Producer

from KurzweilAI http://ift.tt/2q9FCma

Veo Gives Robots ‘Eyes and a Brain’ So They Can Safely Work With People

http://img.youtube.com/vi/Sc5bNXZWJ70/0.jpg

The robots are coming.

Actually, they’re already here. Machines are learning to do tasks they’ve never done before, from locating and retrieving goods from a shelf to driving cars to performing surgery. In manufacturing environments, robots can place an object with millimeter precision over and over, lift hundreds of pounds without getting tired, and repeat the same action constantly for hundreds of hours.

But let’s not give robots all the glory just yet. A lot of things that are easy for humans are still hard or impossible for robots. A three-year-old child, for example, can differentiate between a dog and a cat, or intuitively scoot over when said dog or cat jumps into its play space. A computer can’t do either of these simple actions.

So how do we take the best robots have to offer and the best humans have to offer and combine them to reach new levels of output and performance?

That’s the question engineers at Veo Robotics are working to answer. At Singularity University’s Exponential Manufacturing Summit last week, Clara Vu, Veo’s cofounder and VP of Engineering, shared some of her company’s initiatives and why they’re becoming essential to today’s manufacturing world.

Why we’re awesome

If you think about it, we humans are pretty amazing creatures. Vu pointed out that the human visual system has wide range, precise focus, depth-rich color, and three dimensions. Our hands have five independently articulated fingers, 29 joints, 34 muscles, and 123 tendons—and they’re all covered in skin, a finely grained material sensitive to force, temperature and touch.

Not only do we have all these tools, we have millions of years of evolution behind us that have taught us the best ways to use them. We use them for a huge variety of tasks, and we can adapt them to quickly-changing environments.

Most robots, on the other hand, know how to do one task, the same way, over and over again. Move the assembly line six inches to the right or make the load two pounds lighter, and a robot won’t be able to adapt and carry on.

Like oil and water

In today’s manufacturing environment, humans and robots don’t mix—they’re so different that it’s hard for them to work together. This leaves manufacturing engineers designing processes either entirely for robots, or entirely without them. But what if the best way to, say, attach a door to a refrigerator is to have a robot lift it, a human guide it into place, the robot put it down, and the human tighten its hinges?

Sounds simple enough, but with the big, dumb robots we have today, that’s close to impossible—and the manufacturing environment is evolving in a direction that will make it harder, not easier. “As the number of different things we want to make increases and the time between design and production decreases, we’ll want more flexibility in our processes, and it will be more difficult to use automation effectively,” Vu said.

Smaller, lighter, smarter

For people and robots to work together safely and efficiently, robots need to get smaller, lighter, and most importantly, smarter. “Autonomy is exactly what we need here,” Vu said. “At its core, autonomy is about the ability to perceive, decide and act independently.” An autonomous robot, she explained, needs to be able to answer questions like ‘where am I?’, ‘what’s going on around me?’, ‘what actions are safe?’, and ‘what actions will bring me closer to my goal?’

Veo engineers are working on a responsive system to bring spatial awareness to robots. Depth-sensing cameras give the robot visual coverage, and its software learns to differentiate between the objects around it, to the point that it can be aware of the size and location of everything in its area. It can then be programmed to adjust its behavior to changes in its environment—if a human shows up where a human isn’t supposed to be, the robot can stop what it’s doing to make sure the human doesn’t get hurt.
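The basic behavior can be sketched in a few lines. This is an illustration of the general idea (an assumed keep-out zone checked against depth-camera points), not Veo’s actual system, and the radius, function names and stop-only response are simplifying assumptions.

```python
# Illustrative sketch of the safety behavior described above, not Veo's system:
# if any depth-camera point intrudes into an assumed keep-out zone around the
# robot, the controller halts motion until the space is clear again.

import numpy as np

KEEP_OUT_RADIUS_M = 1.5  # assumed work-zone radius; a real cell is configured individually

def intrusion_detected(point_cloud: np.ndarray, robot_position: np.ndarray) -> bool:
    """point_cloud: (N, 3) array of 3D points fused from the depth cameras."""
    distances = np.linalg.norm(point_cloud - robot_position, axis=1)
    return bool((distances < KEEP_OUT_RADIUS_M).any())

def control_step(point_cloud, robot_position, planned_velocity):
    # Degrade to a safe stop whenever something enters the zone; a real system
    # would also classify what entered and slow down smoothly rather than halt.
    if intrusion_detected(point_cloud, robot_position):
        return np.zeros_like(planned_velocity)  # stop and wait
    return planned_velocity                     # carry on with the task

cloud = np.array([[2.0, 0.5, 1.0], [0.4, 0.3, 1.1]])  # second point is inside the zone
print(control_step(cloud, np.zeros(3), np.array([0.2, 0.0, 0.0])))  # -> [0. 0. 0.]
```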

3D sensors will also play a key part in the system, and Vu mentioned the importance of their declining costs. “Ten years ago, the only 3D sensors that were available were 3D lidars that cost tens of thousands of dollars. Today, because of advances in consumer applications like gaming and gesture recognition, it’s possible to get 3D time-of-flight chipsets for well under a hundred dollars, in quantity. These sensors give us exactly the kind of data we need to solve this problem,” she said.

3D sensors wouldn’t be very helpful without computers that can do something useful with all the data they collect. “Multiple sensors monitoring a large 3D area means millions of points that have to be processed in real time,” Vu noted. “Today’s CPUs, and in particular GPUs, which can perform thousands of computations in parallel, are up to the task.”

A seamless future

Veo’s technology can be integrated with pre-existing robots of various sizes, types, and functionalities. The company is currently testing its prototypes with manufacturing partners, and is aiming to deploy in 2019.

Vu told the audience that industrial robots have a projected compound annual growth rate of 13 percent by 2019, and though collaborative robots account for just a small fraction of the installed base, their projected growth rate by 2019 is 67 percent.

Vu concluded with her vision of a future of seamless robot-human interaction. “We want to allow manufacturers to combine the creativity, flexibility, judgment and dexterity of humans with the strength, speed and precision of industrial robots,” she said. “We believe this will give manufacturers new tools to meet the growing needs of the modern economy.”

Image Credit: Shutterstock

from Singularity Hub http://ift.tt/2qdhT0i

Watch: Where AI Is Today, and Where It’s Going in the Future

2016 was a year of breakthroughs in artificial intelligence. Top-selling holiday gifts, the Amazon Echo and Google Home, featured AI-powered voice recognition; IBM Watson diagnosed cancer; and Google DeepMind’s system AlphaGo cracked the ancient and complex Chinese game of Go sooner than expected.

Neil Jacobstein, faculty chair of Artificial Intelligence and Robotics at Singularity University, hit the audience at Singularity University’s Exponential Manufacturing with some of the more significant updates in AI so far this year.

DeepMind, for example, recently outlined a new method called Elastic Weight Consolidation (EWC) to tackle “catastrophic forgetting” in machine learning. The method helps neural networks retain “memories” of previously learned tasks.
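For the technically curious, the published EWC formulation (Kirkpatrick et al., 2017) captures this by anchoring the weights that mattered for an earlier task A while training on a new task B, using a quadratic penalty weighted by each parameter’s estimated importance (its Fisher information):

```latex
\mathcal{L}(\theta) = \mathcal{L}_B(\theta) + \sum_i \frac{\lambda}{2} F_i \left(\theta_i - \theta^{*}_{A,i}\right)^2
```

Here L_B is the new task’s loss, θ*_A are the weights learned on task A, F_i measures how important weight i was for task A, and λ sets how strongly old “memories” are protected.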

And a project out of Newcastle University is taking object recognition to the next level. The researchers have created a system that’s hooked up to a robotic hand, which is learning how to uniquely approach and pick up different objects. (Think about the impact this technology may have on assembly lines.)

For those worried AI has become overhyped, we sat down with Jacobstein after his talk to hear firsthand about progress in the field of AI, the practical applications of the technology that he’s most excited about, and how we can prepare society for a future of AI.

Image Source: Shutterstock

from Singularity Hub http://ift.tt/2re0uZn