

Deepfake Videos Are Getting Impossibly Good

https://ift.tt/2M9NCd4

Image: Deepfake videos of former U.S. president Barack Obama. Credit: University of Washington

Researchers at Stanford University, working with colleagues at the Technical University of Munich in Germany, the University of Bath in the U.K., France-based Technicolor, and other institutions, have developed an artificial intelligence-based system that uses input video to create photorealistic reanimations of portrait videos.

Data from source videos featuring a source actor is used to manipulate the portrait video of a target actor.

In addition, the new Deep Video Portraits system enables a range of movements, including full three-dimensional head positioning, head rotation, eye gaze, and eye blinking.

Deep Video Portraits uses generative neural networks to take data from the signal models and calculate the photorealistic frames for a given target actor; secondary algorithms are used to correct glitches, giving the videos a highly realistic appearance.
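The abstract doesn’t describe the network itself. As a rough illustration of the general technique, a conditional encoder-decoder (in the spirit of pix2pix-style image-to-image translation, not the authors’ actual architecture) maps a synthetic rendering of the target’s head pose, expression, and gaze to a photorealistic output frame. All names and layer sizes below are invented for illustration:

```python
# Illustrative sketch only -- NOT the Deep Video Portraits architecture.
# A pix2pix-style conditional generator: map a synthetic rendering of the
# target's head (pose, expression, gaze) to a photorealistic video frame.
import torch
import torch.nn as nn

class RenderToFrame(nn.Module):
    def __init__(self, in_ch=3, out_ch=3, base=64):
        super().__init__()
        # Encoder: downsample the synthetic rendering into a compact code.
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, base * 4, 4, 2, 1), nn.LeakyReLU(0.2),
        )
        # Decoder: upsample back to a full-resolution photorealistic frame.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(base, out_ch, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, rendering):
        return self.dec(self.enc(rendering))

# One 256x256 RGB rendering in, one 256x256 RGB frame out.
frame = RenderToFrame()(torch.randn(1, 3, 256, 256))
print(frame.shape)  # torch.Size([1, 3, 256, 256])
```

In the full system, the secondary glitch-correcting algorithms the abstract mentions would then clean up the generator’s remaining artifacts.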

The research will be presented in August at the ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference (SIGGRAPH 2018) in Vancouver, Canada.

From Gizmodo
View Full Article

 

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


A Robot Has Performed Eye Surgery on Humans for the First Time

https://ift.tt/2M7pXtS

Image: The robot is guided by the surgeon. Credit: University of Oxford

Researchers at the University of Oxford in the U.K., working with Dutch medical robotics firm Preceye, have developed a robot that can perform eye surgery on humans.

The team used the robot to help six participants who needed a membrane removed from their retina to improve their vision. The procedure involves excising a collection of cells that have clumped together.

The robot has a movable arm directed with a joystick-style controller; it can be fitted with various surgical instruments and filters out tremors from the surgeon’s hand.
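The article doesn’t say how the tremor filtering works. One standard approach in teleoperated surgical systems is to scale motion down and low-pass filter the command stream, since physiological hand tremor sits roughly in the 8 to 12 Hz band while deliberate surgical motion is much slower. A minimal sketch under those assumptions (the sample rate, cutoff, and scaling factor are invented for illustration):

```python
# Hypothetical sketch, not Preceye's implementation: tremor suppression by
# motion scaling plus a Butterworth low-pass filter on joystick commands.
import numpy as np
from scipy.signal import butter, lfilter

FS = 500.0      # command sample rate in Hz (assumed)
CUTOFF = 5.0    # low-pass cutoff in Hz, below the ~8-12 Hz tremor band
SCALE = 0.05    # 20:1 motion scaling: 1 mm at the joystick -> 50 um at the tool

b, a = butter(2, CUTOFF / (FS / 2))  # 2nd-order low-pass, normalized cutoff

def tool_commands(joystick_positions):
    """Map raw joystick positions to smoothed, scaled tool positions."""
    smoothed = lfilter(b, a, joystick_positions)
    return SCALE * smoothed

# Demo: a slow 0.5 Hz deliberate sweep contaminated with 10 Hz tremor.
t = np.arange(0, 2, 1 / FS)
raw = 0.01 * np.sin(2 * np.pi * 0.5 * t) + 0.002 * np.sin(2 * np.pi * 10 * t)
print(tool_commands(raw)[:5])
```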

Twelve participants received the surgery, with six being worked on by the robot, and six receiving treatment from a human surgeon; all the operations were deemed successful.

Although the robotic approach took nearly three times as long as performing the surgery manually, the researchers attribute this to the fact that the surgeons were new to using the robot and were being cautious.

From New Scientist
View Full Article – May Require Paid Subscription

 

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


Why We Need to Fine-Tune Our Definition of Artificial Intelligence

https://ift.tt/2lns3e4

Sophia’s uncanny-valley face, made of Hanson Robotics’ patented Frubber, is rapidly becoming an iconic image in the field of artificial intelligence. She has been interviewed on shows like 60 Minutes, was made a Saudi citizen, and has even appeared before the United Nations. Every media appearance sparks comments about how artificial intelligence is going to completely transform the world. This is pretty good PR for a chatbot in a robot suit.

But that fame also rides on the hype around artificial intelligence and, more importantly, on people’s uncertainty about what constitutes artificial intelligence, what can feasibly be done with it, and how close various milestones may be.

Indeed, there are several competing definitions of artificial intelligence.

For example, there’s the cultural idea (from films like Ex Machina) of a machine with human-level artificial general intelligence. But human-level intelligence or performance is also an important benchmark for those who develop software that aims to mimic narrow aspects of human intelligence, such as medical diagnostics.

The latter software might be referred to as narrow AI, or weak AI. Weak it may be, but it can still disrupt society and the world of work substantially.

Then there’s the philosophical idea, championed by Ray Kurzweil, Nick Bostrom, and others, of a recursively improving superintelligent AI that would eventually outrank human intelligence the way we outrank bacteria. Such a scenario would clearly change the world in ways that are difficult to imagine and harder to quantify; weighty tomes, epitomized by those of Bostrom and Max Tegmark, are devoted to studying how to navigate the perils, pitfalls, and possibilities of this future.

This, more often than not, is the scenario Stephen Hawking and various Silicon Valley luminaries have in mind when they warn of AI as an existential risk.

Those working on superintelligence as a hypothetical future may lament for humanity when people take Sophia seriously. Yet without hype surrounding the achievements of narrow AI in industry, and the immense advances in computational power and algorithmic complexity driven by these achievements, they may not get funding to research AI safety.

Some of those who work on algorithms at the front line find the whole superintelligence debate premature, casting fear and uncertainty over work that has the potential to benefit humanity. Others even call it a dangerous distraction from the very real problems that narrow AI and automation will pose, although few of them actually work in the field. But even as they attempt to draw this distinction, surely some of their VC funding and share prices rely on the idea that if superintelligent AI is possible, and as world-changing as everyone believes it will be, Google might get there first. Such dreams may well be what drive people to join them.

Yet the ambiguity is stark. Someone working on, say, MIT Intelligence Quest or Google Brain might be attempting to reach AGI by studying human psychology and learning, or animal neuroscience, perhaps simulating the simple brain of a nematode worm. Another researcher, whom we might consider “narrow” in focus, trains a neural network to diagnose cancer with higher accuracy than any human.

Where should something like Sophia, a chatbot that flatters to deceive as a general intelligence, sit? Its creator says: “As a hard-core transhumanist I see these as somewhat peripheral transitional questions, which will seem interesting only during a relatively short period of time before AGIs become massively superhuman in intelligence and capability. I am more interested in the use of Sophia as a platform for general intelligence R&D.” This illustrates a further source of confusion: people working in the field disagree about the end goal of their work, how close an AGI might be, and even what artificial intelligence is.

Stanford’s Jerry Kaplan is one of those who lay some of the blame at the feet of AI researchers themselves. “Public discourse about AI has become untethered from reality in part because the field doesn’t have a coherent theory. Without such a theory, people can’t gauge progress in the field, and characterizing advances becomes anyone’s guess.” He would prefer a less mysticism-loaded term like “anthropic computing.” Defining intelligence is difficult enough, but efforts like Stanford’s AI Index go some way towards establishing a framework for tracking progress in different fields.

The ambiguity and confusion surrounding AI are part of a broader trend: a combination of marketing hype and the truly impressive pace of technology can cause us to overestimate our own technological capabilities or achievements. In artificial intelligence, a field that requires highly valued expertise and expensive hardware, the future remains unevenly distributed. For all the hype over renewables in the last 30 years, fossil fuels have only declined from providing 88 percent of our energy to 85 percent.

We can underestimate the vulnerabilities. How many people have seen videos of Sophia or Atlas or heard hype about AlphaGo? Okay, now how many know that some neural networks can be fooled by adversarial examples that could be printed out as stickers? Overestimating what technology can do can leave you dangerously dependent on it, or blind to the risks you’re running.
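For a sense of how those sticker-style attacks work, the classic recipe nudges every pixel in whichever direction most increases the classifier’s loss. Below is a minimal sketch of the fast gradient sign method (FGSM), assuming a recent PyTorch/torchvision; the input image and class label are stand-ins:

```python
# Minimal fast gradient sign method (FGSM) sketch: perturb an image along the
# sign of the loss gradient so a classifier misreads it, while the change
# stays almost invisible to a human. Inputs here are random stand-ins.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm(image, label, epsilon=0.03):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)  # stand-in for a real photo
y = torch.tensor([281])         # ImageNet class 281: "tabby cat"
print(model(x).argmax(1).item(), model(fgsm(x, y)).argmax(1).item())
```

Printed stickers are a physical-world variant of the same idea: the perturbation is confined to a patch and optimized to survive printing and camera capture.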

At the same time, there is a very real risk that technological capacities and impacts are underestimated, or missed entirely. Take the recent controversy over social media engineering in the US election: no one can agree on the impact that automated “bots” have had. Refer to these algorithms as “artificial intelligence,” and people will think you’re a conspiracy theorist. Yet they can still have a societal impact.

Those who work on superintelligence argue that development could accelerate rapidly, that we could be in the knee of an exponential curve. Given that the problem they seek to solve (“What should an artificial superintelligence optimize?”) is dangerously close to “What should the mind of God look like?”, they might need all the time they can get.

We urgently need to move away from an artificial dichotomy between techno-hype and techno-fear; oscillating from one to the other is no way to ensure safe advances in technology. We need to communicate with those at the forefront of AI research in an honest, nuanced way and listen to their opinions and arguments, preferably without using a picture of the Terminator in the article.

Those who work with AI and robotics should ensure they don’t mislead the public. We need to ensure that policymakers have the best information possible. Luckily, groups like OpenAI are helping with this.

Algorithms are already reshaping our society; regardless of where you think artificial intelligence is going, a confused response to its promises and perils is no good thing.

Image Credit: vs148 / Shutterstock.com

from Singularity Hub https://ift.tt/2K30oJV

Controlling robots with brainwaves and hand gestures

https://ift.tt/2lkM8S4

Getting robots to do things isn’t easy: usually scientists have to either explicitly program them or get them to understand how humans communicate via language.

But what if we could control robots more intuitively, using just hand gestures and brainwaves?

A new system spearheaded by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to do exactly that, allowing users to instantly correct robot mistakes with nothing more than brain signals and the flick of a finger.

Building off the team’s past work focused on simple binary-choice activities, the new work expands the scope to multiple-choice tasks, opening up new possibilities for how human workers could manage teams of robots.

By monitoring brain activity, the system can detect in real time if a person notices an error as a robot does a task. Using an interface that measures muscle activity, the person can then make hand gestures to scroll through and select the correct option for the robot to execute.

The team demonstrated the system on a task in which a robot moves a power drill to one of three possible targets on the body of a mock plane. Importantly, they showed that the system works on people it’s never seen before, meaning that organizations could deploy it in real-world settings without needing to train it on users.

“This work combining EEG and EMG feedback enables natural human-robot interactions for a broader set of applications than we’ve been able to do before using only EEG feedback,” says CSAIL director Daniela Rus, who supervised the work. “By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”

PhD candidate Joseph DelPreto was lead author on a paper about the project alongside Rus, former CSAIL postdoctoral associate Andres F. Salazar-Gomez, former CSAIL research scientist Stephanie Gil, research scholar Ramin M. Hasani, and Boston University professor Frank H. Guenther. The paper will be presented at the Robotics: Science and Systems (RSS) conference taking place in Pittsburgh next week.

Intuitive human-robot interaction

In most previous work, systems could generally only recognize brain signals when people trained themselves to “think” in very specific but arbitrary ways and when the system was trained on such signals. For instance, a human operator might have to look at different light displays that correspond to different robot tasks during a training session.

Not surprisingly, such approaches are difficult for people to handle reliably, especially if they work in fields like construction or navigation that already require intense concentration.

Meanwhile, Rus’ team harnessed the power of brain signals called “error-related potentials” (ErrPs), which researchers have found to naturally occur when people notice mistakes. If there’s an ErrP, the system stops so the user can correct it; if not, it carries on.

“What’s great about this approach is that there’s no need to train users to think in a prescribed way,” says DelPreto. “The machine adapts to you, and not the other way around.”

For the project the team used “Baxter,” a humanoid robot from Rethink Robotics. With human supervision, the robot went from choosing the correct target 70 percent of the time to more than 97 percent of the time.

To create the system, the team combined electroencephalography (EEG) for brain activity with electromyography (EMG) for muscle activity, placing a series of electrodes on users’ scalps and forearms.

Both metrics have some individual shortcomings: EEG signals are not always reliably detectable, while EMG signals can sometimes be difficult to map to motions that are any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and makes it possible for the system to work on new users without training.
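Put together, the described flow reduces to a short supervisory loop: the robot commits to a target, the EEG stream is checked for an error-related potential, and on detection the robot halts while an EMG gesture selects the correction. The skeleton below is a hypothetical reading of that loop; the two classifier functions are stand-ins, not the paper’s pipeline:

```python
# Hypothetical skeleton of the EEG+EMG supervisory loop described above.
# The classifier functions are stand-ins, not the paper's actual pipeline.
from dataclasses import dataclass

@dataclass
class Robot:
    targets: list
    def move_to(self, target): print(f"moving to {target}")
    def halt(self): print("halting")

def errp_detected(eeg_window) -> bool:
    """Stand-in: classify a short EEG epoch recorded as the robot commits."""
    return False

def gesture_selection(emg_stream, options) -> str:
    """Stand-in: map wrist flexes to scrolling through options, then select."""
    return options[0]

def supervise(robot, eeg, emg, chosen_target):
    robot.move_to(chosen_target)
    if errp_detected(eeg):              # the human noticed a mistake
        robot.halt()
        corrected = gesture_selection(emg, robot.targets)
        robot.move_to(corrected)        # resume with the corrected target

supervise(Robot(["left wing", "fuselage", "tail"]), eeg=None, emg=None,
          chosen_target="fuselage")
```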

“By looking at both muscle and brain signals, we can start to pick up on a person’s natural gestures along with their snap decisions about whether something is going wrong,” says DelPreto. “This helps make communicating with a robot more like communicating with another person.”

The team says that they could imagine the system one day being useful for the elderly, or workers with language disorders or limited mobility.

“We’d like to move away from a world where people have to adapt to the constraints of machines,” says Rus. “Approaches like this show that it’s very much possible to develop robotic systems that are a more natural and intuitive extension of us.”

Video: https://www.youtube.com/watch?v=_Or8Lt3YtEA&feature=youtu.be

The project was funded, in part, by the Boeing Company.

from Artificial Intelligence News — ScienceDaily https://ift.tt/2JR3Awe