A University of California, Berkeley team has engineered a robot that can mimic an activity after viewing it only once on a video screen.
The researchers integrated imitation learning with a model-agnostic meta-learning (MAML) algorithm so the robot acquires knowledge by incorporating prior experience.
When the robot is shown a video of a human performing a specific task, the MAML algorithm gives it a "feel" for the objective: because it has already been trained to emulate many similar behaviors, it can infer which actions to take from a single demonstration.
For example, when the robot sees a video of a person picking up an object and placing it in a bowl, it will recognize the behavior and can translate it into a similar behavior of its own.
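The core idea behind MAML is to meta-train a single parameter initialization so that one gradient step on a new task's demonstration already performs well. The sketch below is a minimal, hypothetical illustration of that inner/outer-loop structure on a toy linear-regression family of tasks; it is not the Berkeley team's system, and all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

# Toy MAML sketch (illustrative only): each "task" is a line y = slope * x,
# and a one-shot "demonstration" is a small support set from that line.
rng = np.random.default_rng(0)

def loss(w, x, y):
    # Mean-squared error of the linear model y_hat = w * x
    return np.mean((w * x - y) ** 2)

def loss_grad(w, x, y):
    # Analytic gradient of the MSE above with respect to w
    return 2.0 * np.mean((w * x - y) * x)

alpha, beta = 0.1, 0.01  # inner (adaptation) and outer (meta) learning rates
w_meta = 0.0             # meta-initialization being learned

for step in range(2000):
    slope = rng.uniform(1.0, 3.0)            # sample a task
    x_s, x_q = rng.normal(size=5), rng.normal(size=5)
    y_s, y_q = slope * x_s, slope * x_q
    # Inner loop: one gradient step on the "demonstration" (support set)
    w_adapted = w_meta - alpha * loss_grad(w_meta, x_s, y_s)
    # Outer loop: update the meta-init to reduce the post-adaptation
    # (query) loss. For this linear model the meta-gradient chain rule
    # term d(w_adapted)/d(w_meta) is available in closed form:
    d_inner = 1.0 - alpha * 2.0 * np.mean(x_s ** 2)
    w_meta -= beta * loss_grad(w_adapted, x_q, y_q) * d_inner

# After meta-training, a single inner step on one new demonstration
# should already move the model close to the new task's behavior.
```

With task slopes drawn from [1, 3], the meta-initialization settles near the region where one adaptation step lands close to any individual task, which is the property a one-shot imitator needs: a single observed demonstration is enough to specialize.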
From Tech Xplore
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA