Making Machines Make Sense

By Samuel Greengard

May 23, 2017


A Google algorithm recognizes fruit in a bowl.

Credit: Google

As drones, robots, autonomous vehicles, and other computerized systems take shape, the need for sophisticated sensing systems grows. So far, the primary focus has been on visual systems and image processing. The ability to detect an object is critical, of course, yet a system also needs to know what an object is and what to do with it. As Aaron Quigley, professor and chair of Human-Computer Interaction at the University of St. Andrews, puts it: “Computer vision can only go so far.”


As a result, researchers are now exploring ways to broaden sensing capabilities. They are developing systems that detect, classify, and handle objects using radar, sensors, and other technologies. In the coming years, these tools will likely help robots, drones, and other devices know how to react to different objects and determine the right tactile pressure for a specific task. These advances, along with specialized software and algorithms, will also shape the next generation of prosthetics and medical devices by simulating human feel.


To be sure, advanced machine sensing could radically change the way humans interact with the world. “Object recognition and tactile capabilities are critical for building smarter machines,” explains Henny Admoni, a postdoctoral fellow in The Robotics Institute at Carnegie Mellon University (CMU). “The technology has value across a wide array of industries and within many situations.”


Touchy Subjects


Human sensing is a basic function of the brain. As Admoni puts it, “It happens naturally and seamlessly. We classify and handle objects by seeing, touching, listening, smelling, and perhaps tasting them.” Of course, machines—no matter how sophisticated—cannot necessarily draw accurate conclusions using only one or two sensing methods. So far, “We have only begun to scratch the surface for how to fuse all these different modalities,” she explains.


For example, machines equipped with image recognition can detect the basic shape of an object, but not its internal structure or rear surface. Adding more robust sensing to autonomous vehicles, drones, and robots would not only make them smarter, it would broaden the potential applications. For example, a hazmat robot could identify a substance before initiating a cleanup. A medical robot could perform surgery far more precisely by knowing the density of the tissue or the composition of a mass.


For the disabled and the vision-impaired, more advanced sensing and haptics would aid in object identification, as well as help a robotic or prosthetic arm handle objects, from a book to a blueberry, with the right pressure. These machine sensing and computing systems could also integrate with the human brain and nervous system through sensory gloves or other smart wearable devices equipped with cameras, sensors, and other technology.


“The goal is to tie together enough data points to match or perhaps even exceed human capabilities,” says Yantao Shen, associate professor and director of the Lab for Bioinstrumentation and Automation in the Department of Electrical and Biomedical Engineering at the University of Nevada, Reno.


Feeling the Way


Sensing technology is advancing rapidly. At the University of St. Andrews, Quigley and a team of researchers have developed a device called RadarCat (Radar Categorization for Input and Interaction), which fires electromagnetic waves in the millimeter band (mmWave) at objects and captures a unique fingerprint from the signal that bounces back to the receiver. The system then uses machine learning to classify objects and materials, with an accuracy rate now in the high 90% range. RadarCat is built on Google’s Soli, a small radar chip that can be embedded in many devices.
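To make the idea concrete: RadarCat matches a reflected-signal fingerprint against labeled training examples. The sketch below is a toy nearest-neighbor version of that matching step, with entirely invented fingerprint vectors and labels; the actual RadarCat pipeline uses the Soli chip and a trained machine-learning model, not this code.

```python
import math

# Hypothetical radar "fingerprints": each material's reflected signal
# reduced to a short feature vector. All values here are made up.
TRAINING_DATA = {
    "glass": [0.90, 0.10, 0.40, 0.70],
    "wood":  [0.20, 0.80, 0.50, 0.10],
    "metal": [0.95, 0.90, 0.80, 0.60],
}

def classify(fingerprint):
    """Return the label whose stored fingerprint is nearest (Euclidean)."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TRAINING_DATA,
               key=lambda label: distance(TRAINING_DATA[label], fingerprint))

# A new reading closest to the stored "glass" signature
print(classify([0.92, 0.15, 0.45, 0.65]))  # prints "glass"
```

New readings can simply be added to TRAINING_DATA, which is the spirit of the “dictionary of things” the team describes.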


In the future, the object recognition system could be built into robots, drones, or even smartphones. Hui-Shyong Yeo, a Ph.D. student on the team, says these items can be automatically added to a database, essentially forming a “dictionary of things.” These tools could also be combined with other types of sensors to create more robust sensing capabilities. For instance, researchers at CMU and elsewhere are studying haptic methods that outfit the surface of a robot with sensors that detect either pressure or capacitance, which will allow them to respond to physical touch with just the right amount of pressure. Combined with other types of sensors that measure force, torque, temperature, and other factors, it’s possible to push the boundaries on machine sensing further.
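A touch-sensitive skin enables closed-loop gripping: the robot tightens until its pressure sensors report the target force, rather than squeezing blindly. The following is a minimal sketch of that feedback loop; the sensor model, grip levels, and thresholds are invented for illustration and are not drawn from the CMU work.

```python
def grip_until(target_pressure, read_sensor, max_level=100):
    """Tighten the gripper one level at a time until the sensed
    pressure meets the target, then stop and hold that level."""
    for level in range(max_level + 1):
        if read_sensor(level) >= target_pressure:
            return level
    return max_level  # never reached target; hold at maximum

# Fake sensor: no contact below grip level 30, then pressure
# rises one unit per additional level of tightening.
fake_sensor = lambda level: max(0, level - 30)

print(grip_until(5, fake_sensor))  # prints 35: first level sensing pressure 5
```

The same loop could weigh in force, torque, or temperature readings, which is what combining sensor modalities buys: a delicate object triggers the stop condition sooner than a rigid one.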


Already at CMU, Admoni and a team of researchers led by associate professor and Personal Robotics Lab director Siddhartha Srinivasa have built software algorithms for a robot that helps the physically impaired eat and handle other tasks independently. “The robotic arm becomes an extension of the person’s own body,” Admoni says.


Meanwhile, Shen is experimenting with imaging systems and an array of touch sensors that detect force, movement, pressure, and temperature to create a more complete machine sensory system. His sensory “glove” uses cameras and sensors to detect objects, helping blind or sight-impaired people better navigate the world.


Says Shen: “Better sensing systems will make the next generation of machines much more like humans.”


Samuel Greengard is an author and journalist based in West Linn, OR.

