
The ability to evaluate and estimate the physical properties of objects is vital for robots, as it allows them to interact more effectively with their surroundings.
In recent years, researchers have been developing methods and systems that let robots estimate the tactile properties of objects and surfaces, giving machines something resembling our sense of touch. Here are some of the latest results.
How to Estimate Friction and Compliance From Visual Information
The goal of new research, which builds on previous studies, was to estimate several physical properties of a surface, such as friction and compliance, from visual information alone. Matthew Purri, a Ph.D. student specializing in computer vision and AI at Rutgers University, developed a convolutional neural network (CNN)-based prototype that can estimate the tactile properties of a surface by examining images of it. Purri was supervised by Kristin Dana, a professor of Electrical Engineering at Rutgers.
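The study itself does not ship code, but the general idea of regressing tactile properties from pixels can be sketched in a few lines. The toy PyTorch model below, with illustrative layer sizes and a hypothetical two-value output for friction and compliance, shows one way a CNN could map a surface image to scalar property estimates; it is not the architecture Purri and Dana used.

```python
import torch
import torch.nn as nn

class TactileFromImage(nn.Module):
    """Toy CNN that maps a surface image to a vector of tactile properties
    (e.g. friction, compliance). Layer sizes are illustrative only."""
    def __init__(self, num_properties: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, num_properties)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        features = self.backbone(images).flatten(1)   # (batch, 128)
        return self.head(features)                    # (batch, num_properties)

model = TactileFromImage(num_properties=2)   # hypothetical: friction, compliance
dummy_batch = torch.randn(4, 3, 224, 224)    # four RGB surface images
predictions = model(dummy_batch)             # (4, 2) property estimates
```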
Purri and Dana investigated whether the angle from which the input pictures were taken influenced how well their neural network estimated a surface's physical properties. They also devised a model that can automatically learn optimal combinations of viewing angles along with the neural network's parameters.
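The paper's actual view-selection mechanism is not reproduced here; as a rough illustration of learning which viewing angles matter, the sketch below encodes each view separately and lets learned attention weights decide how the views are combined. The `ViewSelector` class, the small encoder, and the feature dimension are all assumptions made for this example.

```python
import torch
import torch.nn as nn

class ViewSelector(nn.Module):
    """Illustrative multi-view aggregator: each viewing angle's image is encoded
    separately, and learned attention weights decide which views contribute
    most to the fused surface representation."""
    def __init__(self, encoder: nn.Module, feature_dim: int = 128):
        super().__init__()
        self.encoder = encoder
        self.score = nn.Linear(feature_dim, 1)   # one relevance score per view

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, num_views, 3, H, W)
        b, v = views.shape[:2]
        feats = self.encoder(views.flatten(0, 1)).view(b, v, -1)   # (b, v, d)
        weights = torch.softmax(self.score(feats), dim=1)          # (b, v, 1)
        return (weights * feats).sum(dim=1)                        # weighted fusion

# Minimal per-view encoder, again with arbitrary sizes.
encoder = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(1),
    nn.Linear(64, 128),
)
selector = ViewSelector(encoder, feature_dim=128)
multi_view_batch = torch.randn(2, 5, 3, 224, 224)   # two surfaces, five angles each
fused = selector(multi_view_batch)                  # (2, 128) fused descriptors
```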
“One goal of our model is to learn a function that separately projects images of a surface and tactile physical property information into a shared subspace, where pairs of visual-tactile information are close and dissimilar visual-tactile pairs are far apart,” Purri explained.
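That description matches the general pattern of contrastive or metric learning. The sketch below implements a generic margin-based loss of that kind, assuming L2-normalized 128-dimensional embeddings from an image branch and a tactile branch; it illustrates the idea rather than the exact objective used in the study.

```python
import torch
import torch.nn.functional as F

def visual_tactile_contrastive_loss(img_emb, tac_emb, margin: float = 1.0):
    """Toy pairwise objective: matching visual-tactile embeddings (same index)
    are pulled together; mismatched pairs are pushed at least `margin` apart."""
    img_emb = F.normalize(img_emb, dim=1)
    tac_emb = F.normalize(tac_emb, dim=1)
    dists = torch.cdist(img_emb, tac_emb)            # (batch, batch) pairwise distances
    positives = dists.diagonal()                     # distances of matching pairs
    batch = dists.size(0)
    mask = ~torch.eye(batch, dtype=torch.bool, device=dists.device)
    negatives = dists[mask]                          # distances of mismatched pairs
    pull = positives.pow(2).mean()                   # bring matches close together
    push = F.relu(margin - negatives).pow(2).mean()  # keep non-matches apart
    return pull + push

img_emb = torch.randn(8, 128)   # image-branch embeddings for 8 surfaces
tac_emb = torch.randn(8, 128)   # tactile-branch embeddings for the same surfaces
loss = visual_tactile_contrastive_loss(img_emb, tac_emb)
```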
The method devised by Purri and Dana also pursues a second, classification-based objective, matching visual-tactile pairs that share the same tactile and visual properties with other such pairs. The classification labels for this objective are generated by a technique the researchers dubbed visual-tactile feature clustering.
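A minimal sketch of how such pseudo-labels could be produced, assuming concatenated visual and tactile feature vectors and an arbitrary number of clusters, is the k-means example below (scikit-learn); the study's actual clustering procedure may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in visual and tactile embeddings for 200 surface samples
# (feature dimensions and cluster count are arbitrary choices).
visual_feats = np.random.randn(200, 128)
tactile_feats = np.random.randn(200, 32)
joint_feats = np.concatenate([visual_feats, tactile_feats], axis=1)

# Cluster the joint features; the cluster indices become new classification
# labels that a classification head can then be trained to predict.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
pseudo_labels = kmeans.fit_predict(joint_feats)   # shape: (200,)
```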
The model developed by the researchers could have many intriguing applications. It could help robotic systems better understand the key characteristics of surfaces and objects in their surroundings and explore new environments with greater ease.