Can a machine have a tender touch? It would seem so, judging by the emerging tech from a team at the Max Planck Institute for Intelligent Systems (MPI-IS). They've developed a sensor for a robotic thumb that closely approximates human touch, using an approach to robot sensing that may have far-reaching applications: instead of pressure sensors, it relies on a digital camera.
Dubbed “Insight,” the thumb-shaped sensor has a camera embedded inside the fingertip that watches how the soft outer shell deforms and builds a 3D force map of whatever the finger comes into contact with. A deep neural network (DNN) interprets the camera data and works out where the finger is being touched, how hard, and in what direction, so the robot can adjust its grip accordingly. A rock, for example, can be squeezed harder than an egg. In this way, robots can feel the objects around them in much the same way animals and humans do.
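As a loose illustration of how that sensed force could feed back into a grip, here's a minimal sketch in Python. The per-object force targets, the gripper interface, and the gain are hypothetical stand-ins, not part of the MPI-IS system.

```python
# Hypothetical sketch: press until a per-object target force is reached,
# easing off when the fingertip sensor reports too much force.
# Names, units, and the simple proportional rule are illustrative assumptions.
TARGET_FORCE_N = {"egg": 0.5, "rock": 5.0}   # illustrative targets in newtons

def adjust_grip(object_name: str, sensed_force_n: float, grip_mm: float) -> float:
    """Return an updated grip aperture based on the force the fingertip feels."""
    error = TARGET_FORCE_N[object_name] - sensed_force_n
    # Close the gripper a little while below target, open it if we overshoot.
    return grip_mm - 0.1 * error

grip = 30.0                                   # starting aperture in millimetres
for sensed in (0.0, 0.2, 0.45, 0.55):         # forces reported by the sensor
    grip = adjust_grip("egg", sensed, grip)
```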
How does the DNN know what it’s looking at? The network takes in a stream of high-speed images and detects minute changes in brightness across the pixels, turning them into a 3D force map. From that, it produces a separate force vector for each of the many points on the sensor’s surface. You can think of it as each goosebump on human skin reporting its own reading, with the whole field of goosebumps acting as one. All of this comes from the visual information of a single camera, rather than the matrix of purely tactile pressure sensors often used in robotics.
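To make the image-to-force-map idea concrete, here's a minimal sketch in Python (PyTorch) of a network that maps one camera frame to a per-point force map. The architecture, layer sizes, and the 1,024 surface points are illustrative assumptions, not the actual Insight model.

```python
# Hypothetical sketch: a small network mapping one camera frame from inside
# the fingertip to a force map -- one 3D force vector (x, y, z) for each of
# N sample points on the sensor's surface. Sizes are illustrative.
import torch
import torch.nn as nn

N_POINTS = 1024          # surface points that each get a force vector

class ForceMapNet(nn.Module):
    def __init__(self, n_points=N_POINTS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        # Regress a 3D force vector per surface point from the pooled features.
        self.head = nn.Linear(32 * 8 * 8, n_points * 3)
        self.n_points = n_points

    def forward(self, frame):                          # frame: (B, 3, H, W)
        z = self.features(frame).flatten(1)
        return self.head(z).view(-1, self.n_points, 3)  # (B, N, 3) force map

model = ForceMapNet()
frame = torch.rand(1, 3, 240, 320)   # stand-in for one frame from inside the shell
force_map = model(frame)             # one force vector per surface point
```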
It’s even got a fingernail. Sorta. What looks like a nail on top of the robotic thumb is actually a zone that’s even more sensitive than the surrounding surface. While the rest of the thumb is covered with a 4 mm-thick layer of elastomer, the nail area’s elastomer is only 1.2 mm thick. That thinner skin creates a highly sensitive zone that can detect much smaller force differences and the fine details of an object.
And Insight learns. By applying both normal and shear forces to the sensor, MPI-IS’s scientists generated around 200,000 measurements to teach a machine-learning model the relationship between changes in the pixels of the raw camera images and the corresponding forces applied to the fingertip. The results? After three weeks of probing, the system could localize contacts and gauge their forces more accurately, bringing it a step closer to human touch.
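As a rough illustration of that training step, here's a hedged sketch of supervised regression from camera frames to measured force vectors. The dataset size, model, and hyperparameters below are stand-ins, not the team's actual setup.

```python
# Hypothetical training sketch: learn to predict the measured contact force
# from a camera frame, the way the ~200,000 (image, force) pairs from the
# probing experiments could be used. Everything here is illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: each sample pairs a camera frame with the ground-truth
# force vector (Fx, Fy, Fz) applied by the test probe.
frames = torch.rand(2000, 3, 60, 80)      # small stand-in for ~200,000 frames
forces = torch.rand(2000, 3)              # measured normal + shear components
loader = DataLoader(TensorDataset(frames, forces), batch_size=64, shuffle=True)

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 60 * 80, 256), nn.ReLU(),
    nn.Linear(256, 3),                    # predicted force vector
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                    # penalize predicted-vs-measured error

for epoch in range(5):                    # the real training ran far longer
    for frame_batch, force_batch in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(frame_batch), force_batch)
        loss.backward()
        optimizer.step()
```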
The future of this camera-driven sensor tech doesn’t look limited to robotic fingers; a wide array of robot parts can benefit from the ability to touch with such a high degree of precision.