Robots Now Learn What Objects Feel Like Just By Picking Them Up

Researchers at MIT, Amazon Robotics, and the University of British Columbia have developed a new way for robots to figure out what an object is like just by picking it up and moving it around. Using only their own built-in sensors, these robots can tell how heavy or soft an object is, or what’s inside it, without cameras or special measuring tools.

Taking Cues From How Humans Sense Things

Think about when you pick up a box and shake it to guess what’s inside. Robots can now do something similar. They use what scientists call “proprioception” – the ability to sense their own movements and position.

“A human doesn’t have super-accurate measurements of the joint angles in our fingers or the precise amount of torque we are applying to an object, but a robot does. We take advantage of these abilities,” says Chao Liu, an MIT postdoc who helped create this system.

Unlike other methods that need outside cameras or tools, this approach uses sensors already built into the robot’s joints. These sensors record how far each joint rotates and how much torque it applies while the robot handles an object.
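To make that concrete, here is a small, purely illustrative Python sketch of the kind of trace those joint sensors produce while a robot lifts and shakes an object. The robot interface shown here (FakeArm and its methods) is invented for the example, not part of the researchers’ system; a real robot would expose equivalent readings through its own control API.

```python
import time

class FakeArm:
    """Stand-in for a real robot arm interface, so the sketch runs on its own."""
    def get_joint_positions(self):   # joint angles in radians
        return [0.00, 0.12, -0.35]
    def get_joint_torques(self):     # applied torques in newton-metres
        return [0.5, 1.8, 0.3]

def record_proprioception(arm, duration_s=2.0, rate_hz=100):
    """Log joint angles and torques during a short lift-and-shake motion."""
    samples, period, start = [], 1.0 / rate_hz, time.time()
    while time.time() - start < duration_s:
        samples.append({
            "t": time.time() - start,
            "joint_angles": arm.get_joint_positions(),
            "joint_torques": arm.get_joint_torques(),
        })
        time.sleep(period)
    return samples   # this trace is the only input the method needs – no cameras

log = record_proprioception(FakeArm())
print(f"recorded {len(log)} proprioceptive samples")
```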

How It Works: Digital Twins Make It Possible

The key to making this work is a computer simulation that models both the robot and the object it’s handling. The system watches what happens during a real interaction, then tweaks the object’s properties in the simulation until it matches what really happened.

The researchers call this “differentiable simulation” – a fancy way of saying the computer can calculate exactly how a small change in an object’s properties, such as its mass or softness, would change the way the robot moves, and then use that to home in on the values that best explain what the sensors recorded.
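As a rough illustration of that loop, the sketch below fits the mass of a held object by simulating a single, gravity-free robot joint and letting automatic differentiation (here via the JAX library) adjust the mass estimate until the simulated joint angles line up with the “measured” ones. This is not the authors’ code: the one-joint model, the torque profile, and every number in it are assumptions made up for the example.

```python
import jax
import jax.numpy as jnp

DT, STEPS = 0.01, 100        # one second of joint data at 100 Hz (assumed)
LINK_INERTIA = 0.05          # known inertia of the arm link itself (kg·m²)
ARM_LEN = 0.5                # distance from the joint to the held object (m)

def simulate_joint(mass, torques):
    """Forward-simulate joint angles for a candidate object mass."""
    inertia = LINK_INERTIA + mass * ARM_LEN ** 2   # treat the object as a point mass
    theta, omega, thetas = 0.0, 0.0, []
    for tau in torques:                            # simple explicit integration
        omega = omega + (tau / inertia) * DT
        theta = theta + omega * DT
        thetas.append(theta)
    return jnp.stack(thetas)

def loss(mass, torques, measured_thetas):
    """How badly the simulation disagrees with the joint encoders."""
    return jnp.mean((simulate_joint(mass, torques) - measured_thetas) ** 2)

# Stand-in “sensor log”: a shaking torque profile and the angles it would
# produce if the held object truly weighed 1.2 kg.
t = jnp.arange(STEPS) * DT
torques = 2.0 * jnp.sin(2.0 * jnp.pi * t)
measured = simulate_joint(1.2, torques)

# Because the simulator is differentiable, jax.grad tells us which way to
# nudge the mass estimate – this is the “tweak until it matches” loop.
grad_fn = jax.grad(loss)
mass_est = 0.5
for _ in range(400):
    mass_est = mass_est - 0.1 * grad_fn(mass_est, torques, measured)
print(f"estimated mass: {float(mass_est):.3f} kg")   # lands near 1.2
```

The real system plays out this same match-the-simulation loop with a detailed digital twin of the whole robot and the object it is holding, rather than a toy one-joint model.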

“Having an accurate digital twin of the real world is really important for the success of our method,” says Peter Yichen Chen, lead author of the research paper.

Real-World Uses Beyond Labs

This technology works well in places where cameras don’t – like dark rooms or disaster areas with dust and smoke. It’s also cheaper because it uses sensors already built into most robots.

During testing, the researchers taught robots to figure out:

  • How much different objects weigh (with tiny errors of just 0.002 kg)
  • How soft various materials are
  • What’s inside closed containers

Being able to “feel” objects without seeing them helps robots work better in messy, real-world settings. “This idea is general, and I believe we are just scratching the surface of what a robot can learn this way,” says Chen. “My dream would be to have robots go out into the world, touch things and move things in their environments, and figure out the properties of everything they interact with on their own.”

What’s Next?

The research team now wants to combine this method with camera vision for even better results. They also plan to try it with more complex robots and challenging materials like sloshing liquids or sand.

“This work is significant because it shows that robots can accurately infer properties like mass and softness using only their internal joint sensors, without relying on external cameras or specialized measurement tools,” says Miles Macklin from NVIDIA, who wasn’t part of the research team.

As robots become more common in our homes, workplaces, and challenging environments, this ability to “feel” what they’re handling marks an important step toward making machines that can interact with the world more like we do.
