Robots may be able to help people with household tasks including folding laundry, according to new research from Carnegie Mellon University's Robotics Institute (RI). Robotic helpers could “feel” layers of cloth rather than relying on computer vision tools to see them, say researchers, presenting a wealth of opportunities for robot help in the home.
Humans use their senses of sight and touch together to pick items up with ease: simple, routine actions that are extremely difficult for robots to replicate, because they rely on sensory feedback that machines struggle to quantify.
"Humans look at something, we reach for it, then we use touch to make sure that we're in the right position to grab it," says David Held, Assistant Professor in the School of Computer Science and head of the Robots Perceiving and Doing (R-Pad) Lab. "A lot of the tactile sensing humans do is natural to us. We don't think that much about it, so we don't realise how valuable it is."
To fold laundry, robots need a sensor that mimics how a human's fingers can feel the top layer of a towel or shirt while sensing the layers beneath it, say researchers. A robot could be taught to feel and grasp the top layer of cloth, but without sensing the layers underneath, it would only ever grab that top layer and never successfully fold the cloth.
Carnegie Mellon and Meta AI have skin in the game
ReSkin, developed by researchers at Carnegie Mellon and Meta AI, was designed to solve this problem. This open-source touch-sensing "skin" is made of a thin, elastic polymer embedded with magnetic particles to measure three-axis tactile signals.
"By reading the changes in the magnetic fields from depressions or movement of the skin, we can achieve tactile sensing," says Thomas Weng, a PhD student in the R-Pad Lab, who worked on the project with RI postdoc Daniel Seita and grad student Sashank Tirumala. "We can use this tactile sensing to determine how many layers of cloth we've picked up by pinching with the sensor."
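The pinch-and-count idea Weng describes can be sketched in a few lines. This is a minimal illustration, not the team's actual pipeline: the threshold values and the `classify_layers` function are assumptions, standing in for whatever learned model maps ReSkin's three-axis magnetic readings to a layer count.

```python
import numpy as np

def classify_layers(readings, thresholds=(0.15, 0.45)):
    """Estimate how many cloth layers are pinched (0, 1, or 2+)
    from three-axis magnetometer readings.

    `readings` is an (N, 3) array of field changes (Bx, By, Bz)
    relative to a no-contact baseline; `thresholds` are illustrative
    deformation magnitudes separating the 0/1 and 1/2 layer cases.
    """
    readings = np.asarray(readings, dtype=float)
    # Mean deformation magnitude over the sensor's sample window:
    # more cloth between the fingers deforms the skin more.
    magnitude = np.linalg.norm(readings, axis=1).mean()
    if magnitude < thresholds[0]:
        return 0  # no cloth between the fingers
    if magnitude < thresholds[1]:
        return 1  # pinching a single layer
    return 2      # pinching two or more layers
```

In practice the mapping from magnetic signal to layer count is learned from data rather than hand-thresholded, but the input and output are the same: a window of three-axis field changes in, a discrete layer estimate out.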
Other research has used tactile sensing to grab rigid objects, but cloth is "deformable," meaning it changes when you touch it — making the task even more difficult. Adjusting the robot's grasp on the cloth changes both its pose and the sensor readings.
Researchers didn't teach the robot how or where to grasp the fabric. Instead, the robot learned to determine how many layers of fabric it was holding: it first estimated the layer count using the sensors in ReSkin, then adjusted its grip and tried again. The team evaluated the robot picking up both one and two layers of cloth, and used different textures and colours of cloth to demonstrate generalisation beyond the training data.
"The profile of this sensor is so small, we were able to do this very fine task, inserting it between cloth layers, which we can't do with other sensors, particularly optical-based sensors," says Weng.
"It really is an exploration of what we can do with this new sensor. We're exploring how to get robots to feel with this magnetic skin for things that are soft, and exploring simple strategies to manipulate cloth that we'll need for robots to eventually be able to do our laundry."