Touching the future: Mastering physical contact with new algorithm for robots


Penn Engineers have developed a new algorithm that allows robots to react to complex physical contact in real time, making it possible for autonomous robots to succeed at previously impossible tasks, like controlling the motion of a sliding object.

The algorithm, known as consensus complementarity control (C3), may prove to be an essential building block of future robots, translating directions from the output of tools like large language models, or LLMs, into appropriate action.

“Your large language model might say, ‘Go chop an onion,'” says Michael Posa, Assistant Professor in Mechanical Engineering and Applied Mechanics (MEAM) and a core faculty member of the General Robotics, Automation, Sensing and Perception (GRASP) Lab. “How do you move your arm to hold the onion in place, to hold the knife, to slice through it in the right way, to reorient it when necessary?”

One of the greatest challenges in robotics is control, a catch-all term for the intelligent use of the robot’s actuators, the parts of a robot that move or control its limbs, like motors or hydraulic systems. Control of the physical contact that a robot makes with its surroundings is both difficult and essential.

“That kind of lower- and mid-level reasoning is really fundamental in getting anything to work in the physical world,” says Posa.







The new algorithm allows the robotic arm to balance and move a waiter’s plastic tray, mastering control of a sliding object—a previously impossible task for robots. Credit: DAIR Lab

Since the 1980s, experts in artificial intelligence have recognized that, paradoxically, the first skills humans learn—how to manipulate objects and move from one place to another, even in the face of obstacles—are the hardest to teach robots, and vice versa.

“Robots work really well until they have to start touching things,” says Posa. “Artificial intelligence machines right now can solve International Mathematical Olympiad-level math problems and beat experts at chess. But they have the physical capabilities of a 2- or 3-year-old at best.”

In essence, this means that every interaction robots have that involves touching something—picking up an object, moving it somewhere else—must be carefully choreographed. “The key challenge is the contact sequence,” says William Yang, a recent doctoral graduate of Posa’s Dynamic Autonomy and Intelligent Robotics (DAIR) Lab. “Where do you put your hand in the environment? Where do you put your foot in the environment?”

Humans, of course, rarely have to think twice about how they interact with objects. In part, the challenge for robots is that something as simple as picking up a cup actually involves many different choices—from the correct angle of approach to the appropriate amount of force.
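To get a rough sense of why those choices overwhelm a planner, consider a back-of-the-envelope count. The numbers and the three contact modes below are our own illustrative assumptions, not figures from the researchers:

```python
# Back-of-the-envelope sketch (illustrative only, not from the paper):
# if each time step of a plan can end in one of a few contact modes
# (e.g. no contact, sticking, or sliding), the number of possible mode
# sequences a planner would have to compare grows exponentially.
modes_per_step = 3          # assumed: separate / stick / slide for a single contact
horizons = [5, 10, 20]      # planning horizons, in time steps

for n in horizons:
    sequences = modes_per_step ** n
    print(f"{n:2d} steps -> {sequences:>13,} possible contact sequences")
# Enumerating every sequence hundreds of times per second is hopeless,
# which is why an algorithm that can weigh these choices implicitly,
# in real time, matters.
```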







A glimpse into the algorithm’s process, which allows robots to “hallucinate” the future state of objects so they can react to physical contact in real time. Credit: DAIR Lab

“Not every one of these choices is so terribly different from the ones around it,” Posa points out. But, until now, no algorithm has allowed robots to assess all those choices and make an appropriate decision in real time.

To solve the problem, the researchers essentially devised a way to help robots “hallucinate” the different possibilities that might arise when making contact with an object. “By imagining the benefits of touching things, you get gradients in your algorithm that correspond to that interaction,” says Posa.

“And then you can apply some style of gradient-based algorithm and in the process of solving that problem, the physics gradually becomes more and more accurate over time to where you’re not just imagining, ‘What if I touch it?’ but you’re actually planning to go out and touch it.”
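A minimal sketch of that idea in code: the snippet below is our own toy illustration, not the C3 implementation. It relaxes the hard rule that a contact force can act only when the gap to an object is closed, optimizes with ordinary gradient descent, and then tightens the relaxation so the “imagined” contact becomes physically consistent:

```python
import numpy as np

# Toy illustration only (not the C3 implementation): a 1-D "pusher" must move
# a box toward a target by applying a contact force. The hard physics says
# force can act only when the gap between pusher and box is zero
# (complementarity: force * gap == 0). We relax that rule with a penalty
# weight rho, optimize by plain gradient descent, then raise rho so the
# "hallucinated" contact gradually becomes physically consistent.

def objective(x, rho, target=1.0):
    gap, force = x                            # both kept non-negative by clipping
    task_cost = (force - target) ** 2         # toy task: impart a desired push
    contact_violation = (force * gap) ** 2    # zero only if force or gap is zero
    return task_cost + rho * contact_violation

def grad(x, rho, eps=1e-6):
    g = np.zeros_like(x)                      # simple central finite differences
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (objective(x + d, rho) - objective(x - d, rho)) / (2 * eps)
    return g

x = np.array([0.5, 0.0])                      # start with an open gap, no force
for rho in (0.1, 1.0, 10.0, 100.0):           # tighten the relaxation each round
    for _ in range(500):
        x = np.clip(x - 0.01 * grad(x, rho), 0.0, None)
    print(f"rho={rho:6.1f}  gap={x[0]:.3f}  force={x[1]:.3f}")
# As rho grows, the solution closes the gap before relying on the contact
# force: the plan shifts from imagining a touch to actually making one.
```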

In the past year, Posa and the DAIR Lab have written a suite of award-winning papers on the topic. The most recent, posted to the arXiv preprint server with Yang as lead author, won the Outstanding Student Paper Award at the 2024 Robotics: Science and Systems conference in the Netherlands.


That paper demonstrates how C3 can empower robots to control sliding objects in real time. “Sliding is notoriously hard to control in robotics,” says Yang. “Mathematically, it’s hard, but you also have to rely on object feedback.”

But, using C3, Yang demonstrated how a robotic arm can safely manipulate a tray, similar to one waiters might use at a restaurant. In videotaped experiments, Yang had the robotic arm pick the tray up and put it down, with and without a coffee cup, and rotate the tray against a wall. “Previous work thought, ‘We just want to avoid sliding,’” Yang says, “but the algorithm includes sliding as a possibility for the robots to consider.”
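To see why sliding has to be modeled rather than simply avoided, here is a short toy calculation of our own, with assumed numbers (it is not an experiment from the paper): does a cup on an accelerating tray stick or slip under Coulomb friction?

```python
# Toy stick/slip check (illustrative only; assumed numbers, not from the paper):
# Coulomb friction caps the force a tray can transmit to a cup at mu * m * g,
# so the tray can accelerate the cup at no more than mu * g before it slides.
MU, G = 0.3, 9.81                        # assumed friction coefficient, gravity

def cup_response(tray_accel, mu=MU):
    max_friction_accel = mu * G          # largest acceleration friction can impart
    if abs(tray_accel) <= max_friction_accel:
        return "sticks", tray_accel      # cup rides along with the tray
    sign = 1.0 if tray_accel > 0 else -1.0
    return "slides", sign * max_friction_accel   # friction saturates; cup lags

for a in (1.0, 2.5, 4.0):                # tray accelerations in m/s^2
    mode, a_cup = cup_response(a)
    print(f"tray accel {a:.1f} m/s^2 -> cup {mode}, cup accel {a_cup:.2f} m/s^2")
# A controller that forbids sliding can never use the third case; one that
# models sliding, as C3 does, can exploit controlled sliding on purpose.
```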

In the future, Posa and his group hope to make the algorithm even more robust to different situations, such as when the objects a robot handles weigh slightly more or less than anticipated, and to extend the project to more open-ended scenarios that C3 currently cannot handle.

“This is a building block that can go from a pretty simple specification—make this part go over there—and distill that down to the motor torque that the robot is going to need to achieve that,” says Posa. “Going from a very, very complicated, messy world down to the key sets of objects or features or dynamical properties that matter for any given task, that’s the open question we’re interested in.”

More information:
William Yang et al, Dynamic On-Palm Manipulation via Controlled Sliding, arXiv (2024). DOI: 10.48550/arXiv.2405.08731

Journal information:
arXiv


Citation:
Touching the future: Mastering physical contact with new algorithm for robots (2024, October 15)
retrieved 15 October 2024
from https://techxplore.com/news/2024-10-future-mastering-physical-contact-algorithm.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.




