Want to design a better robot? Let its body change as it learns to move. Or at least that’s not too far off from what University of Vermont roboticist Josh Bongard has discovered, as he reports in the January 10 online edition of the Proceedings of the National Academy of Sciences.
In a first-of-its-kind experiment, Bongard created both simulated and actual robots that, like tadpoles becoming frogs, change their body forms while learning how to walk. And, over generations, his simulated robots also evolved, spending less time in “infant” tadpole-like forms and more time in “adult” four-legged forms.
These evolving populations of robots were able to learn to walk more rapidly than ones with fixed body forms. And, in their final form, the changing robots had developed a more robust gait — better able to deal with, say, being knocked with a stick — than the ones that had learned to walk using upright legs from the beginning.
“This paper shows that body change, morphological change, actually helps us design better robots,” Bongard says. “That’s never been attempted before.”
Robots are complex
Bongard’s research, supported by the National Science Foundation, is part of a wider venture called evolutionary robotics. “We have an engineering goal,” he says, “to produce robots as quickly and consistently as possible.” In this experimental case: upright four-legged robots that can move themselves to a light source without falling over.
“But we don’t know how to program robots very well,” Bongard says, because robots are complex systems. In some ways, they are too much like people for people to easily understand them.
“They have lots of moving parts. And their brains, like our brains, have lots of distributed materials: there’s neurons and there’s sensors and motors and they’re all turning on and off in parallel,” Bongard says, “and the emergent behavior from the complex system which is a robot is some useful task like clearing up a construction site or laying pavement for a new road.” Or at least that’s the goal.
But, so far, engineers have been largely unsuccessful at creating robots that can continually perform simple, yet adaptable, behaviors in unstructured or outdoor environments.
Which is why Bongard, an assistant professor in UVM’s College of Engineering and Mathematical Sciences, and other robotics experts have turned to computer programs to design robots and develop their behaviors — rather than trying to program the robots’ behavior directly.
His new work may help.
To the light
Using a sophisticated computer simulation, Bongard unleashed a series of synthetic beasts that move about in a three-dimensional space. “It looks like a modern video game,” he says. Each creature — or, rather, each generation of the creatures — then runs a software routine, called a genetic algorithm, that experiments with various motions until it develops a slither, shuffle, or walking gait — based on its body plan — that can get it to the light source without tipping over.
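For readers who want a concrete picture, here is a minimal sketch of the kind of genetic algorithm loop described above, written in Python. The gait encoding, the fitness definition, and the simulate_gait() stub are illustrative assumptions, not Bongard’s actual simulation code.

```python
import random

GENES = 24          # assumed encoding: amplitude/phase pairs for 12 moving parts
POP_SIZE = 50
GENERATIONS = 200

def simulate_gait(genome):
    """Stand-in for the 3-D physics simulation: returns the distance
    traveled toward the light source and whether the robot tipped over."""
    distance = sum(genome) / GENES + random.gauss(0, 0.1)   # toy placeholder
    fell_over = random.random() < 0.05
    return distance, fell_over

def fitness(genome):
    distance, fell_over = simulate_gait(genome)
    return -1.0 if fell_over else distance    # tipping over is heavily penalized

def mutate(genome, rate=0.1, scale=0.2):
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

population = [[random.uniform(-1, 1) for _ in range(GENES)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 5]                        # keep the best 20%
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP_SIZE - len(parents))]
```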
“The robots have 12 moving parts,” Bongard says. “They look like the simplified skeleton of a mammal: it’s got a jointed spine and then you have four sticks — the legs — sticking out.”
Some of the creatures begin flat to the ground, like tadpoles or, perhaps, snakes with legs; others have splayed legs, a bit like a lizard; and others run the full set of simulations with upright legs, like mammals.
And why do the generations of robots that progress from slithering to wide legs and, finally, to upright legs ultimately perform better, arriving at the desired behavior faster?
“The snake and reptilian robots are, in essence, training wheels,” says Bongard, “they allow evolution to find motion patterns quicker, because those kinds of robots can’t fall over. So evolution only has to solve the movement problem, but not the balance problem, initially. Then gradually over time it’s able to tackle the balance problem after already solving the movement problem.”
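One way to picture those “training wheels” in code: a development schedule that keeps the robot flat during an infant phase, then tilts the legs toward vertical, with the length of the infant phase itself part of the genome so later generations can shorten it. This is a hedged sketch; the angles, the linear ramp, and the evolvable infancy_fraction parameter are assumptions for illustration.

```python
def leg_angle(t, infancy_fraction):
    """Leg orientation over a trial running from t = 0.0 to t = 1.0:
    flat (0 degrees, tadpole-like) during infancy, then ramping
    linearly up to upright (90 degrees, quadruped). Assumes
    infancy_fraction < 1.0; because it sits in the genome, evolution
    can spend less and less time in the infant form over generations."""
    if t < infancy_fraction:
        return 0.0                                # slithering: can't fall over
    progress = (t - infancy_fraction) / (1.0 - infancy_fraction)
    return 90.0 * min(progress, 1.0)              # balance problem arrives gradually
```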
Sound anything like how a human infant first learns to roll, then crawl, then cruise along the coffee table and, finally, walk?
“Yes,” says Bongard, “We’re copying nature, we’re copying evolution, we’re copying neural science when we’re building artificial brains into these robots.” But the key point is that his robots don’t only evolve their artificial brain — the neural network controller — but rather do so in continuous interaction with a changing body plan. A tadpole can’t kick its legs, because it doesn’t have any yet; it’s learning some things legless and others with legs.
And this may help to explain the most surprising — and useful — finding in Bongard’s study: the changing robots were not only faster in getting to the final goal, but afterward were more able to deal with new kinds of challenges that they hadn’t before faced, like efforts to tip them over.
Bongard is not exactly sure why this is, but he thinks it’s because controllers that evolved in the robots whose bodies changed over generations learned to maintain the desired behavior over a wider range of sensor-motor arrangements than controllers evolved in robots with fixed body plans. It seems that learning to walk while flat, then squat, then upright gave the evolving robots resilience to stay upright when faced with new disruptions. Perhaps what a tadpole learns before it has legs makes it better able to use its legs once they grow.
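A rough sketch of how that robustness claim could be quantified: run each evolved controller through many trials with random sideways pushes and count how often the robot finishes upright. The run_trial() stub and the perturbation model below are hypothetical, for illustration only.

```python
import random

def run_trial(controller, lateral_push=0.0):
    """Stand-in for one simulated walking trial; returns True if the
    robot reached the light without tipping over."""
    stability = controller["stability"]           # hypothetical summary statistic
    return random.random() < stability - 0.2 * lateral_push

def robustness(controller, trials=100, push_prob=0.3):
    """Fraction of trials survived despite occasional random pushes."""
    upright = 0
    for _ in range(trials):
        push = random.uniform(0.5, 2.0) if random.random() < push_prob else 0.0
        upright += run_trial(controller, lateral_push=push)
    return upright / trials

# Compare a controller evolved with a changing body against a fixed-body one:
print(robustness({"stability": 0.9}), robustness({"stability": 0.6}))
```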
“Realizing adaptive behavior in machines has to date focused on dynamic controllers, but static morphologies,” Bongard writes in his PNAS paper. “This is an inheritance from traditional artificial intelligence in which computer programs were developed that had no body with which to affect, and be affected by, the world.”
“One thing that has been left out all this time is the obvious fact that in nature it’s not that the animal’s body stays fixed and its brain gets better over time,” he says. “In natural evolution, animals’ bodies and brains are evolving together all the time.” A human infant, even if she knew how, couldn’t walk: her bones and joints aren’t up to the task until she starts to experience stress on the foot and ankle.
That hasn’t been done in robotics for an obvious reason: “it’s very hard to change a robot’s body,” Bongard says, “it’s much easier to change the programming inside its head.”
Lego proof
Still, Bongard gave it a try. After running 5000 simulations, each taking 30 hours on the parallel processors in UVM’s Vermont Advanced Computing Center — “it would have taken 50 or 100 years on a single machine,” Bongard says — he took the task into the real world.
“We built a relatively simple robot, out of a couple of Lego Mindstorms kits, to demonstrate that you actually could do it,” he says. This physical robot is four-legged, like those in the simulation, but the Lego creature wears a brace on its front and back legs. “The brace gradually tilts the robot,” as the controller searches for successful movement patterns, Bongard says, “so that the legs go from horizontal to vertical, from reptile to quadruped.
“While the brace is bending the legs, the controller is causing the robot to move around, so it’s able to move its legs, and bend its spine,” he says, “it’s squirming around like a reptile flat on the ground and then it gradually stands up until, at the end of this movement pattern, it’s walking like a coyote.”
“It’s a very simple prototype,” he says, “but it works; it’s a proof of concept.”
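Read as pseudocode, that procedure amounts to an episode loop in which the brace angle ramps up while a simple search keeps adapting the motor pattern. The sketch below uses hill climbing and a made-up run_episode() stub; both are assumptions, since the hardware details aren’t spelled out here.

```python
import random

def run_episode(motor_pattern, brace_angle):
    """Stand-in for driving the Lego robot once and measuring its
    progress toward the light; purely illustrative."""
    return sum(motor_pattern) - 0.01 * brace_angle + random.gauss(0, 0.1)

motor_pattern = [0.0] * 12                        # one parameter per moving part
best_score = run_episode(motor_pattern, brace_angle=0.0)

for episode in range(1, 101):
    brace_angle = 90.0 * episode / 100            # legs: horizontal -> vertical
    candidate = [m + random.gauss(0, 0.05) for m in motor_pattern]
    score = run_episode(candidate, brace_angle)
    if score >= best_score:                       # keep changes that still work
        motor_pattern, best_score = candidate, score
```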
Courtesy ScienceDaily