The Iron Man Challenge

Let me correct an impression you may have: robots are pretty much idiots. They can’t do very much, and they do it with a slowness that would try the patience of a saint who was also an elephant. Samuel Beckett would have made a good roboticist. It is a science of boredom, disappointment and despair.

I am observing all this at close range in a windowless warehouse in Pensacola, Fla., belonging to the Florida Institute for Human and Machine Cognition (IHMC for short), a pleasantly interdisciplinary nonprofit research institution that is affiliated with several universities but beholden to none. Its focus is on developing machines that extend and enhance human physical and mental abilities–exoskeletons for the paralyzed would be one example.

Much of the warehouse is taken up with the kind of crude open-plan office you might expect at a struggling Internet startup: wooden trestle tables crowded with computers and monitors and other techno-detritus. Lots of whiteboards, lots of beards, not a lot of women. But one side of the warehouse has been cordoned off and cleared of furniture. It contains among other things a car, a freestanding plywood wall with a circular valve handle set in it, some simulated rubble and debris, a door that goes nowhere and a robot.

The robot has no name. It has two skeletal arms and two skinny, skeletal legs. It’s roughly the size and shape of a human being, or if you like a defleshed Terminator–but only roughly. It’s top-heavy: it has an absurdly dainty waist, but it’s chesty and wears a massive backpack containing a huge battery. It has long gorilla-like arms that end in three-fingered clamps. Tubes snake in and out and through its body–hydraulics, electric cables, cooling fluid. Blue lights wink in its innards. Its head is the weirdest part: it has no face, just a sensor pod with two lenses for binocular vision and an eternally spinning laser range finder called lidar.

When it’s not powered on, the robot can’t stand up by itself, so it spends most of its time dangling from a harness about a foot off the floor, with the sheepish air of a skydiver whose parachute has gotten caught in a tree.

The robot was designed with a specific purpose in mind. On April 10, 2012, DARPA, the Defense Advanced Research Projects Agency, which runs the Department of Defense’s high-tech moon shots, launched a robot-building competition. The robots were to be humanoid and designed for rescue operations in disaster areas that might be hazardous to humans, like the ruined Fukushima nuclear reactors. They would be optimized for mobility, dexterity, strength, endurance and something called “supervised autonomy.” The prize: millions of dollars of development money.

The DARPA Robotics Challenge held its semifinals a year and a half ago at a NASCAR speedway near Miami. Sixteen teams entered, representing a who’s who of technological powerhouses from the public and private sectors including MIT, Carnegie Mellon, NASA’s Jet Propulsion Laboratory and Lockheed Martin. The robots were judged on their ability to perform eight tasks, including driving a vehicle, climbing a ladder, crossing a debris field and using a drill to make a hole in a wall. For each task, the robots were allotted 30 minutes.

The winner, with a score of 27 out of a possible 32, was a boxy, long-limbed android made by a startup called Schaft, which came out of the robotics lab at the University of Tokyo. It had recently been acquired by Google. Following the semis, Schaft withdrew from the rest of the competition to focus on developing a commercial product; also, presumably, it no longer needed money from DARPA.

The second-place finisher, with 20 points–and special commendations for opening doors and drilling through walls–was the robot from the Florida Institute for Human and Machine Cognition. On June 5 it will compete in the finals at a fairground outside Los Angeles for $3.5 million in prize money.

DARPA has held contests like this before, with excellent and arguably world-changing results. The precursor for the DARPA Robotics Challenge, constantly invoked on all sides, is the DARPA Grand Challenge of 2004, which was intended to encourage innovation in self-driving cars.

The Grand Challenge was a triumph in the long term and a disaster in the short term. Fifteen driverless cars lined up to navigate a 142-mile (229 km) course in the Mojave Desert, with the winner getting $1 million. Two teams quit before the race even started. One car flipped over at the starting line. The most successful vehicle, fielded by Carnegie Mellon, made it 7 miles (11 km) before it got hopelessly stuck on a rock. There were no winners.

On the basis of that it would not have been ludicrous to conclude that self-driving cars were a technological dead end. But DARPA doubled and then tripled down with a challenge in 2005 and another in 2007. Now, a decade after that unpromising start, self-driving cars are considered practically a done deal. Google’s fleet of autonomous Lexus SUVs collectively drives about 10,000 miles (16,100 km) a week. Its own home-brewed prototype self-driving cars will debut on public roads in Mountain View, Calif., this summer. Carmakers like General Motors and Mercedes-Benz as well as startups like Uber are also seriously exploring self-driving vehicles. If humanoid robots follow the timeline of the Grand Challenge, or anything like it, we could be living and working alongside our mechanical doppelgängers inside of 15 years.

Battle Bot

The warehouse that serves as the IHMC’s DARPA Robotics Challenge war room is an unimposing structure on an otherwise undistinguished street corner, next door to a home-health-care service. From the outside it’s not obviously associated in any way with bleeding-edge mad science. Nobody answered the front door when I arrived, but a rear door turned out to be unlocked. Inside, engineers milled around quietly in front of a wall of screens–five TVs and eight monitors–that showed the robot from various points of view, the world from the robot’s point of view, the outputs of various sensors and lines of rapidly scrolling code. A couple of robo-bros orbited the office on RipStiks.

This team, with a few additions and subtractions, has been working together for two years. “The people who kicked off the robotics program here have always had their background in walking and bipedal robots and humanoid robots and stuff like that,” says Doug Stephen, an engineer at IHMC. “The DARPA Robotics Challenge is the biggest thing to happen research-wise in humanoid robotics in a long time. So it was a no-brainer: if we wanted to be relevant in the field and do cutting-edge cool stuff in that area, this was the thing to go out for.”

The IHMC’s robot, its physical frame at least, was not actually made in Florida, or even by the IHMC. Like six other teams coming into the finals, the IHMC elected not to build its own hardware but instead to use an off-the-shelf robotic chassis called Atlas, which is made by a company called Boston Dynamics. (Like Schaft, Boston Dynamics is now owned by Google–the search giant bought it in late 2013.) An Atlas robot is 6 ft. 2 in. (190 cm) tall, weighs 386 lb. (175 kg) and costs over $1 million. In the case of the IHMC’s robot, its weird sensor-studded head was made by yet another company, Pittsburgh-based Carnegie Robotics.

What makes the IHMC’s robot different from the others, and on the evidence better, is the software that tells it how to walk and balance and interpret the world around it. The primary source of this software expertise is Jerry Pratt, a soft-spoken, soft-shouldered engineer who leads the DARPA Robotics Challenge team at the IHMC. As an undergraduate, Pratt double-majored in computer science and mechanical engineering at MIT; robotics seemed like a natural way to combine the two. He’s been making robots that walk for about 20 years, starting when he was a graduate student, and he’s generally acknowledged to be among the world’s experts on the subject.

When I visited the IHMC there were 47 days to go before the finals–a countdown was posted on the wall–and the robot was nowhere near ready. DARPA had upped the ante considerably since the semifinals: whereas in the semis the robots were allowed to be tethered by cables, in the finals they will have to be freestanding. That means no safety lines to catch your robot if it falls over, and a 386-lb. robot falls hard. No power cables: all power will now have to come from on board, hence the massive backpack battery. No data lines: all communications between team and robot will be wireless.

And to simulate disaster-area conditions, those wireless connections will occasionally be disrupted or severed entirely, leaving the robot to fend for itself. “We’re going to assume that the disaster had screwed up the communications,” says Gill Pratt, who runs the Robotics Challenge for DARPA. (Gill is no relation to Jerry, though as it happens he was Jerry’s Ph.D. supervisor at MIT. The world of humanoid robotics is a small one.) “A great example of that is what happened on Sept. 11, where because of the infrastructure of the buildings themselves and because of the tremendous overload of first responders and all of the civilian population trying to talk at the same time, it was very difficult for any kind of message to get through.”

The format will change too. “The big difference is, last time you did all the tasks individually,” says Matt Johnson, who came to the IHMC after an earlier career flying helicopters for the Navy. “This time it’s all tasks sequentially–you have to step from task to task to task. A typical task took up to 30 minutes last time. This time you have one hour to do all the tasks.” Plus things went so well at the semifinals that DARPA decided to let a whole flood of new teams enter, 14 of them, including two from Germany, three from South Korea, one from China and five from Japan.

The new requirements meant that the robot had to be re-engineered to be wireless, so in November the team sent it back to Boston Dynamics to be completely rebuilt, “from the shins up” as one engineer put it. Now Pratt and Johnson and their team are working on getting the refurbished, upgraded, newly wireless robot properly calibrated. “It’s pretty much all the growing pains that come with a new robot,” Stephen says. “Things break, design decisions have to be reconsidered, except on a way shorter timescale. The pressure’s been intense in that regard.” The upgrade took two weeks longer than expected, so they’re under the gun.

According to the master schedule, which takes the form of a large Excel document prominently displayed on one wall, the project for the day is “arm tuning.” “Robots are really bad with awareness,” Johnson explains. “Everybody knows where their hands are. In fact you can close your eyes and still know where they are. Robots don’t get that for free, so you have to build awareness of where the body is. And what that involves is knowing where every single joint in the robot is.” In essence the robot is relearning to read the sensors on its joints and to figure out from that data where its arms and hands are.
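What Johnson calls awareness is, in software terms, forward kinematics: start from the joint-angle sensors, chain the known limb lengths and rotations together, and you get a claim about where the hand is. The sketch below is only an illustration of that idea for a hypothetical two-joint planar arm with made-up link lengths, not IHMC's code; it shows why a calibration error of a few degrees at one joint is enough to put the hand centimeters off target, which is what a day of "arm tuning" is meant to squeeze out.

```python
import math

# A minimal sketch of "knowing where your hand is" from joint sensors alone.
# This is NOT the IHMC software: it assumes a hypothetical planar arm with
# two joints (shoulder, elbow) and made-up link lengths in meters.

UPPER_ARM = 0.35   # shoulder-to-elbow length (assumed)
FOREARM = 0.30     # elbow-to-hand length (assumed)

def hand_position(shoulder_angle, elbow_angle):
    """Forward kinematics: joint angles in radians -> hand (x, y) in the
    shoulder's frame. Each joint encoder contributes one rotation."""
    elbow_x = UPPER_ARM * math.cos(shoulder_angle)
    elbow_y = UPPER_ARM * math.sin(shoulder_angle)
    hand_x = elbow_x + FOREARM * math.cos(shoulder_angle + elbow_angle)
    hand_y = elbow_y + FOREARM * math.sin(shoulder_angle + elbow_angle)
    return hand_x, hand_y

# "Arm tuning" amounts to making sure the encoder readings fed into a chain
# of calculations like this one match the physical joints: a 3-degree offset
# at the shoulder alone moves the computed hand position by centimeters.
print(hand_position(math.radians(30), math.radians(45)))
print(hand_position(math.radians(33), math.radians(45)))
```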

Progress is glacial. It’s a good thing robots don’t get bored, because things break constantly. The lidar cuts out. The radio cuts out. “It’s dropping mad packets,” somebody says (meaning that data is getting lost somewhere between the operators and the robot). Someone else murmurs darkly about leaking hydraulic fluid. It’s the sort of process that would lend itself well to time-lapse photography. At this stage the team’s attitude toward the robot is a notch or two less than entirely affectionate–they treat it like some combination of mental patient, hapless child and truculent prisoner. I keep calling the robot “he,” but its handlers always refer to it as “it.”

He/it stands with its legs slightly bent–it can’t straighten them and lock its knees the way a person would. (Stephen explains: “The short version is, when the knees are perfectly straight, in the underlying math you end up having to figure out how to avoid dividing by zero.” I can’t quite tell if he’s joking or not.) It’s an uncanny presence: it moves more or less like a human being, but then it will have an abrupt full-body spasm, freeze in some unintentionally funny pose and fall over.
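The divide-by-zero, at least, is a standard hazard in the math of legged robots. For a simple two-link leg, turning a desired foot motion into joint commands means inverting a small matrix (the Jacobian) whose determinant contains the sine of the knee angle; straighten the knee and that determinant goes to zero. The following is a textbook-style toy with assumed link lengths, not the team's controller:

```python
import math

# Why a perfectly straight knee is a divide-by-zero hazard: for a toy
# two-link planar "leg" (thigh + shank), the 2x2 Jacobian that maps joint
# rates to foot velocity has determinant THIGH * SHANK * sin(knee_angle).
# Inverting it, which is how a desired foot motion becomes joint commands,
# means dividing by that determinant, and it vanishes at full knee extension.

THIGH, SHANK = 0.42, 0.40  # link lengths in meters (made up)

def foot_jacobian(hip, knee):
    """Jacobian of the foot position with respect to (hip, knee) angles in radians."""
    j11 = -THIGH * math.sin(hip) - SHANK * math.sin(hip + knee)
    j12 = -SHANK * math.sin(hip + knee)
    j21 = THIGH * math.cos(hip) + SHANK * math.cos(hip + knee)
    j22 = SHANK * math.cos(hip + knee)
    return ((j11, j12), (j21, j22))

def joint_rates_for_foot_velocity(vx, vy, hip, knee):
    """Invert the Jacobian: desired foot velocity -> (hip_rate, knee_rate)."""
    (a, b), (c, d) = foot_jacobian(hip, knee)
    det = a * d - b * c        # algebraically equal to THIGH * SHANK * sin(knee)
    if abs(det) < 1e-9:
        raise ZeroDivisionError("knee (nearly) straight: Jacobian is singular")
    return ((d * vx - b * vy) / det, (-c * vx + a * vy) / det)

print(joint_rates_for_foot_velocity(0.0, -0.1, math.radians(20), math.radians(40)))
print(THIGH * SHANK * math.sin(math.radians(0.1)))  # determinant is nearly zero near a straight knee
```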

Its unpredictability makes it dangerous to be around, and the team tends to stay out of its area unless absolutely necessary. “Atlas is very strong,” Stephen says. “I wouldn’t say it’s necessarily a very safe robot to be close to. An arm flung out–you could definitely hurt somebody.” The engineers have a long PVC pipe with a boxing glove stuck on the end that they use for poking the robot cautiously, from a safe distance. They also keep handy a device that looks like a flashlight with a big button on the end: a kill switch.

As a shakeout procedure, to make sure everything’s going as planned, the robot is going to try turning the valve–this challenge was directly inspired by a valve at the Fukushima nuclear plant. One screen shows the output from the robot’s lidar sensor, and you can see the robot focusing on the circular valve handle: to the robot the world appears to be made of clouds of tiny data points, like swarms of bees, displayed in blazing rainbow false colors–different colors indicate different distances–and overlaid with floating orbs and circles that track the movements of its arms and hands.
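That false-color view is a common way of making raw lidar data legible: each return is a 3-D point, and its distance from the sensor picks its hue, so near and far surfaces land at opposite ends of the rainbow. Here is a minimal sketch of the idea, using a few invented points rather than a real scan and not the actual operator display:

```python
import colorsys
import math

# Sketch of the false-color trick: each lidar return is a 3-D point, and its
# distance from the sensor chooses its hue. Points and range limit are made up.

MAX_RANGE = 10.0  # meters; anything farther is clamped (assumed)

def distance_to_rgb(point):
    """Map a point's distance from the sensor to an RGB color (0-255 per channel)."""
    x, y, z = point
    dist = math.sqrt(x * x + y * y + z * z)
    hue = min(dist / MAX_RANGE, 1.0) * 0.75   # 0.0 = red (near) ... 0.75 = blue (far)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

# A few fake returns: the near edge of a valve handle and a farther wall.
cloud = [(1.2, 0.1, 1.4), (1.2, -0.1, 1.5), (6.0, 0.3, 1.1)]
for p in cloud:
    print(p, distance_to_rgb(p))
```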

After 15 tense minutes of preliminary adjustments the robot suddenly snaps and starts flinging its arms madly back and forth. The effect isn’t worlds away from the robot on Lost in Space waving its arms and saying, “Does not compute!” The tantrum gets more and more violent until finally it knocks over a table, backhanded, and has to be shut down and hoisted back into its tree.

There’s a chorus of ironic laughs and cheers. Work stops for two hours as engineers pore over code and sensor logs. The schedule is trashed. I ask Johnson if he can think of a single realistic depiction of a robot in a movie. “No,” he says. “It would be too boring.”

One Small Step

The most feared of all the challenges in the finals is the car, not so much the driving of it as the getting out of it–the team refers to it as “the egress.” In the semifinals in 2013 the IHMC team made a strategic decision that the robot wasn’t going to drive the car at all. It was too hard, and too risky, and not worth enough points.

But to contend for first place in the finals, they’re pretty sure that the robot will have to at least attempt all the tasks. “It’s just a really challenging thing to get a robot that barely fits into a vehicle to get out of it,” Johnson says. “It’s sort of like a very heavy elderly person who’s legally blind trying to get out of a car.” To make matters worse, the egress is one of the first things the robot has to do. “Atlas currently cannot survive a fall, unless it happens to land just right,” Jerry Pratt says. “We drive, we go to get out of the car, the robot falls and breaks, and that’s it for the weekend, because it can’t be fixed.” There’s a real chance they could exit the competition with a score of exactly one point.

Keep in mind, this is a car that doesn’t even have doors. It’s a good example of how the things you’d think would be easy for a robot can actually turn out to be very, very hard. Another one would be the simple act of walking. Bipedal robots fall down, a lot. “Walking and balancing–we take it for granted, but it’s not that easy to describe mathematically,” Stephen says. “It’s something that we as humans do subconsciously, so to try and understand it and formalize it and then tell something else how to do it, it’s not necessarily easy.”

Which raises a question: If it takes so much backbreaking math, why teach a robot to walk at all? It’s a basic requirement of the DARPA Challenge, and a staple of science fiction, but when you think about it, why bother? Predator drones don’t walk. Roombas don’t walk. R2-D2 doesn’t walk. The attachment to legs and really the human form at all seems a little old-fashioned, even atavistic. The surface of the earth is a challenging enough environment for a robot as it is. Why not just put wheels on the robot and call it a day? That way it wouldn’t fall over all the time, plus you wouldn’t have to teach it to get in and out of cars, because it would basically be a car. Why even make a robot look like a human?

This is not an uncontroversial topic in the world of robotics. The conventional argument in favor of humanoid robots is that they’re better at operating in environments that were built by and for humans. “Doorways have a certain width, door handles have a certain height, the steering wheel on cars is in a certain place, the pedals are in a certain place,” Gill Pratt says. “All of these things are built for our form. If you want a machine to adapt to it, that makes lots of sense.”

But there’s room for disagreement on this score. Colin Angle is one of the world’s foremost roboticists and the CEO of iRobot, a prominent supplier of robots to the military; it also makes the Roomba. One thing iRobot doesn’t make is humanoid robots. “Walking robots aren’t particularly practical,” Angle says. He prefers wheels or even tank-style tracks–as examples he gives iRobot’s Kobra and PackBot robots, which are marketed to military and civil defense agencies. “They can run up stairs at 5 to 10 m.p.h. They don’t have to step, and you can drop them off the second story of buildings and they’ll survive. They’re designed to operate in human-style spaces, but they’re radically simpler solutions than legs.” He supports competitions like the Robotics Challenge as a way to stimulate innovation, but he points out that when Fukushima happened, there were in fact rescue robots already available. They just didn’t look like people. “When push came to shove a few years back, when the world needed a robot to go inside a reactor and help figure out how to shut it down, the robot that went in had tracks.”

There are good arguments on both sides. Jerry Pratt is eloquent on the topic of the human body’s exceptional mobility and its ultimate superiority to other forms in dealing with rough terrain. “Humans and primates are just so good at getting places,” he says. “You can crawl under a table, get on top of a table, move the table, you can climb over a garbage can, you can squeeze between objects. Imagine a door that’s wedged so it can only open about 10 inches: a human can get through that, no problem. The dimensions of a human are just really well suited for mobility through a really challenging environment.” Though if there’s one thing everyone agrees on, it’s that walking robots aren’t anywhere near ready for the field yet. Hence the DARPA Challenge.

Another issue that gets debated a lot in the robotics world is autonomy. Exactly how much independence should robots have? We think of robots that look like humans as thinking like humans too, but for the majority of the DARPA Challenge they’re actually being operated remotely by people. The IHMC robot has four computers on board, one for low-level functions and three to handle high-level operations like analyzing sensor data, but mostly they just keep it standing upright and oriented in the world. In terms of where it goes and what it does, it’s largely being driven the way you’d drive a remote-controlled car.

In spirit at least the DARPA Challenge encourages what it calls task-level autonomy: once a robot is where it needs to be, you tell it, “Open the door” or “Turn the valve,” and it then does the actual job all by itself. That’s one reason communication with human operators in the finals will be intermittent: robots that can fly solo when they have to will have an advantage. Gill Pratt compares the operator-robot partnership to that between a coach and a quarterback. “The machine only occasionally learns what the supervisor’s intent is, and the human only occasionally learns about what the situation is like on the other side.”
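Reduced to code, task-level autonomy means the operator sends one short command and the robot sequences the low-level steps on its own, reporting back only when the degraded link happens to be up. The sketch below is a cartoon of that division of labor, with invented task names and a randomly dropping link; it is not DARPA's protocol or IHMC's operator interface.

```python
import random

# Cartoon of task-level autonomy under intermittent communications: the
# operator issues a high-level command, the robot executes the steps itself,
# and progress reports only get through when the simulated link is up.

TASKS = {
    "turn_valve": ["walk_to_valve", "locate_handle", "grasp_handle", "rotate_360"],
    "open_door": ["walk_to_door", "locate_handle", "grasp_handle", "pull_open"],
}

def link_is_up():
    """Stand-in for a degraded wireless link that only works some of the time."""
    return random.random() > 0.5

def run_task(task_name):
    for step in TASKS[task_name]:
        # The robot does each step on its own; it never waits for the operator.
        print(f"[robot] executing {step}")
        if link_is_up():
            print(f"[operator console] status received: {step} done")
        else:
            print("[operator console] ... no telemetry ...")

run_task("turn_valve")
```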

As far as Johnson is concerned, complete autonomy in robots is not only overrated, it’s not even necessarily desirable–it’s unlikely robots will ever get to a point where total unsupervised independence is a good idea. “Even if they’re super capable and do everything, there’s always going to be something you want to talk about to get the job done properly,” Johnson argues. “The domains we’re looking at for robotics, space exploration–do you really not want human involvement in that? Disaster response? Military operations–you better want human involvement in that! So to try to design stuff to eliminate humans is a fallacy.” After all, it’s rarely a good idea to give even human beings total autonomy. “[IHMC founder and CEO] Ken Ford likes to say, autonomous is not what you want, not even in people. He says autonomous people are the ones we have to incarcerate. ‘Cause if you’re fully autonomous and you don’t play well with others, that’s usually a bad sign.”

Fail Better

One of the key differences between humanoid robots and driverless cars is that everybody knows what driverless cars are for. It’s not as clear what the market for robots like Atlas is. Everyone’s watching Google, which spearheaded self-driving cars and which has rolled up at least half a dozen promising robotics companies in the past two years, but nobody has any idea yet, literally none, what its plans are. (Though Google did file a patent in March for a robot that can change its personality to suit different circumstances.) Gill Pratt suggests health care and elder care as possible practical applications. Jerry Pratt thinks a bit bigger. “Long term–and who knows if it’s going to be 10 or 15 or 200 years–I think the big commercial application is in a person’s home: emptying the dishwasher, doing their laundry.” He’s bullish on the robotic exploration and colonization of Mars.

But he acknowledges that there are a lot of problems to be solved before then. The balance problem. The cost problem. The safety problem. And of course before we can have robot butlers, or nurses, or first responders, or interplanetary colonists, the IHMC robot is going to have to figure out how to turn that valve handle.

Twenty-four hours after its table-whacking Lost in Space tantrum, it’s back in action. It paces toward the valve handle, bent-legged, cautious and intent, like an entomologist who has spotted a rare butterfly. It reaches out a claw-hand, stops and is overcome with a Parkinson’s-like tremor. Reboot. After half an hour it manages to put one hand on the valve, then it freezes again. It may have overheated. There’s talk of an air bubble in the hydraulic line. Reboot.

On the next try it comes in more slowly, swaying from side to side like a canny boxer. The robot reaches out, concentrating furiously, touches the handle, grips it–and slowly, deliberately rotates it 360 degrees, a quarter-turn at a time.

There is no cheering. The engineers continue to mill around. Someone recaps an April Fool’s joke from a Linux mailing list. The robot lets go of the handle and starts trembling again. A few minutes later it will get confused about where the floor is and fall over and be hauled up to the ceiling again–a wire-fu artist on a cigarette break. “There’s the old saying, If you fail to plan, then you’re planning to fail,” Johnson says. “I think Ben Franklin said that? In robotics, if you don’t plan to fail, you’re going to fail. You have to just count on failure.”

–WITH REPORTING BY ALEX FITZPATRICK AND DAN KEDMEY/NEW YORK CITY
