TIME robotics

That Jibo Robot Does the Same Stuff as Your Phone, but People Are Freaking Out Anyway


Jibo promises to be a lovable robot assistant, but it's unclear why you'd actually need one.

A crowdfunding campaign for a “family robot” called Jibo is picking up steam, blowing through its fundraising goals within the first day.

What is Jibo? It’s a little pod with a motorized swivel, equipped with cameras, microphones and a display. It recognizes faces and voices, and can act as a personal assistant by setting reminders, delivering messages and offering to take group photos. It also serves as a telepresence robot for video chat.

As of now, Jibo has raised more than $200,000 on Indiegogo–well beyond its $100,000 goal–and has racked up plenty of breathless coverage. Early bird pricing of $100 sold out long ago, but you can still claim a unit for $499, with an estimated December 2015 ship date.

Sorry to burst the hype bubble, but I’m not seeing how Jibo will be more practical than a phone, a tablet or even a wearable device. Most of the things Jibo promises to do can be done better by the handset in your pocket–which, by the way, you don’t have to lug around from tabletop to tabletop.

To see what I mean, let’s deconstruct the scenario in Jibo’s pitch video, in which a man gets home from a long day at work. Jibo, perched on a nearby counter, turns on the lights, records an order for Chinese take-out, then starts reading back a voicemail from his girlfriend. The man then doubles the take-out order on the fly.

It’s the kind of demo that makes perfect sense unless you think about it too much. If home automation goes mainstream, a dedicated robot won’t be necessary, because our phones will do a better job of signaling when we’ve walked through the front door. The idea of having your messages read to you when you get home is a throwback to answering machines, which are obsolete now that we can check our messages from anywhere. As for the take-out order, you’ve got to be the dullest person in the world to order “the usual” every time you get home, and I’m not sure the man’s girlfriend will take kindly to having no input on what food she gets.

There is something to be said for a device that can persistently listen for your commands and act on them, but this is the same problem that wearable devices are trying to solve, and they’re better suited to being wherever you are. While group photos and telepresence are potentially useful, now we’re getting into some very specific situations that don’t really justify a $500 purchase, regardless of how endearing Jibo tries to be. The only way Jibo makes sense as a robot is if it gains more physical capabilities, like a way to clean your windows or cook dinner, but it’s far too early to say whether that’s going to happen.

Maybe it’s unfair for me to judge at such an early stage, but that’s exactly what Jibo is trying to do through crowdfunding. The creators are asking people to throw money at something they’ve never seen, that has only been shown to the press in limited demos, and that won’t even ship until the tail end of next year. All we have to go on right now is a slick-looking pitch video and a whole bunch of promises. As talented as the folks behind Jibo seem to be, I’ve seen enough undercooked crowdfunded projects to know that some skepticism is in order.

TIME technology

FAA Won’t Allow Deliveries By Drone

This undated image provided by Amazon.com shows the so-called Prime Air unmanned aircraft project that Amazon is working on in its research and development labs. Amazon/AP

The Federal Aviation Administration is upholding a ban on using drones for commercial purposes, including delivering packages, according to a memo released this week.

The FAA has long said that commercial drone use is illegal, but a federal judge ruled in March that the agency must accept public comment before adopting such rules, according to Ars Technica. The recent memo doubles as a call for that input.

Drones that are used to deliver packages for a fee are barred under current statutes, the FAA emphasized in its Monday memo, even if the seller offers free shipping.

Drones also can’t be used for photographing a property for real estate listings, or determining whether crops need to be watered as part of a commercial farming operation, for instance.

The recent memo also means that Amazon’s hyped drone deliveries won’t be happening anytime soon, though there could be new rules in place in the coming years after the public comment period ends.

The National Park Service banned drone flights from its parks last week, saying that drones disturb wildlife and can endanger climbers.

TIME robotics

Meet Pepper, the Robot Who Can Read Your Emotions

A company in Japan has created a robot that can interact with humans on an emotional level


A robot designed to read—and more importantly, respond to—users’ moods was unveiled this week by SoftBank, a Japanese internet company.

Pepper, who stands 4 feet tall and weighs about 62 pounds, is equipped with facial-recognition technology and a number of cameras, audio recorders and sensors. That technology allows the robot to learn how to behave over time, instead of being programmed for specific tasks, SoftBank said.

“Our aim is to develop affectionate robots that can make people smile,” said Masayoshi Son, the billionaire behind SoftBank.

The humanoid, which is set to go on sale in Japan in February 2015, will cost about 198,000 yen ($1,900).


TIME robotics

How to Make Driverless Cars Behave

As self-driving cars become more advanced, auto makers may have to answer centuries-old philosophical debates – and they're starting to realize it.

Imagine you’re winding along the Pacific Coast Highway in one of Google’s self-driving cars, with the ocean on your right and the hills of Malibu across the opposite lane to your left. Just as you’re turning one of the road’s blind corners, another car whips around the bend in the opposite direction. Its brakes have failed, and it’s headed for your lane.

With little room to maneuver and no time for human intervention, your robot car faces a decision. It could turn inward and slam on the brakes to avoid a head-on collision, but this would potentially let the other car sail over the cliff wall. Alternatively, your car could brace for impact, keeping both cars on the road but potentially injuring you, along with the other car’s passengers.

In a crash situation, we don’t have time to think about morality, and studies show we act more on instinct. But for a computer, a fraction of a second is plenty of time to ponder an ethical decision–provided it’s been programmed to think that way.

The problem is that the answers aren’t always clear-cut. Should a driverless car jeopardize its passenger’s safety to save someone else’s life? Does the action change if the other vehicle is causing the crash? What if there are more passengers in the other car? Less morbidly, should a Google-powered car be able to divert your route to drive past an advertiser’s business? Should the driver be able to influence these hypothetical decisions before getting into the vehicle?

As driverless cars get closer to hitting the road, moral dilemmas are something the auto industry will need to consider. And while it’s still early days for the technology, a conversation about ethics is starting to happen.

The Daimler and Benz Foundation, for instance, is funding a research project about how driverless cars will change society. Part of that project, led by California Polytechnic State University professor Patrick Lin, will be focused on ethics. Lin has arguably thought about the ethics of driverless cars more than anyone. He’s written about the topic for Wired and Forbes, and is currently on sabbatical working with the Center for Automotive Research at Stanford (CARS, of course), a group that partners with auto industry members on future technology.

Over the last year, Lin has been working to convince the auto industry that it should be thinking about ethics, through briefings with Tesla Motors and auto supplier Bosch and talks at Stanford with major industry players.

“I’ve been telling them that, at this very early stage, what’s important isn’t so much nailing down the right answers to difficult ethical dilemmas, but to raise awareness that ethics will matter much more as cars become more autonomous,” Lin wrote in an e-mail. “It’s about being thoughtful about certain decisions and able to defend them–in other words, it’s about showing your math.”

In a phone interview, Lin said that industry representatives often react to his talks with astonishment, as they realize driverless cars require ethical considerations.

Still, he said that it’s early days for driverless cars–we’re a long way from having computers that can read a situation like a human can–so manufacturers don’t yet have to worry too deeply about solving ethical scenarios. “We’re not quite there yet where we can collect a lot of the information that goes into some of these ethical dilemmas,” Lin said.

Perhaps that explains why auto makers aren’t eager to have the discussion in public at the moment. BMW, Ford and Audi–which are each working on automated driving features in their cars–declined to comment for this story. Google also wouldn’t comment on the record, even as it prepares to test fully autonomous cars with no steering wheels. And the auto makers who did comment are focused on the idea that the first driverless cars won’t take ethics into account at all.

“The cars are designed to minimize the overall risk for a traffic accident,” Volvo spokeswoman Malin Persson said in an e-mail. “If the situation is unsure, the car is made to come to a safe stop.” (Volvo, by the way, says it wants to eliminate serious injuries or deaths in its cars by 2020, but research has shown that even driverless cars will inevitably crash.)

John Capp, GM’s director of electrical and control systems research, said in an interview that getting to the point where a driverless car needs to account for ethics will be a gradual, step-by-step process. (GM plans to offer a “Super Cruise” feature for highway driving by 2018, but it’ll still require the driver to stay alert and take the wheel in emergencies.)

“It’s going to be a while before cars have the capability to completely replace a driver, and we have no intention of promising that to a driver until the technology is capable of doing it,” Capp said.

Capp has a point, in that even Google’s recently announced city-dwelling cars have only a rudimentary ability to detect what’s around them. They can’t, for instance, distinguish an elderly person from a child, or figure out how many people are in another vehicle. But as Lin points out, it’s likely that these cars will gain more advanced sensors over time, improving their ability to make decisions on the driver’s behalf.

Once that happens, the actual programming should be easy. Stanford’s CARS group, for instance, has already developed tools so that auto makers can code morality into their cars. It’s one part of a larger framework that CARS is offering to auto makers, covering all aspects of driverless car software.

“These are mathematical problems that we can describe and solve,” Chris Gerdes, a Stanford engineering professor and the program director at CARS, said in an interview. “And so it’s really up to manufacturers, then, if they are interested in adopting some of these ideas, in whole or in part.”

So far, Stanford’s partners–including Audi, Volkswagen, Ford and Nissan–have been more interested in other aspects of the software, Gerdes said. But that’s starting to change now, as Gerdes and Lin have been raising awareness about ethics.

Gerdes noted that the framework doesn’t actually define what’s morally right and wrong. In other words, the dirty work of answering centuries-old philosophical debates would still have to be done by auto makers. “The question of what ethical framework is right is something that we don’t really have an answer for,” he said. “That’s where I think the discussion needs to take place.”

Noah Goodall, a researcher at the Virginia Center for Transportation Innovation and Research, has started thinking about how cars might actually distinguish right from wrong. In a paper due to be published this year, Goodall proposes a series of steps for programming robotic cars, designed to get more complex as technology advances.

In the first phase, driverless cars would simply try to minimize overall damage, guided by some basic principles. Cars may, for instance, prioritize multiple injuries over a single death–perhaps answering the scenario at the top of this story–and property damage over human harm. If a car faces a choice between inflicting two similar injuries, it may be able to take inspiration from the medical field, which, for instance, assigns allocation scores to organ transplant recipients.
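To make that concrete, here’s a rough sketch of what a phase-one rule might look like in code. Everything in it is invented for illustration–the cost weights, the outcome fields, the maneuver names–not anything from Goodall’s paper:

```python
# A minimal, hypothetical sketch of a "phase one" decision rule:
# score each candidate maneuver with a damage-cost function and
# pick the cheapest. All weights and numbers are invented.

COST_DEATH = 1_000_000   # hypothetical weight: a death dominates everything
COST_INJURY = 10_000     # hypothetical weight per injury
COST_PROPERTY = 1        # per dollar of property damage

def damage_cost(outcome):
    """Score one predicted outcome; lower is better."""
    return (outcome["deaths"] * COST_DEATH
            + outcome["injuries"] * COST_INJURY
            + outcome["property_damage"] * COST_PROPERTY)

def choose_maneuver(options):
    """Pick the maneuver whose predicted outcome minimizes damage."""
    return min(options, key=lambda m: damage_cost(m["outcome"]))

# Example: braking injures two people; swerving kills one.
options = [
    {"name": "brake",  "outcome": {"deaths": 0, "injuries": 2, "property_damage": 30_000}},
    {"name": "swerve", "outcome": {"deaths": 1, "injuries": 0, "property_damage": 5_000}},
]
print(choose_maneuver(options)["name"])  # "brake": two injuries beat one death
```

The code itself is trivial; the ethical weight of the problem lives entirely in how those constants get chosen, which is exactly the debate Goodall and Lin are raising.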

But Goodall said in an interview that these rules probably won’t cover every circumstance. That’s why he’s proposed a second phase, in which driverless cars learn ethics through simulations of crashes and near-crashes. “Humans would score potential actions and results as more or less ethical, and would be allowed to score outcomes without the time constraint of an actual crash,” Goodall writes in his paper.
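That second phase could be framed as a routine supervised-learning problem. The sketch below is one hedged guess at its shape, with invented features and ratings: humans score simulated outcomes offline, and the car estimates the ethics of a new outcome from the ratings of the most similar training cases.

```python
# A hypothetical sketch of "phase two": estimate an ethics score for
# a new outcome from human ratings of simulated crashes, using
# k-nearest neighbors. Features and ratings are invented; real
# features would need normalization so no single one dominates.
import math

# (features, human_rating): features might encode, say,
# (people at risk, impact speed in mph, other driver at fault);
# ratings run from 0 (least ethical) to 1 (most ethical).
rated_simulations = [
    ((2, 40.0, 0), 0.2),
    ((0, 25.0, 1), 0.9),
    ((1, 60.0, 1), 0.4),
]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predicted_rating(features, k=2):
    """Average the human ratings of the k most similar simulations."""
    nearest = sorted(rated_simulations, key=lambda ex: distance(ex[0], features))[:k]
    return sum(rating for _, rating in nearest) / k

print(predicted_rating((1, 45.0, 0)))  # blends the two closest rated cases
```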

In the final phase, Goodall expects that computers would be able to explain their decision-making back to us with natural language, so we can tweak their thinking accordingly. It’d be a lot like teaching morality to a child as it grows into an adult.

Even with enough supporting research, it’s unclear who would be responsible for coming up with ethical rules. Auto makers could devise their own standards, or pass the buck onto the insurance industry or lawmakers. In any case, experts agree that the industry will eventually have to conform to standards for how vehicles behave.

“Is it possible that there will be some mandatory code that makes certain decisions uniform or consistent across vehicle types? I would say that’s very likely,” said Robert Hartwig, President and Chief Economist for the Insurance Information Institute.

Hartwig believes public policy makers will ultimately be the ones to require ethical standards. He noted that the aviation industry already relies on similar standards for things like crash avoidance, as defined by international regulations.

“Will there be tussles in terms of what auto makers want to see, what software manufacturers want to see, maybe even what drivers want to see? Possibly, yes,” Hartwig said. “This is not going to be a smooth process, but … in the end, the benefit is going to be clear: fewer accidents, fewer injuries, fewer deaths on the road.”

It’s possible that some of these ethical concerns are overblown, and that as long as the net number of lives saved goes up significantly, most people won’t care about a few fringe disasters. The risk, Lin said, is that just a handful of unfortunate cases could mean major payouts by auto makers in liability lawsuits. In turn, those could become setbacks for further development.

And on a broader level, driverless cars will be the first instance of robots navigating through society on a large scale. How we end up perceiving them could have a huge impact on whether robots become a bigger part of our lives.

“These are going to be driving next to your family and through your streets,” Lin said. “This industry is really going to set the tone for all of social robotics, so they really need to get it right.”

TIME Innovation

Ready or Not, Driverless Cars Are Coming

There may be an autobot in your driveway sooner than you think


Predictions about a future in which cars fly, float or drive themselves have been staples of everything from science fiction to pop culture to corporate PR for decades. But now it looks like driverless cars, at least, may finally be hitting the road in short order.

Google announced as early as 2010 that it had logged more than 140,000 miles in its self-driving cars as part of a secret project. “While this project is very much in the experimental stage, it provides a glimpse of what transportation might look like in the future thanks to advanced computer science,” said Sebastian Thrun, a Google Fellow working on the company’s self-driving cars. “And that future is very exciting.”

Since then, Google and auto manufacturers have made great strides in refining and testing driverless technology by integrating semi-autonomous features into cars already on the market and building legal and public acceptance of the concept. But as the technology develops, questions have been raised about what it would mean if autonomous vehicles start hitting the roads in larger numbers. How do “robot cars” determine the best ways to react to an impending collision? How will human drivers and robots interact when they have to share the road? It won’t be long until we begin finding answers to these questions and others.

TIME robotics

Watch: Google’s Self-Driving Cars Head to the City

The tech giant's futuristic project is looking to overcome the obstacles of urban driving in Mountain View, Calif., after logging nearly 700,000 miles, mostly on freeways


The Google self-driving car project is training its sights on city streets.

The project’s cars have already logged nearly 700,000 miles, primarily on the freeway around Google’s hometown of Mountain View, Calif. Now, Google is teaching its cars to expertly navigate the streets of Mountain View, overcoming new obstacles like bikers swerving into the lane and crossing guards holding up stop signs, the company said in its first blog update on the project since Aug. 2012.

“A mile of city driving is much more complex than a mile of freeway driving, with hundreds of different objects moving according to different rules of the road in a small area,” Chris Urmson, director of the project, writes in the post.

The Google cars may drive themselves, but a driver still stays in the car to take over if necessary, and Urmson writes that the project is still teaching its cars to master the streets of Mountain View before they tackle other towns. Google co-founder Sergey Brin said in 2012 you could “count on one hand the number of years until people, ordinary people, can experience this,” according to the Associated Press.

TIME space travels

Google and NASA Cooperate on Building Floating Space Robots

NASA and Google will send floating robots equipped with 3-D mapping technology into orbit this summer, enabling them to navigate autonomously for the first time. "The future is awesome," brags Google

Tech giant Google is helping NASA make its floating robots smarter.

Since last summer, the two organizations have been cooperating on equipping NASA’s SPHERES satellites with 3-D mapping technology to allow the floating robots to move around more freely. They currently use a system based on ultrasound and infrared light to navigate.

The SPHERES, which serve as robotic assistants to astronauts at the International Space Station, will be equipped with the technology behind the newly unveiled Project Tango to better get around. That technology allows phones to build real-time 3-D maps of their environment, facilitating autonomous navigation.

“Think about having a free-flying robot that can fly around inside a space station, perhaps equipped with some type of future smartphone,” Zach Moratto, one of the NASA research engineers involved in the project, said in a video about the development.

The Project Tango-equipped floating robots will be launched into orbit this summer, a statement by Google said, before concluding on a bright note: “The future is awesome.”

TIME Innovation

Smooth Moves: The History and Evolution of Honda’s ASIMO Robot


As the robotics realm continues to heat up, Honda’s ASIMO (short for Advanced Step in Innovative Mobility) is something of an old-timer.

It’s been around for 14 years, and has seen continual improvements – check out the above video for more of the backstory.

While some robots have a more menacing look – ahem, Atlas – ASIMO has always played the part of a cutesy, Jetsons-style robot meant, in Honda’s words, “to help those in society who need assistance.”

In that spirit, ASIMO is able to do things like opening and serving beverages. It knows sign language – both Japanese and English. It can avoid bumping into people in hallways. Stuff like that.

At the International Auto Show in New York last week, Honda showed off ASIMO’s latest improvements. The robot, once relatively rigid and… well, robotic, is now far more nimble, able to run, jump, climb stairs and kick soccer balls with more human-like dexterity.


TIME Surveillance

The New Cop on the Beat May Be a Bot

Knightscope K5 promises enhanced policing capabilities, courts controversy


Have we as a species learned nothing from Robocop?

A Silicon Valley company called Knightscope is currently testing a prototype robot designed to detect and monitor criminal activity, much the way a police officer or a security guard would.

The Knightscope K5 is a five-foot-tall autonomous robot (one presumes that its resemblance to a Dalek is merely coincidental) that roams around your neighborhood, observing and gathering data and trying to predict where and when criminal activity will occur.

It carries no weaponry, but it has a pretty complete sensor package that includes thermal imaging, license plate reading and facial recognition.

This takes public surveillance a step beyond stationary cameras, and the challenges to personal privacy are clear. At the same time, the K5 could do a whole lot of good by deterring crime, especially in neighborhoods that lack the resources to field an adequate police presence.

But where do you draw the line?

TIME robotics

DARPA-Funded Atlas Robot to Go Tetherless in the Next Six Months


We’ve written about Boston Dynamics’ DARPA-funded Atlas robot again and again and again, but I got a chance to meet Atlas in person today and found out that everyone’s favorite terrifying humanoid is about to lose the system of cables and tubes that feeds it a steady diet of power, fluids and data.

Atlas has been handed over to CSAIL – the Computer Science and Artificial Intelligence Laboratory – at the Massachusetts Institute of Technology, where Team MIT has been responsible for getting the robot in shape for the DARPA Robotics Challenge, a series of competitions culminating next year in a $2 million prize for the winning team.

MIT has been rejiggering Atlas’s code to make it faster and more autonomous: In the above video, postdoctoral associate Scott Kuindersma gives a brief overview of Atlas’s various parts, and explains how the robot’s being coded so that it can make its own decisions in certain instances and operate on its own during a blackout period of up to 30 seconds.

The end of the video features a demo in which Atlas comes across an obstacle – in this case, a two-by-four – and removes it from its path. This demonstration of decision-making, along with figuring out how to get Atlas to move about untethered (the actual hardware modifications will be handled by Boston Dynamics), is among the tasks the MIT team is working on for the next round of the DARPA Robotics Challenge, which takes place in December.
