TIME Innovation

Google Reveals Drone Delivery Plans

The company just started conducting tests after two years of work

Google has begun testing a small number of drones as it explores a possible delivery service powered by the unmanned aerial vehicles, the company revealed Thursday.

Google has been working on Project Wing for two years but only began testing the drones in the Australian outback this month, the Washington Post reports. In trials, the small aircraft have flown between 130 and 195 ft. (well above houses and trees) and delivered items such as a water bottle and a first-aid kit. Google says it conducted more than 30 successful flights, including one that traveled more than half a mile.

While the Federal Aviation Administration is still developing guidelines and regulations for commercial drones, several companies besides Google, including Amazon, Facebook and Disney, have opened up about possible applications for drones, including delivery capabilities and high-speed Internet services. According to Google, it will be years before the company develops a fully functional delivery service with drones traveling along preprogrammed routes.

“When you can get something near-instantly, it changes how you think about it,” the company said in a statement. “Think of the mom stuck at home with two sick kids, the hiker who’s met a poisonous snake, or the farmer out in the field with a sick animal. It could also open up new models for sharing goods rather than owning them — who needs a power drill for more than eight minutes a year?”

[WP]

TIME robotics

This Robot Army Can Organize Itself

Expect the future to be overrun with millions more like it

Scientists at Harvard University have created an army of over a thousand tiny robots that can communicate with each other to perform complex actions. The breakthrough could lay the framework for future robot brigades that collaborate to execute large tasks such as environmental cleanup.

The 1,024 simple bots, called Kilobots, are each only a few centimeters wide, but communicate with each other using infrared light to create large star- or K-shaped formations. Only the initial instruction to form up needs to be given — after that, Kilobots organize themselves and cooperate with each other to smooth out logjams or redirect bots that have wandered off-course.
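
The swarm's self-organization rests on a few local primitives; one of them, gradient formation, is easy to sketch. In the toy Python below, each bot learns its hop distance from a seed using only messages from neighbors within "infrared range" — an illustrative caricature of the idea, not the actual Kilobot firmware or its API:

```python
import math

def form_gradient(positions, seed_index, comm_range):
    """Each bot repeatedly sets its value to 1 + min(neighbor values)."""
    INF = float("inf")
    values = [INF] * len(positions)
    values[seed_index] = 0  # the seed bot broadcasts distance 0
    changed = True
    while changed:
        changed = False
        for i, (xi, yi) in enumerate(positions):
            # A bot only hears neighbors within communication range.
            heard = [
                values[j]
                for j, (xj, yj) in enumerate(positions)
                if j != i and math.hypot(xi - xj, yi - yj) <= comm_range
            ]
            best = min(heard, default=INF)
            if best + 1 < values[i]:
                values[i] = best + 1
                changed = True
    return values

# Four bots in a line, 1 unit apart, hearing only bots within 1.5 units:
bots = [(0, 0), (1, 0), (2, 0), (3, 0)]
print(form_gradient(bots, 0, 1.5))  # hop counts: [0, 1, 2, 3]
```

No bot ever sees the whole swarm; the global hop-count map emerges purely from each bot comparing itself to its neighbors, which is the same flavor of local rule that lets the real Kilobots build shapes.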

Michael Rubenstein, the lead author of the study published in the journal Science, says that Kilobots mimic units found in nature such as a group of ants that link together to ford a river, or a body of cells that assemble to form an organism. “Biological collectives involve enormous numbers of cooperating entities — whether you think of cells or insects or animals — that together accomplish a single task that is a magnitude beyond the scale of any individual,” Rubenstein said in a statement released by the Harvard School of Engineering and Applied Sciences.

Although scientists have directed simple bots to complete tasks before, this is the first time that such a large company has operated together. Radhika Nagpal, one of the researchers in the study, says that the Kilobots demonstrate the potential of robots to self-organize on a larger scale. “Increasingly, we’re going to see large numbers of robots working together, whether it’s hundreds of robots cooperating to achieve environmental cleanup or a quick disaster response, or millions of self-driving cars on our highways,” Nagpal said in a statement. “Understanding how to design ‘good’ systems at that scale will be critical.”

TIME Innovation

Origami-like Robot Folds Itself Into Shape and Walks Away

As a writer, the path of least resistance would be to frame this piece along the lines of small robots assembling themselves, then building bigger robots, then taking over the world. The old robot-overlord routine.

But these MIT- and Harvard-developed self-folding robots — cool as they are — don’t look all that menacing quite yet. For starters, one of the key ingredients is polystyrene, which is the same stuff used in Shrinky Dinks. That’s adorable. Second, it takes around four minutes for the things to assemble and start walking away. And third, the assembly has to be pre-programmed, so there’s still some human intervention.

Thirty years from now? That might be a different story. I’ll be retired (or homeless) on a beach somewhere, though, so I’ll just head for the water if these things start getting uppity. They can’t swim, can they? Can they?!!

In the interim, the researchers envision self-assembling structures that could be used in dangerous places like space or battlefields.

[ExtremeTech]

TIME robotics

This Is What a ‘Family Robot’ Looks Like

Its name is Jibo, and it’s a sign of how emotion will play a key role in the integration of robots in our daily lives

This post is in partnership with Fortune, which offers the latest business and finance news. Read the article below originally published at Fortune.com.

At first glance, Jibo looks a bit like Wall-E’s robot girlfriend. Both Jibo, a real robot, and Wall-E’s girlfriend, the fictional Pixar character, have the look of a futuristic Apple product: reflective white plastic, round curves, a black screen for a “face,” and smooth swiveling movements.

But Jibo’s raison d’être is slightly more in line with Rosie, the robot maid from the 1960s animated television series The Jetsons, and its operating system is more akin to the one employed by Samantha, the artificially intelligent character from the 2013 Spike Jonze film Her. (One key difference: Jibo is male, according to its makers.)

Jibo is described as a “family robot” because it is able to see, hear, speak, learn, and help families with a variety of tasks around the house. It—he?—can “relate” by expressing itself in natural language, using “social and emotive cues so you understand each other better.” Jibo is meant to be a companion.

It’s the creation of a team of robot architects, cloud computing engineers, animators, conversational technologists, and human-robot interaction engineers. Jibo, Inc. is backed by $5.59 million in venture funding from investors including Charles River Ventures, Fairhaven Capital Partners, Osage University Partners, and angel backers.

So far, people like Jibo. A crowdfunding campaign, launched last month, raked in more than $1.5 million from more than 3,500 people, handily surpassing its $100,000 target. (The Boston-based company does not expect to ship its first units, priced at $499 each, until the 2015 winter holiday season. The crowdfunding campaign is designed to get developers excited about building apps for the robot, it said.)

Naturally, I had to meet Jibo. Off to a hotel room in Midtown Manhattan, then, where two Jibos and Dr. Cynthia Breazeal, the robot’s creator, awaited me. The robot is not yet fully functioning, it turns out. I watched a prepared demo where Jibo, about a foot tall, turned to look me in the eye. This was disarming at first, as if I was being followed by a security camera. Once he started talking to me, it began to feel more natural—as natural as a robot in a 1980s science fiction movie, anyway. Unlike his lesser robotic peers, or, say, a smartphone, Jibo did not rudely buzz or ding when there was a new message to communicate to me. He politely said, “Excuse me, Erin,” and waited for me to respond before continuing.

For the rest of the story, go to Fortune.com.

TIME robotics

That Jibo Robot Does the Same Stuff as Your Phone, but People Are Freaking Out Anyway


Jibo promises to be a lovable robot assistant, but it's unclear why you'd actually need one.

A crowdfunding campaign for a “family robot” called Jibo is picking up steam, blowing through its fundraising goals within the first day.

What is Jibo? It’s a little pod with a motorized swivel, equipped with cameras, microphones and a display. It recognizes faces and voices, and can act as a personal assistant by setting reminders, delivering messages and offering to take group photos. It also serves as a telepresence robot for video chat.

As of now, Jibo has raised more than $200,000 on Indiegogo–well beyond its $100,000 goal–and has racked up plenty of breathless coverage. Early bird pricing of $100 sold out long ago, but you can still claim a unit for $499, with an estimated December 2015 ship date.

Sorry to burst the hype bubble, but I’m not seeing how Jibo will be more practical than a phone, a tablet or even a wearable device. Most of the things Jibo promises to do can be done better by the handset in your pocket–which, by the way, you don’t have to lug around from tabletop to tabletop.

To see what I mean, let’s deconstruct the scenario in Jibo’s pitch video, in which a man gets home from a long day at work. Jibo, perched on a nearby counter, turns on the lights, records an order for Chinese take-out, then starts reading back a voicemail from his girlfriend. The man then doubles the take-out order on the fly.

It’s the kind of demo that makes perfect sense unless you think about it too much. If home automation goes mainstream, a dedicated robot won’t be necessary, because our phones will do a better job of signaling when we’ve walked through the front door. The idea of having your messages read to you when you get home is a throwback to answering machines, which are obsolete now that we can check our messages from anywhere. As for the take-out order, you’ve got to be the dullest person in the world to order “the usual” every time you get home, and I’m not sure the man’s girlfriend will take kindly to having no input on what food she gets.

There is something to be said for a device that can persistently listen for your commands and act on them, but this is the same problem that wearable devices are trying to solve, and they’re better-suited to being wherever you are. While group photos and telepresence are potentially useful, now we’re getting into some very specific situations that don’t really justify a $500 purchase, regardless of how endearing Jibo tries to be. The only way Jibo makes sense as a robot is if it gains more physical capabilities, like a way to clean your windows or cook dinner, but it’s far too early to say whether that’s going to happen.

Maybe it’s unfair for me to judge at such an early stage, but that’s exactly what Jibo is trying to do through crowdfunding. The creators are asking people to throw money at something they’ve never seen, that has only been shown to the press in limited demos, and that won’t even ship until the tail end of next year. All we have to go on right now is a slick-looking pitch video and a whole bunch of promises. As talented as the folks behind Jibo seem to be, I’ve seen enough undercooked crowdfunded projects to know that some skepticism is in order.

TIME technology

FAA Won’t Allow Deliveries By Drone

This undated image provided by Amazon.com shows the so-called Prime Air unmanned aircraft project that Amazon is working on in its research and development labs. Amazon/AP

The Federal Aviation Administration is upholding a ban on using drones for commercial purposes, including delivering packages, according to a memo released this week.

The FAA has long said that commercial drone use is illegal, but a federal judge ruled in March that the FAA must accept public comment before adopting the rules, according to Ars Technica. The recent memo is a call for public input on its rules.

Drones that are used to deliver packages for a fee are barred under current statutes, the FAA emphasized in its Monday memo, even if the seller offers free shipping.

Drones also can’t be used for photographing a property for real estate listings, or determining whether crops need to be watered as part of a commercial farming operation, for instance.

The recent memo also means that Amazon’s hyped drone deliveries won’t be happening anytime soon, though there could be new rules in place in the coming years after the public comment period ends.

The National Park Service banned drone flights last week from its parks, saying that drones disturb wildlife and can endanger climbers.

 

TIME robotics

Meet Pepper, the Robot Who Can Read Your Emotions

A company in Japan has created a robot that can interact with humans on an emotional level

A robot designed to read—and more importantly, respond to—users’ moods was unveiled this week by SoftBank, a Japanese telecommunications and Internet company.

Pepper, who stands 4 feet tall and weighs about 62 pounds, is equipped with facial-recognition technology and a number of cameras, audio recorders and sensors. That technology allows the robot to learn how to behave over time, instead of being programmed for specific tasks, SoftBank said.

“Our aim is to develop affectionate robots that can make people smile,” said Masayoshi Son, the billionaire behind SoftBank.

The humanoid, which is set to go on sale in Japan in February 2015, will cost about 198,000 yen ($1,900).

 

 

TIME robotics

How to Make Driverless Cars Behave

As self-driving cars become more advanced, auto makers may have to answer centuries-old philosophical debates–and they're starting to realize it.

Imagine you’re winding through the Pacific Coast Highway in one of Google’s self-driving cars, with the ocean on your right and the hills of Malibu across the opposite lane to your left. Just as you’re turning one of the road’s blind corners, another car whips around the bend in the opposite direction. Its brakes have failed, and it’s headed for your lane.

With little room to maneuver and no time for human intervention, your robot car faces a decision. It could turn inward and slam the brakes to avoid a head-on collision, but this would potentially let the other car sail over the cliff wall. Alternatively, your car could brace for impact, keeping both cars on the road but potentially injuring you, along with the other car’s passengers.

In a crash situation, we don’t have time to think about morality, and studies show we act more on instinct. But for a computer, a fraction of a second is plenty of time to ponder an ethical decision–provided it’s been programmed to think that way.

The problem is that the answers aren’t always clear-cut. Should a driverless car jeopardize its passenger’s safety to save someone else’s life? Does the action change if the other vehicle is causing the crash? What if there are more passengers in the other car? Less morbidly, should a Google-powered car be able to divert your route to drive past an advertiser’s business? Should the driver be able to influence these hypothetical decisions before getting into the vehicle?

As driverless cars get closer to hitting the road, moral dilemmas are something the auto industry will need to consider. And while it’s still early days for the technology, a conversation about ethics is starting to happen.

The Daimler and Benz Foundation, for instance, is funding a research project about how driverless cars will change society. Part of that project, led by California Polytechnic State University professor Patrick Lin, will be focused on ethics. Lin has arguably thought about the ethics of driverless cars more than anyone. He’s written about the topic for Wired and Forbes, and is currently on sabbatical working with the Center for Automotive Research at Stanford (CARS, of course), a group that partners with auto industry members on future technology.

Over the last year, Lin has been convincing the auto industry that it should be thinking about ethics, including briefings with Tesla Motors and auto supplier Bosch, and talks at Stanford with major industry players.

“I’ve been telling them that, at this very early stage, what’s important isn’t so much nailing down the right answers to difficult ethical dilemmas, but to raise awareness that ethics will matter much more as cars become more autonomous,” Lin wrote in an e-mail. “It’s about being thoughtful about certain decisions and able to defend them–in other words, it’s about showing your math.”

In a phone interview, Lin said that industry representatives often react to his talks with astonishment, as they realize driverless cars require ethical considerations.

Still, he said that it’s early days for driverless cars–we’re a long way from having computers that can read a situation like a human can–so manufacturers don’t yet have to worry too deeply about solving ethical scenarios. “We’re not quite there yet where we can collect a lot of the information that goes into some of these ethical dilemmas,” Lin said.

Perhaps that explains why auto makers aren’t eager to have the discussion in public at the moment. BMW, Ford and Audi–which are each working on automated driving features in their cars–declined to comment for this story. Google also wouldn’t comment on the record, even as it prepares to test fully autonomous cars with no steering wheels. And the auto makers that did comment are focused on the idea that the first driverless cars won’t take ethics into account at all.

“The cars are designed to minimize the overall risk for a traffic accident,” Volvo spokeswoman Malin Persson said in an e-mail. “If the situation is unsure, the car is made to come to a safe stop.” (Volvo, by the way, says it wants to eliminate serious injuries or deaths in its cars by 2020, but research has shown that even driverless cars will inevitably crash.)

John Capp, GM’s director of electrical and control systems research, said in an interview that getting to the point where a driverless car needs to account for ethics will be a gradual, step-by-step process. (GM plans to offer a “Super Cruise” feature for highway driving by 2018, but it’ll still require the driver to stay alert and take the wheel in emergencies.)

“It’s going to be a while before cars have the capability to completely replace a driver, and we have no intention of promising that to a driver until the technology is capable of doing it,” Capp said.

Capp has a point, in that even Google’s recently-announced city-dwelling cars have only a rudimentary ability to detect what’s around them. They can’t, for instance, distinguish an elderly person from a child, or figure out how many people are in another vehicle. But as Lin points out, it’s likely that these cars will gain more advanced sensors over time, improving their ability to make decisions on the driver’s behalf.

Once that happens, the actual programming should be easy. Stanford’s CARS group, for instance, has already developed tools so that auto makers can code morality into their cars. It’s one part of a larger framework that CARS is offering to auto makers, covering all aspects of driverless car software.

“These are mathematical problems that we can describe and solve,” Chris Gerdes, a Stanford engineering professor and the program director at CARS, said in an interview. “And so it’s really up to manufacturers, then, if they are interested in adopting some of these ideas, in whole or in part.”

So far, Stanford’s partners–including Audi, Volkswagen, Ford and Nissan–have been more interested in other aspects of the software, Gerdes said. But that’s starting to change now, as Gerdes and Lin have been raising awareness about ethics.

Gerdes noted that the framework doesn’t actually define what’s morally right and wrong. In other words, the dirty work of answering centuries-old philosophical debates would still have to be done by auto makers. “The question of what ethical framework is right is something that we don’t really have an answer for,” he said. “That’s where I think the discussion needs to take place.”

Noah Goodall, a researcher at the Virginia Center for Transportation Innovation and Research, has started thinking about how cars might actually distinguish right from wrong. In a paper due to be published this year, Goodall proposes a series of steps for programming robotic cars, designed to get more complex as technology advances.

In the first phase, driverless cars would simply try to minimize overall damage, guided by some basic principles. Cars may, for instance, prioritize multiple injuries over a single death–perhaps answering the anecdote at the top of this story–and property damage over human harm. If a car faces a choice between inflicting two similar injuries, it may be able to take inspiration from the medical field, which, for instance, assigns allocation scores to organ transplant recipients.
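
Goodall's first phase amounts to a cost-minimization rule, which can be caricatured in a few lines of code. The Python sketch below is purely illustrative: the damage weights and maneuver names are invented for this example, and no real system reduces the problem to numbers this crude.

```python
# Hypothetical damage weights encoding the priorities the article
# describes: property damage over human harm, injuries over deaths.
COSTS = {"death": 1000, "injury": 100, "property": 1}

def damage_cost(outcome):
    """outcome maps a damage type to a count, e.g. {'injury': 2}."""
    return sum(COSTS[kind] * n for kind, n in outcome.items())

def choose_maneuver(options):
    """options maps a maneuver name to its predicted outcome;
    the car picks whichever maneuver minimizes total damage cost."""
    return min(options, key=lambda m: damage_cost(options[m]))

# The blind-corner scenario from the top of the story, roughly:
options = {
    "swerve":     {"death": 1},     # other car sails over the cliff
    "brace":      {"injury": 2},    # both cars stay on the road
    "brake_hard": {"property": 3},  # fender damage only
}
print(choose_maneuver(options))  # -> "brake_hard"
```

The sketch also shows why the first phase can't be the whole answer: every contested ethical judgment is hidden inside the weight table, which is exactly the part Goodall's later phases try to learn from human scoring instead of hard-coding.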

But Goodall said in an interview that these rules probably won’t cover every circumstance. That’s why he’s proposed a second phase, in which driverless cars learn ethics through simulations of crashes and near-crashes. “Humans would score potential actions and results as more or less ethical, and would be allowed to score outcomes without the time constraint of an actual crash,” Goodall writes in his paper.

In the final phase, Goodall expects that computers would be able to explain their decision-making back to us with natural language, so we can tweak their thinking accordingly. It’d be a lot like teaching morality to a child as it grows into an adult.

Even with enough supporting research, it’s unclear who would be responsible for coming up with ethical rules. Auto makers could devise their own standards, or pass the buck onto the insurance industry or lawmakers. In any case, experts agree that the industry will eventually have to conform to standards for how vehicles behave.

“Is it possible that there will be some mandatory code that makes certain decisions uniform or consistent across vehicle types? I would say that’s very likely,” said Robert Hartwig, President and Chief Economist for the Insurance Information Institute.

Hartwig believes public policy makers will ultimately be the ones to require ethical standards. He noted that the aviation industry already relies on similar standards for things like crash avoidance, as defined by international regulations.

“Will there be tussles in terms of what auto makers want to see, what software manufacturers want to see, maybe even what drivers want to see? Possibly, yes,” Hartwig said. “This is not going to be a smooth process, but … in the end, the benefit is going to be clear: fewer accidents, fewer injuries, fewer deaths on the road.”

It’s possible that some of these ethical concerns are overblown, and that as long as the net number of lives saved goes up significantly, most people won’t care about a few fringe disasters. The risk, Lin said, is that just a handful of unfortunate cases could equal major payouts by auto makers in liability lawsuits. In turn, these could become setbacks for further developments.

And on a broader level, driverless cars will be the first instance of robots navigating through society on a large scale. How we end up perceiving them could have a huge impact on whether robots become a bigger part of our lives.

“These are going to be driving next to your family and through your streets,” Lin said. “This industry is really going to set the tone for all of social robotics, so they really need to get it right.”

TIME Innovation

Ready or Not, Driverless Cars Are Coming

There may be an autobot in your driveway sooner than you think

Predictions about a future in which cars that will fly, float or drive themselves have been staples of everything from science fiction to pop culture to corporate PR for decades. But now it looks like driverless cars, at least, may finally be hitting the road in short order.

Google announced as early as 2010 that it logged more than 140,000 miles in a self-driving car as part of a secret project. “While this project is very much in the experimental stage, it provides a glimpse of what transportation might look like in the future thanks to advanced computer science,” said Sebastian Thrun, a Google Fellow working on the company’s self-driving cars. “And that future is very exciting.”

Since then, Google and auto manufacturers have made great strides in refining and testing driverless technology by integrating semi-autonomous features into cars already on the market and building legal and public acceptance of the concept. But as the technology develops, questions have been raised about what it would mean if autonomous vehicles start hitting the roads in larger numbers. How do “robot cars” determine the best ways to react to an impending collision? How will human drivers and robots interact when they have to share the road? It won’t be long until we begin finding answers to these questions and others.

TIME robotics

Watch: Google’s Self-Driving Cars Head to the City

The tech giant's futuristic project is looking to overcome the obstacles of urban driving in Mountain View, Calif. after logging nearly 700,000 miles mostly on freeways

The Google self-driving car project is training its sights on city streets.

The project’s cars have already logged nearly 700,000 miles, primarily on the freeway around Google’s hometown of Mountain View, Calif. Now, Google is teaching its cars to expertly navigate the streets of Mountain View, overcoming new obstacles like bikers swerving into the lane and crossing guards holding up stop signs, the company said in its first blog update on the project since Aug. 2012.

“A mile of city driving is much more complex than a mile of freeway driving, with hundreds of different objects moving according to different rules of the road in a small area,” Chris Urmson, director of the project, writes in the post.

The Google cars may drive themselves, but a driver still stays in the car to take over if necessary, and Urmson writes that the project is still teaching its cars to master the streets of Mountain View before they tackle other towns. Google co-founder Sergey Brin said in 2012 you could “count on one hand the number of years until people, ordinary people, can experience this,” according to the Associated Press.

 
