This Tech Could Make Google’s Future Gadgets Even Smarter

Google’s latest partnership could result in smartphones that recognize objects much the same way humans do.

Machine learning startup Movidius said recently that it’s working with Google to “accelerate the adoption of deep learning within mobile devices.” Movidius makes a vision processor that attempts to replicate human eyesight, taking into account variables like depth and texture to put objects into context. That capability, CEO Remi El-Ouazzane says, could result in much more powerful smartphones and other devices.

“When you understand the context, then there are many things you can do,” says El-Ouazzane. “You can automate tasks, you can free up the human being to do [other] things.”

El-Ouazzane refused to talk about how his company’s chip might be used in any future Google products, like Android smartphones. But he did say it would show up “in the context of personal and wearable computing.” He also argued that it could improve unmanned aerial vehicles, or drones, by giving them the ability to better make sense of the video footage they’re recording as they fly about. A drone flown by an oil rig operator, for instance, might be able to detect damage as well as analyze the extent of the problem.

“The level of information you get will be much more sophisticated,” says El-Ouazzane.

Some commercially available software, like Google’s own Photos app, can already recognize particular people or objects in photos. Searching that app for “dogs,” for instance, pulls up only images of our four-legged friends. But that software relies on far-flung computers to run the actual computations. Movidius’ chip is different because its calculations happen right on the processor without any outside help, potentially speeding up the process and removing the need for an Internet connection.

Still, the executive admitted that truly replicating human eyesight and visual recognition is a steep challenge. “What our [vision processor] is doing definitely is not as tuned or as perfect as a [human] visual cortex,” he says. “We’re fighting 540 million years of human progression, so it takes time.”

The Movidius chip relies on an emerging field called “deep learning,” referring to software that mimics the way humans learn from their experiences. Long a mostly academic concept, deep learning is starting to bear fruit in the form of consumer applications. Microsoft Skype’s real-time translator, for instance, is a result of research in this field.

But one disadvantage of deep learning systems is that they typically need to absorb massive amounts of data about a given subject before they can go off and make decisions on their own. For a typical deep learning program to correctly identify a dog, for instance, it first needs to look at lots of pictures of dogs. Movidius aims to reduce this need through what it calls “unsupervised networks” — systems that can learn to recognize different types of objects on their own, with less human intervention.
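The labeled-data requirement described above can be sketched with a deliberately simple stand-in for a neural network — a nearest-centroid classifier. The classes, feature values, and function names here are invented for illustration; the point is only that many labeled examples must go in before a single prediction can come out.

```python
# Toy illustration of supervised learning's appetite for labeled data:
# this nearest-centroid classifier (far simpler than a deep network)
# must first see labeled examples of each class before it can label
# anything new on its own.

def train(labeled_examples):
    """Average the feature vectors seen for each label into a centroid."""
    sums, counts = {}, {}
    for features, label in labeled_examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Label a new example by its closest class centroid."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

# Many labeled examples go in before any prediction comes out.
examples = [([1.0, 1.2], "dog"), ([0.9, 1.1], "dog"),
            ([4.0, 3.8], "cat"), ([4.2, 4.1], "cat")]
model = train(examples)
print(classify(model, [1.1, 1.0]))  # → dog
```

An “unsupervised” approach, as Movidius describes it, would aim to discover the groupings with far fewer (or no) human-supplied labels.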

“I think today we are reaching a very high level of performance on those networks,” says El-Ouazzane.
