The technology giant faces the biggest shift since its founding
Google.com had a good run. For years, it was the entryway to the World Wide Web for millions of people every single day. For years, it was the core moneymaker for what has become the most valuable Internet company of all time. And it created a new verb at the same time it destroyed the need to remember all kinds of basic minutiae, like state capitals, website URLs or the definition of the word “minutiae.”
Strike that, Google.com had a great run.
But, with more and more of our time online being spent far away from desktop computers, the website’s days as the central focus of its parent company have come to an end. Google wrote the first line of the website’s elegy in a May blog post announcing that mobile searches had surpassed desktop searches in at least ten countries, including the company’s biggest market, the United States. In the post, Google called the shift a “tremendous opportunity.”
In person, Amit Singhal, Google’s senior vice president in charge of all search-related products, acknowledges that the shift also brings unprecedented challenges for the company. “Mobile has actually made us very vulnerable in that sense because the future is nowhere close to what we earned on desktop over ten, 12 years of hard work,” he says, sitting in the company’s Mountain View, Calif., headquarters. “We are having to start from scratch again.”
While Google.com is not going anywhere, the difficulty for Google is that the way we access information is undergoing its most fundamental shift since we were first introduced to ten blue links in 1998. Google’s traditional search result listings, against which it serves ads to generate much of its revenue, are less than ideal to scroll through on a smartphone. They’re an impractical annoyance on a smart watch or smart television. And they’re impossible to implement safely in a moving car. Google plans to plug its software into all these devices—and many more—so it has begun to systematically rethink the way it presents results to users.
The solution starting to take shape is a brew of Google’s myriad Internet services, ambitious artificial intelligence and massive troves of user data. It is accessible in two closely related products bundled in the company’s mobile app: voice search, which lets users speak their questions instead of typing them, and Google Now, a predictive service that shows users vital information before they actually go searching for it. The company’s hope is that, together, these transform the concept of “Googling” from something that happens via a static search bar into a kind of ongoing conversation with an omniscient assistant, ready to step in and fulfill any request—even ones you haven’t thought of yet.
If Google doesn’t figure out how to make the perfect virtual assistant, another tech company will. Apple’s Siri is likely the most famous competitor, automatically installed on hundreds of millions of iPhones and this year migrating to the Apple Watch and Apple TV. Microsoft’s Cortana is an integral part of its new operating system, Windows 10. Amazon has released a smart home appliance called Echo that sits in your living room and awaits voice commands. Facebook has M, a digital assistant accessible through its Messenger mobile app.
These companies view assistants as the way to control the cars, homes and other connected devices of the future. Every user need they can fulfill through their services is one less query being fed into the Google search box, and often, one fewer set of ads enticing users to click to destinations elsewhere. For Google, which will make an estimated $44 billion from search ads alone in 2015, the stakes couldn’t be higher.
In a series of in-depth interviews, Google executives, designers and researchers provided TIME the clearest picture yet of the company’s plan to transform itself in the coming years. “Google had such a clear role in terms of connecting users to information on the desktop Web,” says Aparna Chennapragada, a product director for Google Now. “What is the next Google? Is Google the next Google? That’s the kind of question we think about.”
Amit Singhal has at least two obsessions: search and Star Trek. The 47-year-old joined Google as its 176th employee in 2000 and has been working on search since. He’s also been spreading the gospel of Star Trek, a franchise he’s loved since his time as a boy in the mountainous region of Uttar Pradesh, India. At one point the company’s voice search project was under the codename Majel, a reference to the woman who voiced the Starship Enterprise’s AI computer.
It’s no surprise that Singhal eventually combined his two passions in the form of a prototype wearable modeled after the communicator that Captain Picard and company use to interact with the Enterprise. The Bluetooth-enabled lapel pin, which Google has never before discussed publicly, is equipped with a microphone and is activated with a simple tap. The device, which could output sound through a speaker or accompanying headphones, allows users to talk to Google without having to fish out their cell phones.
“I always wanted that pin,” says Singhal. “You just ask it anything and it works. That’s why we were like, ‘Let’s go prototype that and see how it feels.’” The device has not made it past the testing phase, but it shows the extent to which Google engineers are willing to go to find a natural new way to search.
Googlers regularly invoke the Star Trek computer or, more recently, Scarlett Johansson’s digital persona in the film Her, when laying out their vision for how people will interact with the company’s services in the future. But there are a lot of knotty problems to solve first. The most challenging has to do with voice—both the voice of the user and the voice of the company’s computer persona that responds to human questions.
In the old search box, Google interacts with users on purely transactional terms. We type a half-formed thought into the query bar and wade through blue links until we find what we were after. Or we assume if it’s not indexed by Google, it probably doesn’t exist. Studies show people have gotten worse at remembering facts but better at remembering how to find them on the Internet.
But when we open our mouths to search, the dynamic changes. There’s suddenly an expectation that Google will not only hear and understand every word we say, but also that it will respond in a natural, concise way, like another person would. “Your phone has to be your friend,” says Francoise Beaufays, a research scientist at Google specializing in speech recognition. “It needs to be able to understand those very open, natural-language type of queries so that the user feels comfortable with it.”
Google has mostly solved the “hearing” part of the problem. Thanks to improved listening algorithms and greater computing power, the word error rate for voice search has fallen from 23% to 8% in the last two years. Scott Huffman, Google’s VP of engineering for conversational search, got a thrill in September when he was able to use voice search in a crowded Barcelona night club and it actually registered all his words. “I think you can argue that speech is at least as accurate as typing, and maybe more,” he says.
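The 23%-to-8% figure Google cites is a word error rate, the standard speech-recognition metric: the minimum number of word substitutions, insertions and deletions needed to turn the system’s transcription into the correct reference, divided by the reference length. As a rough illustration (not Google’s code, just the textbook definition):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: minimum edit distance (substitutions,
    insertions, deletions) between word sequences, divided by the
    number of words in the reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance, computed over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

By this measure, mishearing one word in a five-word query (“how tall *his* barack obama”) is a 20% error rate; Google’s current 8% means fewer than one word in twelve is wrong.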
But transcribing words isn’t the same as understanding human language. Human beings are able to use context, apply multiple senses and draw on vast past experiences to interpret what other people are saying. Here, Google is still a long way off from achieving science fiction-level computer comprehension. “Meaning is something that has eluded computer science,” says Singhal. “Natural language processing—or understanding what was said—is one of the key nuts we will have to crack.”
Part of the solution is giving Google’s “brain”—called the Knowledge Graph—a large number of facts it can draw from. In computer science, a “graph” is a collection of interconnected objects, like a family tree. Microsoft and Facebook each have one. So does Google: launched in 2012, the Knowledge Graph is a massive database that culls information from sources like Wikipedia so that the site can answer questions directly instead of just showing users blue links.
When a user asks “How tall is Barack Obama?” Google’s algorithms can look up all the information known about the President and respond with his height. Today, the Knowledge Graph has more than 1 billion entries, with new ones regularly added as users indicate interest in new topics via their search queries. Eventually the company wants at least 10 billion entries, Singhal says.
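In miniature, a knowledge graph is just entities carrying attributes, connected by named relationships. The toy sketch below shows the idea—answering a factual question by looking up an attribute, or following an edge to a related entity. The schema and data here are purely illustrative placeholders, not Google’s actual Knowledge Graph model:

```python
# Toy knowledge graph: each entity has attributes (facts about it)
# and edges (named links to other entities). Hypothetical schema.
GRAPH = {
    "Barack Obama": {
        "attributes": {"height": "6 ft 1 in", "occupation": "politician"},
        "edges": {"spouse": "Michelle Obama"},
    },
    "Michelle Obama": {
        "attributes": {"occupation": "lawyer and writer"},
        "edges": {"spouse": "Barack Obama"},
    },
}

def lookup(entity: str, attribute: str) -> str:
    """Answer a direct factual question, e.g. 'How tall is Barack Obama?'"""
    return GRAPH[entity]["attributes"][attribute]

def follow(entity: str, relation: str) -> str:
    """Traverse one edge, e.g. to resolve 'Who's he married to?'"""
    return GRAPH[entity]["edges"][relation]
```

So `lookup("Barack Obama", "height")` answers the height question directly, and `follow("Barack Obama", "spouse")` is the kind of one-hop traversal that lets a follow-up question about “he” resolve to Michelle.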
Thanks to its increasing sophistication at drawing connections between things in the Knowledge Graph, Google’s voice assistant is inching toward more conversational interactions. If you follow up the Obama question by asking, “Who’s he married to?” the Google app will know you’re asking about Michelle. A new feature rolled out Nov. 16 lets users ask even more complex questions that involve juggling two data points, such as “Who was the U.S. president when the Angels won the World Series?”
But even as advancements allow users to be less formal, the Google voice assistant remains devoid of personality, or even a readily marketable name. Ask Apple’s Siri what the meaning of life is and she can rattle off more than a dozen wisecracks: “I Kant answer that”; “A movie”; “All evidence to date suggests it’s chocolate.” Ask Google the same question and all you’ll see is a listing of search results.
This lack of charisma is deliberate, Google says. Incorporating highly scripted jokes into its assistant might boost the Google app’s charm, but it would give users a false impression of the program’s capabilities, Singhal says. “I’m not saying personality shouldn’t come, but the science to get that right doesn’t fully exist.”
Siri, by contrast, has been telling jokes since its inception in 2010. In fact, it may garner more headlines for its humor than its core functionality. “You’ve seen what happens in real life,” Singhal says. “That is interesting for a day or two, but then it kind of…loses its charm, let’s say.”
Undergirding all these interactions is Google’s massive investment in machine learning algorithms—programs that can essentially teach themselves to become more accurate without being tweaked by a human engineer. These powerful tools have become an obsession at the company, powering everything from YouTube recommendations to driverless cars to many aspects of search. In Google’s offices, signs above the urinals with titles like “Learning on the Loo” and “Teaching at the Toilet” offer a reminder of the importance of self-teaching code. “Machine learning is a core, transformative way by which we’re rethinking everything we’re doing,” CEO Sundar Pichai said during the company’s most recent earnings call.
Google’s voice algorithms, for example, become more adept at understanding unusual accents as more people with them use voice search. Google’s size, processing billions of search queries every day, is one of its key advantages over competitors as its users regularly feed its algorithms more info about the way they speak and think. “As I make sounds, one of the things your brain is doing is trying to map those sounds onto things that sort of make sense as sentences,” Huffman explains. “In some sense this language model idea is doing that same thing, but the data source we’re using is all the queries anyone’s ever said to Google.”
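The “language model idea” Huffman describes can be sketched in a few lines: build up statistics of which words follow which from a log of past queries, then use those statistics to pick the most plausible of several acoustically similar transcriptions. The mini query log and scoring rule below are a deliberately crude stand-in for the billions of real queries and far more sophisticated models Google uses:

```python
from collections import Counter

# Hypothetical mini "query log" standing in for real search traffic.
QUERY_LOG = [
    "weather in new york",
    "weather in new jersey",
    "whether or not to go",
]

def bigram_counts(log):
    """Count adjacent word pairs across all logged queries."""
    counts = Counter()
    for query in log:
        words = ["<s>"] + query.split()  # <s> marks the query start
        for a, b in zip(words, words[1:]):
            counts[(a, b)] += 1
    return counts

def score(candidate, counts):
    """Crude plausibility score: how often the candidate's word
    pairs have been seen before in the log."""
    words = ["<s>"] + candidate.split()
    return sum(counts[(a, b)] for a, b in zip(words, words[1:]))

counts = bigram_counts(QUERY_LOG)
# Two transcriptions that sound nearly identical:
candidates = ["weather in new york", "whether in new york"]
best = max(candidates, key=lambda c: score(c, counts))
```

Here the model prefers “weather in new york” because people who start a query with “weather” overwhelmingly continue with “in,” while “whether in” almost never occurs—the same disambiguation, at vastly larger scale, that Huffman says Google performs with every query anyone has ever spoken to it.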
While not as sexy as a computer that can answer any question, the predictive arm of Google’s assistant, Google Now, could one day be an even more powerful evolution of traditional search. Launched in 2012, Now aims to seek out information for users before they even think of typing it into the search box.
Initially Now offered obvious features that were readily accessible via apps, like weather forecasts and sports scores. But the information available is getting increasingly sophisticated, making use of Google’s different services (and its tracking capabilities). The program will pick up on your daily commute schedule and use real-time traffic data to recommend when you should leave home to make it to work on time. It will comb through your emails to produce info cards showing your upcoming flight times, purchased movie tickets or incoming package shipments. Google Now can even remember where you parked your car.
The next, more ambitious phase of Google Now is just becoming available this fall. Called Now on Tap, the new feature allows Google to scan whatever is on a user’s smartphone screen and then pull up pertinent info or links to relevant apps. In a text conversation about The Martian, Now on Tap can pull up an informational card about the movie, a direct link to its YouTube trailer and a link to book tickets for the film through Fandango. View an Instagram photo of a friend on a vacation in Jamaica, and Now on Tap will pull up an info card with factoids about the country. Google says the service can scan any screen in any standard Android app for related information. “We think about this as the easy button for the phone,” says Chennapragada, a Google Now product director. Now on Tap is available on the latest version of Google’s mobile operating system, Android 6.0 or “Marshmallow,” which began rolling out to Android phones in October.
This only scratches the surface of Google’s plans. The company wants to implement Now cards that can tell you which of the restaurants near you has the shortest line, present vacation itineraries for an upcoming long weekend or send you reminders to take your prescription medicine. And the company is courting third-party developers to allow their data to be used to serve up new types of Now cards. Chennapragada draws a comparison to the evolution of Google Maps, which has made navigating cities easier. “Think of this as Google Now helping you not get lost in many, many parts of your life,” she says.
For users, Now presents an easy way to glean information from many different services without having to burrow through apps, copy and paste text into search boxes or fiddle with a mobile browser. For Google, the program places the company at the center of the mobile experience in the same way most people’s desktop experience is routed through Google.com. The company is currently trying to convince websites to allow its search bots to include content from their apps within Google’s search results, a process known as app indexing. Google has indexed 100 billion pages within apps so far, but some notable holdouts, including Amazon, make the scope of in-app searches more limited compared to the Web.
In addition to selling competitors on its plans with Now, Google has to convince users they should trade their personal data for convenience. Now is an opt-in service, and it works best when you essentially hand the digital keys of your life over to Google and trust the company to drive. Apple, which is building out its own predictive search feature into Siri, has been quick to cast Google’s data-hungry model as a potential invasion of privacy. In a June speech, Apple CEO Tim Cook did everything but call out Google by name when he said, “Some of the most prominent and successful companies have built their businesses by lulling their customers into complacency about their personal information. They’re gobbling up everything they can learn about you and trying to monetize it. We think that’s wrong.”
Google stresses that it doesn’t sell user data to third parties, but the company isn’t apologetic about the fact that Now works better the more Google knows about you. “We can build a far better future by knowing a little bit more about you,” Singhal says. “People should only opt in if they get value out of it. Otherwise they should go to ‘My Accounts’ and just delete all that data.”
Exactly how many people are finding that value is an open question. Google won’t disclose how many people use Google Now, saying only that the service is seeing “strong growth.” One telling point: instead of tracking monthly active users, the go-to metric for consumer tech products, Chennapragada says her team is fixated on the growth of daily active users, the people who have deeply integrated the app into their everyday routines.
At the product meetings where Google plans out the future of its search products, the desktop is rarely discussed. Even when talking about new features geared toward mobile, product leads are often peppered with questions about how people might interact with a new mechanism differently if it was in their car, on their wrist or in their living room. “More and more we’re spending our time thinking about the devices,” says Huffman. “With these devices, voice is really the only option.”
Google, like its competitors, has spent the last few years cramming its software into a variety of form factors. So far, there have been no breakout hits. Android Wear, Google’s operating system for smart watches, has been quickly outsold by the Apple Watch. Android Auto, a system that lets users link their phones to their cars, is locked in a battle with Apple’s CarPlay and facing some pushback from automakers who don’t want to share their data with Google. And Google Glass, the company’s much-hyped wearable once billed as the next iPhone, has been shelved indefinitely. Nest, a manufacturer of smart thermostats that Google acquired in 2014, has seen more success.
What the company will try next is anyone’s guess. Huffman envisions a future where a tiny device with Google’s digital assistant smarts could be inserted into a traditional home appliance, like a refrigerator or a television. Singhal could see projectors being implemented in the smart home. The company has filed patents for dolls that listen to their owners, robots that can download new personality types from the cloud and search results that are ranked based on how long you look at certain items on a screen.
“We don’t know what devices are coming, but we know they’re coming,” Singhal says. “Search and natural language is how you will interact with them and get information and services that you need in the moment, no matter if you’re cooking or driving or walking your dog or playing catch with your son.”
It’s easier to envision how Google’s predictive software will work across devices, no matter the form factor. The company envisions being able to answer more complex lifestyle questions in the future, making its digital assistant more like a life coach than a secretary. Tell Google Now you want to spend more time with your family, and the app might prompt you to call your significant other when it knows you have 30 minutes between two meetings on your calendar. If it knows you have a long weekend coming up, it could use your travel history to proactively suggest similar vacation spots that are in your typical price range.
To come up with new features for its assistant, the company regularly conducts surveys in which it asks people to disclose their daily needs at a given moment, whether Google can actually solve the problem or not. The answers to a recent survey in India were sprawled across a small room at the Googleplex via hundreds of notecards, organized into categories like “life improvement,” “parenting” and “cricket.” Looking through people’s daily struggles, you can almost envision how Google wants to transition from surfacing basic, empirical facts (“best waffle in town”) to helping people meet specific goals (“I want to be slimmer by December”) or developing deep bonds with its users (“I want to know how to be happy”). “I want Google Now to help me not only just do the next thing,” Singhal says. “I want it to enable a better experience in this beautiful journey that we call life.”
But if Google’s plan truly is to build the Star Trek computer, it is going to want a return on its investment. Right now, there are no ads in Google Now, and the company hasn’t discussed ways ads might be integrated into the service in the future. Chennapragada will only say that monetization will come after the company “gets the right experience for users,” but it’s easy to imagine people being pushed some kind of sponsored content in Now based on their location.
Beyond the smartphone, it’s also not entirely clear how Google would insert sponsored messages into platforms controlled primarily through voice controls, like Android Auto. But even when Google’s not selling ads directly, it’s hoovering up information about its users that can make ads served on other platforms more valuable. “A lot of the opportunity is more around the data collection to understand consumer behavior and what they’re searching for in all these different places,” says Cathy Boyle, a senior analyst at eMarketer. “Then when they are in front of a screen, pull all that together and be able to deliver a more relevant message.”
In Google’s idealized version of the future, we’ll be more dependent on the company than ever before. Already we’ve replaced memorization of basic facts with the search box, and knowledge of the layout of cities with Google Maps. As technology “fades into the background,” as Singhal puts it, the company’s presence in our lives could become both more pervasive and less overt, a series of continual, small interactions with Google rather than a transactional visit to its homepage. Many of Google’s competitors have the same aim.
Singhal says this can be liberating. “Google converts data to information. You convert information to knowledge. And life converts knowledge to wisdom—for some,” he says. “If Google gives me the answer, that’s a good thing so that I can spend that extra five minutes or three minutes with our children. That enriches me in a different way. So that’s how I feel. We are kind of liberating humanity’s time from the mundane to the higher-order bits. I will never apologize for that.”