
Google’s New Email Feature Will Make Your Inbox Way Less Annoying


Sometimes it’s the quick and easy emails that go unanswered. Perhaps a friend is asking if you’re available for dinner, and you forgot to respond even though you were free that night.

Google’s latest feature for Inbox, the email app it unveiled last October, attempts to make situations like this easier to handle by suggesting responses to emails. Before the user even presses the reply button, Google will suggest three responses based on the content of a received email.

The feature, which is called Smart Reply and launches on Nov. 5, uses machine learning to understand the context of a message and compose replies that make sense.

Machine learning refers to computer algorithms that can learn how to do things, such as completing a task or making predictions, without being explicitly programmed to do so.
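To make that idea concrete, here is a deliberately tiny sketch of learning from examples rather than from hand-written rules. The messages, labels, and the word-counting approach are all invented for illustration; a real system like Smart Reply learns from vastly more data with far more sophisticated models.

```python
from collections import Counter

# Toy training data: (message, label). All examples are invented
# for illustration only.
examples = [
    ("are you free for dinner tonight", "invitation"),
    ("want to grab lunch tomorrow", "invitation"),
    ("the quarterly report is attached", "work"),
    ("please review the budget spreadsheet", "work"),
]

# "Learning" here is just counting which words co-occur with which
# label -- the program is never told the rules explicitly.
word_counts = {"invitation": Counter(), "work": Counter()}
for text, label in examples:
    word_counts[label].update(text.split())

def predict(text):
    # Score each label by how many of the message's words it has
    # seen under that label, and pick the highest-scoring one.
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(predict("free for lunch on friday"))  # -> invitation
```

Notice that no rule like “lunch means invitation” was ever written down; the association was inferred from the examples, which is the core of what “learning” means here.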

Google uses machine learning in several of its apps and services, including its Photos app, which can identify the subjects of photos so that users can perform highly specific searches. For instance, a search for “dogs” should pull up all of the images in your library that contain dogs.

Google promises this new Smart Reply feature will improve as the user chooses responses from its suggestions more often. This makes sense — the more Smart Reply is used, the more Google learns about how that particular user typically responds to emails. Therefore, it can make predictions that are more accurate.
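One plausible (and purely hypothetical) way to picture this personalization is re-ranking candidate replies by how often a given user has chosen them before. Google has not described Smart Reply’s internals in these terms; this sketch only illustrates the feedback loop the article describes.

```python
from collections import Counter

class ReplyRanker:
    """Hypothetical sketch: float a user's habitual replies to the top."""

    def __init__(self):
        self.picks = Counter()  # reply text -> times the user chose it

    def record_choice(self, reply):
        # Called each time the user taps one of the suggestions.
        self.picks[reply] += 1

    def rank(self, candidates):
        # Most frequently chosen replies come first; ties keep their
        # original (model-suggested) order because Python's sort is stable.
        return sorted(candidates, key=lambda r: -self.picks[r])

ranker = ReplyRanker()
ranker.record_choice("Sounds good!")
ranker.record_choice("Sounds good!")
ranker.record_choice("I'll check and get back to you.")

suggested = ["Sure, see you then.", "Sounds good!",
             "I'll check and get back to you."]
print(ranker.rank(suggested)[0])  # -> Sounds good!
```

The more choices the ranker records, the more its ordering reflects that particular user’s habits, which is the feedback loop the article describes.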

Smart Reply isn’t revolutionary — similar features exist on smartwatches, as typing on a tiny screen or speaking into a watch isn’t usually ideal. Still, it seems like a handy addition for responding to emails quickly on the go.

See the Fantastically Weird Images Google’s Self-Evolving Software Made

These pictures aren’t from the mind of a person on psychedelic drugs. Instead, they represent the way Google’s AI software reinterprets different images.

Google’s software, called an artificial neural network, learns in a way loosely modeled on the human brain. Google trains the network to see by feeding it millions of images that teach it how to interpret different objects. The network is built of 10 to 30 layers of artificial neurons, each of which interprets a different level of complexity in an image.

These pictures were made by feeding the network an arbitrary image and allowing it to enhance whatever it deems most important. The images serve as a way to test how well the network has learned during training. The results are colorful, jarring and beautiful, and each layer of the network can come up with a different interpretation.

Google calls this process of abstract reinterpretation of images “inceptionism.” Certain objects are often reinterpreted in similar ways: horizon lines fill with towers and pagodas, rocks turn into buildings, and birds or insects appear in pictures of leaves. In addition to reinterpreting images, the network can create “dreams” from a random-noise image by continually building new impressions on top of its old ones.

In the future, neural networks could be used by artists as a new form of visual expression. Google will continue using neural networks for everything from voice recognition to identifying people in its Photos app.
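The “enhance whatever it deems most important” step can be sketched with a minimal toy. A real network has many convolutional layers; here a single hand-picked linear filter stands in for one neuron, and the loop nudges a random-noise “image” to make that neuron fire harder, which is the basic idea behind these pictures. Everything below is an illustrative assumption, not Google’s implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=8)          # start from random noise
w = np.array([1., -1., 1., -1., 1., -1., 1., -1.])  # the toy "neuron"

def activation(x):
    # How strongly the neuron fires on this image.
    return float(w @ x)

before = activation(image)
for _ in range(20):
    # The gradient of (w @ x) with respect to x is just w, so nudging
    # the image along w makes the neuron fire harder -- the image is
    # changed to exaggerate whatever pattern the neuron responds to.
    image += 0.1 * w
after = activation(image)

print(before < after)  # the pattern the neuron "likes" has been amplified
```

Repeating this for real convolutional layers, which respond to textures, shapes, or whole objects, is what fills horizons with pagodas and leaves with birds.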

