
Google’s Photos App Is Getting a New Feature That’s Perfect for the Holidays


Google’s Photos app, introduced in May, is getting a new feature that makes it easier to create and share collaborative albums, the company announced Thursday. The service now supports shared albums, letting users invite friends and family members to add new photos to a given album.

After creating an album in Google Photos, users will see an option to set the album to “collaborative.” Once the feature is turned on, anyone the user shares the album with can add photos to the collection. These contributors get a notification asking if they’d like to join the album, and once they accept, the album’s creator receives a notification as well.

A Google account is required in order to contribute content and receive updates on an existing album.

It’s not a new or revolutionary idea by any means: Apple offers similar functionality with its iCloud Photo Library through a feature called iCloud Photo Sharing, which allows family and friends to subscribe to a user’s photo albums, leave comments, and get notified when new images are added.

It’s notable, though, because shared albums were one of the few big features Google’s Photos app had been lacking until this point. Google Photos already offers two key advantages over other photo storage apps: an incredibly accurate search function that can tell what’s in each photo using machine learning, and free unlimited storage for photos up to 16 megapixels.

