A version of Pong is played on the original Magnavox Odyssey 200 during the launch party for the International Video Game Hall of Fame and Museum on August 13, 2009 in Ottumwa, Iowa.
David Greedy—Getty Images
By Alex Fitzpatrick
February 25, 2015

Need a new gaming buddy? Just call DeepMind.

The artificial intelligence company, owned by Google, has developed an algorithm that can learn how to play almost 50 classic arcade games nearly from scratch, according to a paper published Wednesday in the journal Nature.

The system can learn titles from Pong to Space Invaders after receiving the same generic instructions no matter which game it's about to play, a big improvement over computers programmed from the outset to master a single game, like chess.

While this research sounds like it's all fun and games, it has big implications for artificial intelligence. According to Nature, DeepMind uses a combination of AI technologies inspired by the human brain that let it learn from experience as well as respond to rewards—in this case, high scores in video games—much like people respond to a jolt of dopamine. That means DeepMind could give researchers new insight into how to replicate human brain functions in digital code.
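The reward-driven learning the paper describes can be sketched in miniature. DeepMind's actual system pairs this idea with a deep neural network reading raw screen pixels; the toy table below, with hypothetical states and actions, only illustrates how a reward (a rising score) gradually shapes which action the agent prefers:

```python
import random

ACTIONS = ["left", "right", "stay"]     # hypothetical game controls
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

q = {}  # (state, action) -> estimated long-term value, default 0.0

def choose(state):
    # Explore occasionally; otherwise pick the best-known action.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    # Nudge the estimate toward: reward + discounted best future value.
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Toy episode: repeatedly rewarding "right" in state "s0" makes it the
# agent's preferred move there, with no game-specific programming.
for _ in range(200):
    update("s0", "right", 1.0, "s1")
    update("s0", "left", 0.0, "s1")
```

After those updates, `q[("s0", "right")]` dominates `q[("s0", "left")]`, so `choose("s0")` usually returns `"right"`. The real system learns values for joystick moves from score changes in much the same spirit, just at vastly larger scale.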

Still, DeepMind’s software isn’t about to destroy all your high scores. Nature points out it has trouble with maze games because it “struggles to link actions with distant consequences,” not unlike most of your buddies in high school. And for now, it can’t take what it learns from one game and apply it to another similar title.

Google bought DeepMind in January of last year for a reported $650 million.

[Nature]
