The term “Artificial Intelligence” was coined in the 1950s by computer engineers who hoped to make computers as intelligent as humans. Since then, we’ve been able to get computers to perform pretty well at tasks such as playing chess, understanding speech, and sorting through images automatically. But because we don’t yet understand exactly how the human brain works, we still don’t have artificial “brains” that can do most tasks that a 4-year-old child has mastered. And a computer program that can beat a chess grandmaster can’t play tic-tac-toe.
Researchers have recently made good progress on a kind of computing called machine learning. In machine learning, rather than writing a detailed computer program to perform a task, you give a computer many examples of a task and try to get it to learn automatically.
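The idea of learning from examples can be made concrete with one of the simplest machine-learning algorithms, nearest-neighbor classification. This is only an illustrative sketch with made-up toy data, not any system Google uses: instead of writing rules that define a “cat” or a “dog,” we hand the program labeled examples and it classifies a new case by analogy to the closest one it has seen.

```python
# A minimal sketch of "learning from examples": 1-nearest-neighbor
# classification. No task-specific rules are written; the program
# predicts by finding the most similar labeled example.

def nearest_neighbor(examples, query):
    """examples: list of ((feature, ...), label) pairs; query: a feature tuple.
    Returns the label of the example closest to the query."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: distance(ex[0], query))[1]

# Toy, made-up data: (height_cm, weight_kg) -> animal label.
examples = [
    ((20, 4), "cat"),
    ((25, 5), "cat"),
    ((60, 25), "dog"),
    ((55, 22), "dog"),
]

print(nearest_neighbor(examples, (23, 5)))   # closest examples are cats
print(nearest_neighbor(examples, (58, 24)))  # closest examples are dogs
```

Real systems use far richer models and millions of examples, but the principle is the same: more and better examples, not more hand-written rules, make the program better.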
Researchers have even shown that you can create one learning program that becomes expert at more than one kind of task. This has fed speculation that researchers are close to building a general intelligence, but that goal still seems elusive—most experts estimate it’s decades away from being a reality.
At Google we use these learning algorithms to make our products more useful. For example, when you speak to the Google app, we do a much better job than before at understanding exactly what you’re asking for. You can use Google Translate to translate from one language into another. And computers are starting to be able to understand images better, so, for example, you can search your personal photos by just saying, “Show me my photos of a sunset at the beach.”
These are exciting developments—and they seem a lot less terrifying than the ones from science fiction. It’s clearly a good thing if we can make computers that understand language and images, and that maybe feel a bit less computer-like and a bit more human in their interactions.
Giannandrea leads Research and Machine Intelligence at Google