The world’s most curious computer is now scanning millions of online books and images in an attempt to understand all of the web’s images the way a human might.
Computer scientists at the University of Washington say the new program, called Learn Everything About Anything, or LEVAN, could produce more intuitive responses to image searches. The program begins with a basic search term like “shrimp.” It searches for the word across millions of Google Books, taking note of every modifier, be it “boiled,” “fried,” “steamed,” or “peppered.” Armed with a Bubba Gump-like knowledge of shrimp, it searches the web for shrimp pictures, grouping them by appearance under the categories it has just learned. The result? A visual grouping of pictures that’s a feast for the eyes.
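The first step — mining modifiers from text — can be illustrated with a toy sketch. This is not LEVAN’s actual code; it assumes a hypothetical list of sentences standing in for the Google Books corpus and simply counts words that appear immediately before the concept:

```python
from collections import Counter

# Hypothetical toy corpus standing in for Google Books text
corpus = [
    "boiled shrimp with butter",
    "fried shrimp and chips",
    "steamed shrimp dumplings",
    "fried shrimp sandwich",
]

def learn_modifiers(concept, sentences):
    """Count the words that immediately precede the concept --
    a crude stand-in for LEVAN's mining of sub-concepts."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.lower().split()
        for prev, word in zip(words, words[1:]):
            if word == concept:
                counts[prev] += 1
    return counts

print(learn_modifiers("shrimp", corpus).most_common())
# [('fried', 2), ('boiled', 1), ('steamed', 1)]
```

Each discovered modifier (“fried,” “boiled,” and so on) would then seed its own image search, and the returned pictures would be clustered under that sub-concept.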
The search results spare users from clicking through page after page of nearly identical-looking pictures. And unlike current visual groupings, no human curator is needed. “The new program needs no human supervision, and thus can automatically learn the visual knowledge for any concept,” said research scientist Santosh Divvala.
There’s just one drawback — LEVAN has a lot to learn. It currently has the vocabulary of a toddler and takes upwards of 12 hours to learn broader terms, such as “angry.” As a result, researchers have invited the public to pitch their own one-word concepts to LEVAN, because evidently it takes a village to raise an artificially intelligent algorithm.