
Search Engines May Seem All-Knowing, But They’re Not. Here’s How to Get More Trustworthy Results

Tenner is a visiting scholar at Rutgers University and the Smithsonian Institution and the author most recently of The Efficiency Paradox: What Big Data Can’t Do

The most famous dictum of the science fiction writer and futurist Arthur C. Clarke may be his Third Law: “Any sufficiently advanced technology is indistinguishable from magic.” And for most of us, the results of 21st-century search engines — Google, Bing, Yahoo and others — can seem uncannily accurate. But when it comes to learning, instant gratification can be as much a bug as a feature.

Take high school students today. They have grown up using search engines and other web resources; they don’t need to understand how these tools work in order to use them. In fact, thanks to what’s called machine learning, search engines and other software can become more accurate over time — and even the people who write the code for them may not be able to explain why.

What’s the problem with tools that feel so natural to the generation that has grown up using them? It is that, just as a stage magician may use elaborately concealed machinery to accomplish a trick, search engines rely on hidden mechanisms that people need to know about. Users may have picked up searching “naturally,” the way they picked up a sport, but they still need coaching to avoid wasted effort and injuries. Searching needs to be taught — to everyone, but in schools particularly.

The very strength of modern search engines — promoting sources that are cited by other frequently cited sources — can’t always filter out bad or even fake information that is popular enough. Of course, newspapers, magazines and books have always passed inaccuracies along to one another; the former standard biography of the inspirational novelist Horatio Alger still influences some reference books, even though its author admitted fabricating sources decades ago. The difference is that we once were more likely to turn to trusted sources — from newspapers to massive encyclopedias — and had some recognition of their biases.
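
To see why citation-based ranking can reward popularity rather than accuracy, consider a toy sketch of the idea: a simplified PageRank-style score (not Google's actual algorithm) over a handful of hypothetical pages.

```python
# A toy illustration of citation-based ranking: a simplified PageRank-style
# iteration, not Google's actual algorithm. Page names are hypothetical.
# Each page passes its score, split evenly, to the pages it links to.

links = {
    "reliable_encyclopedia": ["reliable_news"],
    "reliable_news": ["reliable_encyclopedia"],
    "hoax_blog_a": ["hoax_blog_b", "hoax_blog_c"],
    "hoax_blog_b": ["hoax_blog_a", "hoax_blog_c"],
    "hoax_blog_c": ["hoax_blog_a", "hoax_blog_b"],
}

damping = 0.85                      # standard PageRank damping factor
pages = list(links)
score = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):                 # iterate until the scores settle
    new_score = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        for target in outgoing:
            new_score[target] += damping * score[page] / len(outgoing)
    score = new_score

for page, s in sorted(score.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {s:.3f}")

# The tightly interlinked hoax blogs score just as well as the reliable pair:
# the ranking measures how pages cite one another, not whether they are true.
```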

Now, the rankings of search engines are the result of inscrutable and anonymous yet authoritative-seeming processes that can sometimes hide falsity and bias. Part of the reason is that search engines are designed to appeal to what they perceive or predict as your values. For example, a search for information about alternative medicine will yield different pages in different nations depending on the attitudes of medical elites and of patients. If your previous queries have suggested an attitude, pro or con, search results may be biased to give you more of the same rather than to find the most scientifically rigorous conclusions.

Other results are elevated in the rankings not by geographic inference or personal search history but by techniques called search engine optimization, which can be legitimate and useful but can also give an advantage to sites skilled at gaming the algorithm. This might make them more prominent, even if their information is incorrect or manipulative. Even without conscious intervention, search engine results can reflect racist attitudes. There is an old computer saying, “garbage in, garbage out.” If racists and sexists use a phrase often, the search engine may mindlessly reflect their attitudes. For instance, in 2004, the year Google went public, searching for the word “Jew” on the site called up anti-Semitic sites. Although both of Google’s founders have Jewish or partly Jewish family backgrounds, the company on principle resisted any attempt to suppress what it considered the objective results of its programming. Organized hate groups can also manipulate rankings with social media campaigns.

Extremist tinkering with results can be especially dangerous because search engines and many other apps are designed to inspire a flattering feeling of mastery. Psychologists call this tendency the illusion of control. Think of casino bettors who believe their technique can affect a roll of the dice. The Dunning-Kruger effect, a related pitfall, is the tendency for people who are ignorant of a subject to be unaware of how little they know. This may always have been a problem for some students, but the web seems to make it unnecessary to know facts because they are so easy to look up — a problem that compounds when the information a person finds is faulty.

The Dunning-Kruger effect, in turn, points to another challenge: to choose among a number of alternative sites yielded by a search, it’s often necessary to know a lot about the subject already. High school students, for instance, may be highly knowledgeable about some things but not necessarily about academic subjects. Wikipedia articles often rank highly in searches, and Wikipedia editors are usually quick to catch vandalism and to correct misinformation like, say, false Horatio Alger documents. But the very strength of Wikipedia — that so many editors add to, delete from and modify each other’s work — makes it more difficult for one of its pages to achieve the kind of systematic, clear contextualization required to teach complex and unfamiliar ideas. The reason is a paradox called the curse of knowledge, defined by the psychologist Steven Pinker as “the failure to understand that other people don’t know what we know.”

Social and computer scientists have discovered over the last decade or so that, as a result of these four challenges, young people who have grown up with the web — the so-called “digital natives” — are no more skilled than older people at using electronic technology. Search engine companies themselves acknowledge the need for education. Dan Russell, who studies user behavior for Google, found that only 10% of users know how to find a word in a web page or document using the Control-F command.

While many teachers are aware of the challenge of teaching search literacy, it’s unlikely that secondary schools — many woefully underfunded — will be able to make time for yet another subject. Fortunately, search engine companies are aware of the pedagogical problems they have inadvertently created, and Google and Microsoft offer online resources for teachers and students.

The right way to teach search skills isn’t to add yet another required subject, as legislators and administrators often do. Although some formal instruction will be necessary, we really need to make search a natural part of lessons and even of vocational and on-the-job training, encouraging students of all kinds to collaborate in their searches and enlisting librarians — the search professionals — as coaches.

Search skills are the key not only to learning, but to learning how to learn. They can enable you to explore and make discoveries of all sorts — including those that seem both silly and profound. For instance, years ago, I bought a rare set of nineteenth-century lithographed posters at an auction in Chicago. Almost nothing about the artist or his company appeared in library catalogs or periodical indexes. The Chicago History Museum had no file. Then a fascinating story emerged from dozens of digitized newspapers and other sites searchable through Google. I traced the artist, named John McGreer, from his youth in the Mississippi River town of Muscatine, Iowa, to Chicago, where he witnessed the great fire of 1871 and co-founded a thriving business, the Cartoon Publishing Company, that churned out crude but attention-getting novelties. My posters, I discovered, were intended to be displayed in store windows to attract customers. The artist’s work also helped promote the dime museums that flourished in American cities after P.T. Barnum’s American Museum became the talk of New York City.

The searching led me to further oddities: McGreer had a gift for weird legal trouble, including federal criminal prosecution for counterfeiting nickels. (Another search revealed the existence of a thriving nickel-counterfeiting scene in the 1890s and led me to articles about the phenomenon in numismatic magazines.) And through digitized newspapers online, I discovered his sad end in 1907, drowned by the wake of a steamer while painting on a barge moored on the Hudson River. This was a special sort of history that had been unavailable in the print resources around me. Now I am planning an exhibition around my once-inscrutable posters, and expect that search will reveal a lot more about the vanished world of dime-museum art.

Search engines don’t deliver truth on a platter. They are more like shop assistants who may have to go back to the stockroom again and again until they find what you are looking for. We customers must learn to ask the right questions in the right way. And the more we learn, the more useful the questions we will be able to ask.

Correction: June 26

The original version of this story misstated the name of Arthur C. Clarke’s law. It is his Third Law, not his Third Law of Robotics.

