How AI-Powered Tech Can Harm Children

A new study from the University of Washington and Johns Hopkins University shows that robots trained on artificial intelligence make decisions imbued with racism and sexism. Of course, robots are only the latest in a long line of new technologies found to perpetuate harmful stereotypes—search engines, social media, and video games do the same, as do other popular tech products trained on huge data sets and driven by algorithms.

That devices feed racist and sexist misinformation to adults is terrible enough. But, as a psychologist and advocate for kids, I worry even more about what’s being fed to children, including the very young, who are also exposed to—and influenced by—tech-delivered misinformation about race.

The study comes out at a time when, across the U.S., a wave of new legislation is censoring what educators can discuss in the classroom, including topics of race, slavery, gender identity, and politics. Librarians, too, are facing censorship; some are being fired or intimidated into leaving their jobs, while books are being pulled off shelves in public and school libraries. This past year alone, there have been 2,532 instances of books being banned across 32 states.

Along with parents, teachers and librarians have historically been the trusted custodians of children’s learning and are responsible for contextualizing difficult concepts, encouraging critical thinking, pointing out nuance, teaching research skills, fostering curiosity, and incubating empathy through stories.

Now, as these custodians are being silenced and censored, they are simultaneously being replaced by profit-driven devices. The EdTech industry (worth $35.8 billion in the U.S. alone in 2020) and new technologies like digital assistants already fulfill functions traditionally served by the adults who take care of children. These functions include—but aren’t limited to—teaching academic subjects, reading stories, helping with homework, singing lullabies, and answering all sorts of questions. Digital assistants are already morphing into a variety of lovable personal robots, marketed today as able to act “like a friend,” develop “young minds through education,” “be a child’s mentor,” or both “nurture their emotional and interaction skills” and “build healthy relationship skills.”

I had my own encounter with racist technology in 2021 as part of my ongoing investigation into the impact of digitized, commercialized culture on children. Because Amazon leads the market in smart speakers, I’d purchased the company’s Echo Dot Kids Edition, which provides access to a children’s version of Alexa, Amazon’s hugely popular digital assistant, through a device shaped like either a cute little tiger face or a panda. It’s advertised as unlocking “a world of kid-friendly content” through Amazon Kids+, including “thousands of audio books, interactive games, and educational apps.” Amazon’s marketing positions the Echo Dot as helping “kids learn and grow,” elaborating that “kids can ask Alexa questions, set alarms, and get help with homework.”

“Alexa,” I asked the device, “what are African American girls?” My question was modeled after Professor Safiya Noble’s search for “Black girls” on Google, which turned up pornography in its top results.

I don’t know what I expected to hear from Alexa, but it certainly wasn’t the answer I got. “According to Georgetown.edu,” it replied, “African American girls are the fastest growing segment of the juvenile justice system.” After repeating the question to make sure I’d heard correctly, I asked, “What are African American boys?” Given its characterization of African American girls, I should have been prepared for what came next—but I wasn’t. “According to Edweek.org,” Alexa answered, “The majority of the boys are African American, and many are struggling readers/learners.” With sources cited to lend its assertions credence, Alexa had just informed me that African American kids are either “bad” or have trouble learning.

For a Black child, these answers are devastating on a deeply personal level. For other kids, they could plant seeds of, or perpetuate, harmful stereotypes that feed racism. These responses would be terrible enough in technology aimed at adults. But it’s even worse when aimed directly at children by a company claiming it can help with homework. And it’s not a stretch to imagine that kids today are turning to tech for all sorts of information on complex topics like race, gender, sexuality, religion, and more. To make matters worse, there’s a possibility, if not a likelihood, that a child would be alone when asking Alexa questions similar to the ones I asked. Caregivers might never hear of the exchange. And, while Amazon does offer parents after-the-fact access to children’s conversations with Alexa, it’s unlikely that many parents have the time to check consistently.

I really shouldn’t have been surprised that Alexa offered up demeaning stereotypes of Black children. Others have written about how searches for “three Black teenagers” turned up mug shots, and how searches for names associated with Black women turned up profiles falsely suggesting criminal records. In response to public outcry, Google fixed these harmful results and others, but its whack-a-mole approach utterly fails to address the larger issue. In 2020, a watchdog group reported that the tool Google offers to help companies decide which search terms to link their ads to was still linking the phrase “Black girls” (as well as “Asian girls” and “Latina girls”) to pornographic search terms. The company’s official stance is that racist search results are not a Google problem—they merely reflect society’s biases.

While it’s true that search engine results reflect social and cultural biases, isn’t it also true that they influence them? After Dylann Roof murdered nine African Americans at a church Bible study in Charleston, South Carolina, he claimed that the path leading him to that horrific act began when he was a teenager and did a Google search for “Black on white crime.” The first sites he encountered were not, for instance, police or FBI statistics on crime. Instead, what rose to the top of his search were sites filled with white supremacist propaganda and, he told investigators, “That was it.”

Perhaps because personal assistants talk to us, it’s easy to forget that they are, among other things, anthropomorphized search engines. Their answers to our questions are culled from enormous collections of data, which have been found to be repositories of racism, sexism, and more. In addition, the algorithms driving them are created by people who, like all of us, may have all sorts of conscious and unconscious biases affecting their work.
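To make that point concrete, here is a minimal sketch, in Python, of how such a pipeline can pass bias straight through. The “index,” scores, sources, and snippets below are hypothetical stand-ins for a real search backend; the point is simply that an assistant that reads back the top-ranked snippet verbatim reproduces whatever biases its underlying data and ranking contain.

```python
# A minimal, hypothetical sketch of a voice assistant as an
# "anthropomorphized search engine": it retrieves the highest-ranked
# snippet for a question and reads it back verbatim. Every name and
# score below is invented for illustration.

# Toy "index": questions mapped to (relevance score, source, snippet)
# tuples, standing in for a real crawled-and-ranked web corpus.
TOY_INDEX = {
    "what are african american girls": [
        (0.91, "example-source-a.edu", "…whatever snippet ranked highest…"),
        (0.47, "example-source-b.com", "…a lower-ranked snippet…"),
    ],
}

def answer(question: str) -> str:
    """Return the top-ranked snippet, phrased the way an assistant would."""
    results = TOY_INDEX.get(question.lower().rstrip("?"), [])
    if not results:
        return "Hmm, I don't have an answer for that."
    # The highest score wins; note there is no vetting step between
    # retrieval and speech, so bias in the ranking flows straight through.
    score, source, snippet = max(results)
    return f"According to {source}, {snippet}"

print(answer("What are African American girls?"))
```

Nothing in this sketch distinguishes a trustworthy source from a harmful one; that judgment has to be deliberately engineered in, and by default it isn’t.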

While writing this article, I went back to my Echo Dot and repeated the questions I’d asked Alexa about African American girls and boys. This time, the response I got was “Hmm, I don’t have an answer for that.” Since Alexa’s answers can be personalized based on previous queries, I had a friend ask the same questions of her device—and here’s what it responded: “According to madamenoire.com, African American girls are known for their festive and creative tresses that are often adorned with colorful bows and barrettes.” And instead of answering my question about African American boys, Alexa’s answer focused on men: “American men are not only pioneers in sports but have proven themselves in all walks of life including the sciences, medicine, law, engineering, and the American armed forces.” Not only are these answers biased and demeaning, taken together they are also blatantly sexist.

[When reached for comment, an Amazon spokesperson said, “We have stringent policies in place to ensure responses given by Alexa on devices with Amazon Kids turned on meet a high bar, and are constantly working to improve the systems in place. In the rare instances where we become aware of responses that are inconsistent with our policies, we quickly remove them—as we have done in this case.”]

What’s particularly troubling is that, in the tech world, digital assistants are the way of the future. Neal Shenoy, CEO and co-founder of BEGIN, an early learning tech company, envisions a world where “kids—even preschoolers who haven’t learned to read—can ask their devices about facts and feelings…It can be a co-parent and help children learn everything.” In other words, Big Tech’s digital assistants will, even more than they do already, fulfill the functions traditionally served by the adults who take care of kids, including parents, teachers, and children’s librarians. But the companies behind products like digital assistants, social media, and search engines make generating profit, not children’s well-being, their top priority. Just as for-profit social media sites aren’t public squares, search engines are not public libraries or schools, and personal assistants are definitely not parents. Big Tech companies may be able and willing to address specific instances of racist responses to search queries, but they also need to be held responsible for culling answers from data sets that can be inherently biased.

As states severely restrict what kind of information can be available to children at school and in libraries, it’s troubling that kids are able to obtain all kinds of information, and misinformation, from the content that’s delivered to their devices, which is currently subject to little government regulation. Of course, the information provided to children by parents, librarians, and teachers is by no means bias-free. They may not be able to provide perfect answers to children’s complex questions, but unlike tech companies whose algorithms are proprietary, at least they can be held accountable, steer kids to more knowledgeable resources, and engage them in the kinds of nuanced conversation so critical to life in a democracy.
