Schools Shouldn’t Ban Access to ChatGPT

Lipman is a Yale lecturer and former editor in chief of USA Today. Her new book is NEXT! The Power of Reinvention in Life and Work.
Distler is Strategist for AI, Data and Digital Health at the Patrick J. McGovern Foundation, a 21st-century philanthropy advancing AI and data solutions to create a thriving, equitable, and sustainable future for all.

When the artificial intelligence platform ChatGPT was released in late November, I was one of many educators who jumped on it, introducing it in the seminar I teach at Yale on the media and democracy. With its ability to communicate in plain-English prose, it was undeniably fun for the students to play with; they used it to compose everything from silly poems to job application letters.

But it was also deeply troubling. When I prompted it to spread misinformation, it generated a news article falsely asserting the “U.S. Electoral Commission” had found “rampant voter fraud” in the 2020 election. It was also alarmingly quick to complete the term paper assignment that my students had been working on for weeks. It instantly spit out six excellent topic ideas (written as country-western lyrics, as requested)—and then generated a paper on gender in the newsroom that, while not up to college standards, was credible enough to show how ChatGPT could soon morph into the ultimate cheating machine.

So it’s understandable why New York City’s Department of Education announced last week that it would ban access to ChatGPT on school devices. That decision, by the nation’s largest school district, was quickly followed by similar moves in Los Angeles and Baltimore, with others likely to join them.

Yet blocking access to ChatGPT is a mistake. There is a better way forward.

Students need now, more than ever, to understand how to navigate a world in which artificial intelligence is increasingly woven into everyday life. It’s a world that they, ultimately, will shape.

We hail from two professional fields that have an outsize interest in this debate. Joanne is a veteran journalist and editor deeply concerned about the potential for plagiarism and misinformation. Rebecca is a public health expert focused on artificial intelligence, who champions equitable adoption of new technologies.

We are also mother and daughter. Our dinner-table conversations have become a microcosm of the argument around ChatGPT, weighing its very real dangers against its equally real promise. Yet we both firmly believe that a blanket ban is a missed opportunity.

The New York City department’s justification for blocking ChatGPT illustrates why a ban is shortsighted. “While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” a department spokesperson explained.


Yet attempting to teach “critical thinking and problem-solving” skills while ignoring the real world in which students will deploy those skills is a fool’s errand. These students are growing up in an era when technology increasingly drives human behavior and decision making. Their generation needs to understand how best to use it, what its perils and shortcomings are, how to interrogate it, and how to use it ethically.

What’s more, on a practical basis, a ban simply won’t work. Students will still have access to ChatGPT outside of school. Microsoft is reportedly in talks to invest in OpenAI, the company that created it, which would expand access further. And previous prohibitions have failed. Early researchers warned against using Google in schools because it would “harm students’ information literacy skills.” Wikipedia was banned early on in both colleges and school districts. Not surprisingly, students have always found creative ways to circumvent such bans.

Nor have fears that those technologies would trigger an educational Armageddon been realized. Today Google is an essential research tool. Wikipedia is ubiquitous, though it isn’t considered a reliable source for research purposes. ChatGPT is a far more powerful and disruptive tool, which only underscores how important it is for students to learn how to safely engage with it.

For example, educators can deploy the platform to teach those crucial critical thinking and problem-solving skills. They might ask students to analyze a ChatGPT-generated report on a historical event, to track down its sources, and to assess its validity, or lack thereof. They could teach rhetoric by having students challenge ChatGPT’s reasoning in its answers. Computer science students could analyze ChatGPT-generated code for flaws. The technology itself offers a framework for discussing the ethics, benefits, and harms of artificial intelligence.

This isn’t to minimize the risks surrounding ChatGPT. Some educators are setting up guardrails to prevent cheating, requiring essays to be written by hand or during class. Students themselves are getting involved, like the college senior who created an app to detect whether text is written by ChatGPT. OpenAI has said it is looking at ways to “mitigate” the dangers, including by potentially watermarking answers.

These are important steps. But they don’t let us off the hook when it comes to teaching all students how to understand and responsibly use not just ChatGPT, but also other new technologies to come.

This isn’t simply an academic exercise. In her work at the Patrick J. McGovern Foundation, Rebecca sees firsthand how AI is being implemented in healthcare around the world, even as challenges with bias and inequity remain. From the arts to the environment, emerging technology is only becoming more intertwined with every aspect of our lives. Today’s students will soon be tomorrow’s leaders, tasked with ensuring that technology is designed and implemented in responsible and ethical ways.

Their education needs to start now. We’re reminded of 1998, when Google was founded and Rebecca was in grade school in the New York City public school system. At first, she wasn’t allowed to use a computer for research. When it was finally allowed, she mistakenly used it to find articles to share with her class rather than, as the assignment expected, to write a report. She was mortified.

But rather than discipline her, Rebecca’s teacher explained how the computer was intended to be a resource for learning, not a substitute. Her lesson was clear: technology should be a tool to expand students’ own thinking—not a crutch to limit it.

That lesson is even more important today. To ensure future generations are responsible stewards of technology, we need to create opportunities for them to participate in its design and use—beginning in the classroom.
