Huffington is the founder and CEO of Thrive Global.
The Age of AI has also ushered in the Age of Debates About AI. And Yuval Noah Harari, author of Sapiens and Homo Deus, and one of our foremost big-picture thinkers about the grand sweep of humanity, history and the future, is now out with Nexus: A Brief History of Information Networks from the Stone Age to AI.
Harari generally falls into the AI alarmist category, but his thinking pushes the conversation beyond the usual arguments. The book is a look at human history through the lens of how we gather and marshal information. For Harari, this is essential, because how we use—and misuse—information is central to how our history has unfolded and to our future with AI.
In what Harari calls the “naïve view of information,” humans have assumed that more information will necessarily lead to greater understanding and even wisdom about the world. But of course, this hasn’t been true. “If we are so wise, why are we so self-destructive?” Harari asks. Why do we produce things that might destroy us if we can’t control them?
For Harari—to paraphrase another big-picture thinker—the fault, dear Brutus, is not in ourselves, but in our information networks. Bad information leads to bad decisions. Just as we’re consuming more and more addictive junk food, we’re also consuming more and more addictive junk information.
He argues that the problem with artificial intelligence is that “AI isn’t a tool—it’s an agent.” And unlike other tools of potential destruction, “AI can process information by itself, and thereby replace humans in decision making.” In some ways, this is already happening. Consider the way Facebook was used in Myanmar: the algorithms had “learned that outrage creates engagement, and without any explicit order from above they decided to promote outrage.”
Where I differ with Harari is that he seems to regard human nature as roughly fixed, and algorithms as inevitably exploiting human weaknesses and biases. To be fair, Harari does write that “as a historian I do believe in the possibility of change,” but at the individual level that possibility is swamped by the tide of history he covers, which focuses very much on systems and institutions rather than the individual humans who make up those institutions.
Harari acknowledges that AI’s dangers are “not because of the malevolence of computers but because of our own shortcomings.” But he discounts the fact that we are not defined solely by our shortcomings and underestimates the human capacity to evolve. Aleksandr Solzhenitsyn, who was no stranger to systems that malevolently use networks of information, still saw the ultimate struggle as taking place within each human being: “The line separating good and evil,” he wrote, “passes not through states, nor between classes, nor between political parties either—but right through every human heart—and through all human hearts.”
So yes, AI and algorithms will certainly continue to be used to exploit the worst in us. But that same technology can also be used to strengthen what’s best in us, to nurture the better angels of our nature. Harari himself notes that “alongside greed, hubris, and cruelty, humans are also capable of love, compassion, humility, and joy.” But then why assume that AI will only be used to exploit our vices and not to fortify our virtues? After all, what’s best in us is at least as deeply imprinted and encoded as what’s worst in us. And that code is also open source for developers to build on.
Harari laments the “explicit orders from above” guiding the algorithms, but AI can allow for very different orders from above that promote benevolence and cooperation instead of division and outrage. “Institutions die without self-correcting mechanisms,” writes Harari. And the need to do the “hard and rather mundane work” of building those self-correcting mechanisms is what Harari calls the most important takeaway of the book. But it’s not just institutions that need self-correcting mechanisms. It’s humans, as well. By using AI, with its power of hyper-personalization, as a real-time coach to strengthen what is best in us, we can also strengthen our individual self-correcting mechanisms and put ourselves in a better position to build those mechanisms for our institutions. “Human life is a balancing act between endeavoring to improve ourselves and accepting who we are,” he writes. AI can help us tip the balance toward the former.
Harari invokes the allegory of Plato’s Cave, in which people trapped in a cave see only shadows on a wall, which they mistake for reality. But the technology preceding AI has already trapped us in Plato’s Cave. We’re already addicted to screens. We’re already completely polarized. The algorithms already do a great job of keeping us captive in a perpetual storm of outrage. Couldn’t AI be the technology that in fact leads us out of Plato’s Cave?
As Harari writes, “technology is rarely deterministic,” which means that, ultimately, AI will be what we make of it. “It has enormous positive potential to create the best health care systems in history, to help solve the climate crisis,” he writes, “and it can also lead to the rise of dystopian totalitarian regimes and new empires.”
Of course, there are going to be plenty of companies that continue to use algorithms to divide us and prey on our basest instincts. But we can also still create alternative models that augment our humanity. As Harari writes, “while computers are nowhere near their full potential, the same is true of humans.”
As it happens, it was in a conversation with Jordan Klepper on The Daily Show that Harari gave voice to the most important and hopeful summation of where we are with AI: “If for every dollar and every minute that we invest in developing artificial intelligence, we also invest in exploring and developing our own minds, it will be okay. But if we put all our bets on technology, on AI, and neglect to develop ourselves, this is very bad news for humanity.”
Amen! When we recognize that humans are works in progress and that we are all on a journey of evolution, we can use all the tools at our disposal, including AI, to become the best versions of ourselves. This is the critical point in the nexus of humanity and technology that we find ourselves in, and the decisions we make in the coming years will determine if this will be, as Harari puts it, “a terminal error or the beginning of a hopeful new chapter in the evolution of life.”