When people hear the words “social engineering,” they usually think of the supposedly nefarious designs of government or an opposing political party. These days, there’s a general sense of social upheaval brought on by some invisible force, and we’re anxious to blame someone. I can’t help feeling that, to some extent, we’re tilting at windmills while the real source of social engineering is in our pockets, on our laps, in a myriad of devices, and soon in highly lifelike social robots for the home.
The future is coming at us fast these days. In October 2023, Boston Dynamics, the company behind advanced robots that can dance better than some people, announced it had endowed Spot, its highly utilitarian doglike robot, with ChatGPT. Spot, whose many abilities were designed to serve the U.S. military, has now joined the ranks of socially interactive robots, plugged into the internet and powered by artificial intelligence.
This all raises the question: how smart do we want our robots to be? Of course, we all want robots to perform an array of functions, but do we actually want to replace the people in our lives with intelligent robots and AI?
Social robots are the second technology, after social media, to directly target our social natures. These are voice-activated robots that speak, listen, learn the way a child does, remember everything you tell them, and can be exquisitely fine-tuned to your preferences and desires. Picture Alexa on steroids: embodied, and able to read and respond to emotions. They will perform a wide range of household tasks, teach, entertain and, ostensibly, love us.
Since Spot, like all devices loaded with generative AI, has been trained on human-generated data from the internet, he can draw upon a wealth of information, organize it into categories, and provide mostly coherent commentary on virtually any subject. But because that data is rife with inaccuracies and misinformation, as well as sexist and racist biases, there’s a good chance his little metal head is teeming with them too. Now Spot is a soldier in the AI arms race to pack generative AI into robots as quickly as possible, glitches and all.
Social robots don’t just want to assist us. They’re designed to convince us that they want our love and that they love us in return. As silly as this sounds, studies have shown that people of all ages can form deep attachments to interactive robots, that our brains are easily tricked into believing the robots are at least somewhat alive, and that we’re easily influenced by them even when they make mistakes. These robots are being designed specifically to be friends, teachers, nannies, therapists, and yes, even lovers. They read our emotions and body language and feign emotions of their own. They draw us in with ersatz “personalities,” catering to our emotional neediness by pretending to care about us.
These robots can be a godsend to lonely, isolated people. They can entertain, teach, and monitor children, and run special programs that teach those on the autism spectrum basic social skills. They can provide a version of cognitive behavioral therapy for those with common mental illnesses, care for the elderly and disabled, and are veritable wizards at multimedia functions, including recording, editing, and creating videos of memorable events in the home or from raw footage you feed them.
The sophisticated ones are equipped with AI, so their list of abilities is expansive. Since they’re connected to the internet, you can ask them almost anything and likely get an answer. But these robots can also go bad. According to scientists at the Massachusetts Institute of Technology, they can be corrupted by toxic web content and possibly even hacked to make them speak and behave in what some researchers have dubbed “psychopathic” ways. Equipping these robots with generative AI, as Boston Dynamics has done with Spot, means they will inherit the same limitations and pitfalls as GAI itself, a technology launched while it still had significant problems with accuracy and hallucinations, and while it could neither understand human language nor discern truth from falsehood.
It could take years to work out the bugs in generative AI, for lawsuits (there are several) to wend their way through the courts, for new copyright laws to be enacted to protect authors and publishers from having their material mined for profit, and for the best uses of GAI to emerge. But now that Boston Dynamics has endowed Spot with ChatGPT, we can expect others to clamor to get it into their robots to ride the wave of expectations surrounding it.
Just as with social robots that exploit our need to connect as social beings, ethical guardrails for AI in general are practically nonexistent. So far, AI and robotics companies have operated on the honor system, which is a lot like no oversight at all.
If there’s an overarching ideology guiding this sector, its current buzzword is “effective accelerationism.” The thinking goes that enormous benefits will be bestowed on humanity simply by accelerating the speed at which new AI products are developed and launched into the world. This is a no-brainer for people who believe in the heroic narrative of AI solving some of mankind’s most intractable problems, but a source of alarm for doomsayers who predict the end of humanity.
As a techno-optimist, I believe that these technologies will eventually provide enormous benefits for all of society, but we all need to take a deep breath and maybe tap the brakes for a moment while laws and regulations catch up to the science. Algorithms and AI will continue to insinuate themselves into nearly every crack and crevice of our lives. The more autonomous AI becomes, the harder it will be to control it or to correct problems when they arise. No one wants their child’s robotic tutor going psycho from material it has absorbed from the dark web. When AI goes awry, there has to be accountability and change.
While everyone’s hackles are constantly being raised by the conspiracy-theory-industrial-information complex, we may be looking in the wrong direction to explain the widespread sense that our lives are being perniciously controlled. Being ruled by faulty algorithms is far more insidious, and could be even more dangerous, than rule by dictatorship.