“A Minister of Artificial Intelligence who is the age of my son, appointed to regulate a hypothetical technology, proves to me that your government has too much time and resources on its hands.” Those were the words of a senior government official during a bilateral meeting in 2017, soon after I was appointed as the world’s first Minister for Artificial Intelligence. I distinctly recall feeling a pang of indignation at that remark, partly at the equating of youth with incompetence, but even more at the casual dismissal and trivialization of AI.
Six years into my role leading the UAE’s strategy to become the world’s most AI-prepared country, I have watched the past year unfold as an exhilarating sprint of unprecedented AI advancements, from ChatGPT to Midjourney to HyenaDNA. It is now undeniable that AI is no longer a hypothetical technology, but one that warrants far more government time and resources across the globe.
I see a resemblance between these breakthroughs and the progress humanity has witnessed in areas such as mobility. Think of the evolution from horses to planes in just a few decades: today, horseback travel simply cannot compete with a 900 km/h aircraft. Extrapolate from that example to where the evolution of AI computation will take us, and it becomes clear that we are still riding horses today. From Pascal’s calculator to the AI of tomorrow, machines will come to eclipse the human mind in both speed and complexity. Imagine, if you will, a veritable ‘Aladdin’s Lamp’ of technology: you write a prompt into this vessel and from it, like the genie of lore, springs forth your every digital wish. This is the exciting future we will live to experience.
However, at the risk of sounding alarmist, the potential for harm is colossal. Throughout history, catastrophic events have galvanized governments into regulating technology: the Chernobyl nuclear disaster of 1986 led to a revision of the International Atomic Energy Agency’s safety guidelines; the Tenerife airport disaster of 1977, in which two Boeing 747s collided, led to standardized phraseology in air traffic control. An ‘Aladdin’s Genie’ gone awry could result in a disaster on a scale we have never seen before: the paralysis of critical infrastructure by rogue AI, the breakdown of trust in information as believable deepfakes are spread by bots, or cyber threats that lead to substantial loss of human life. The impact would far transcend the operations of an airport or the geographic boundaries of a city. Simply put, we cannot afford to wait for an AI catastrophe before we regulate it.
In the face of such potential harm, accelerated by the continuous development of AI, it is clear that traditional models of governance and regulation, which take years to formulate, are acutely ill-equipped. And this is coming from someone who has spent a third of his life regulating emerging technology in the UAE. An act to regulate AI that only comes into effect years down the line is a benchmark for neither agility nor effectiveness. Furthermore, a single nation in our current global order, bound by borders and bureaucracy, simply cannot grapple with a force as global and rapidly advancing as AI.
This calls for a fundamental reimagination of governance, one that is agile in its process and multilateral in its implementation. We must embrace the approach of pioneers like Elon Musk, who alert us to the perils of unregulated AI while simultaneously using it to push the boundaries of humanity forward. We too must straddle this line, treating these alerts as malleable guardrails that guide rather than hinder AI’s development. Doing so requires dispelling the dangerous ignorance of AI within government.
Beyond broadening government horizons, we must adopt a rational, simple, and measured approach to AI regulation, one that neither throttles innovation nor inhibits adoption. Suppose an AI is faced with two critically ill patients, but resources permit only one to be treated. Who should the AI prioritize? Gone are the days of labyrinthine thousand-page policy documents that set an unattainable standard of compliance. Our focus must pivot toward a blueprint reminiscent of the simplicity of Isaac Asimov’s famed ‘Three Laws of Robotics’. The first law forbids an AI from harming humans or, through inaction, allowing humans to come to harm. Under this law, the conundrum of the two critically ill patients would be deferred to a human, who would rely on ethical procedures and human judgment to make the decision.
Such laws could serve as universal axioms that remain unshaken by the development of AI, because their validity is not a matter of scientific proof but a hallmark of our shared humanity when navigating the next AI trolley problem. They would remind us, and generations to come, that AI must always be in service to human values, never the other way around.
I represent a nation that has grown through global interconnection and international cooperation. I urge my counterparts across the world to convene and forge a consensus framework of universal basic laws for AI. This framework would provide the scaffolding from which we devise legislation on everything from intellectual property to computational carbon footprints. Above all else, I firmly believe in our collective capacity to reimagine AI governance as something agile, multilateral and, most importantly, something we build now.