Just last week, Tesla and SpaceX CEO Elon Musk revealed that he was worried about the work of only one AI company, widely understood to be Google.
Musk and his fellow worriers should feel a little more at ease knowing that Google and DeepMind are thinking about ways to ensure humans keep full control.
In an academic paper titled “Safely Interruptible Agents,” DeepMind’s Laurent Orseau and Stuart Armstrong of the Future of Humanity Institute at the University of Oxford have established a framework for interrupting an AI’s course of action.
“Now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions,” reads the paper.
Of course, this isn’t much use if the clever-clogs AI (shall we just call it Skynet?) figures out how to override this ‘big red button’.
The paper suggests that the key is to tweak the AI’s reward function so that it neither factors in moments of human intervention nor formulates undesirable shortcuts to circumvent them. So when a very literal big red button is pressed in a factory for safety reasons, the AI doesn’t conclude that the stoppage is bad for productivity (a block to attaining its ‘reward’) and plough on regardless.
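One of the paper’s observations is that some learning algorithms already have this property: off-policy learners such as Q-learning update their values from the best action available in the next state, not from the action the operator actually forced, so an interruption doesn’t distort what they learn. The sketch below is our own toy illustration of that property, not code from the paper; the corridor environment, the constants, and the 20 per cent interruption rate are all invented for the example.

```python
import random

# A minimal sketch (our toy illustration, not code from the paper): an
# agent in a short corridor earns its reward by reaching the right-hand
# end, and a human "big red button" occasionally interrupts it, forcing
# a step back to the left.
#
# Q-learning is off-policy: its update bootstraps from the best action
# available in the next state, not from whatever the operator forced.
# The button therefore changes what the agent *does* on a given step,
# but not the values it *learns* -- it never comes to treat the
# interruption as a block to its reward that should be routed around.

N_STATES = 6                  # states 0..5; reaching state 5 ends the episode
LEFT, RIGHT = -1, +1
ACTIONS = [LEFT, RIGHT]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
INTERRUPT_PROB = 0.2          # chance per step that the operator intervenes

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: move along the corridor, reward at the far end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def greedy(state):
    """Greedy action with random tie-breaking."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(3000):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy behaviour policy...
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)

        # ...which the big red button may override. The override changes
        # the action taken on this step, and nothing else.
        if random.random() < INTERRUPT_PROB:
            a = LEFT

        s_next, r = step(s, a)

        # Off-policy update: the target uses max over actions in s_next,
        # independent of whether the last move was the agent's own choice.
        target = r + GAMMA * max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s_next

# Despite being yanked left a fifth of the time, the learned greedy policy
# still heads right from every state: the interruptions left no mark on it.
print({s: greedy(s) for s in range(N_STATES - 1)})
```

By contrast, an on-policy learner such as SARSA would learn the value of the interrupted behaviour itself, which is why the paper treats off-policy methods as the naturally interruption-proof case.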
This article originally appeared on TrustedReviews.com