
Meet the Researchers Working to Make Sure Artificial Intelligence Is a Force for Good

With glass interior walls, exposed plumbing and a staff of young researchers dressed like Urban Outfitters models, New York University's AI Now Institute could easily be mistaken for the offices of any one of New York’s innumerable tech startups. For many of those small companies (and quite a few larger ones) the objective is straightforward: leverage new advances in computing, especially artificial intelligence (AI), to disrupt industries from social networking to medical research.

But for Meredith Whittaker and Kate Crawford, who co-founded AI Now in 2017, it's that disruption itself that's under scrutiny. They are two of many experts working to ensure that, as corporations, entrepreneurs and governments roll out new AI applications, they do so in a way that's ethically sound.

“These tools are now impacting so many parts of our everyday life, from healthcare to criminal justice to education to hiring, and it’s happening simultaneously,” says Crawford. “That raises very serious implications about how people will be affected.”

AI has plenty of success stories, with positive outcomes in fields from healthcare to education to urban planning. But there have also been unexpected pitfalls. AI software has been abused as part of disinformation campaigns, accused of perpetuating racial and socioeconomic biases, and criticized for overstepping privacy bounds.

To help ensure future AI is developed in humanity's best interest, AI Now’s researchers have divided the challenges into four categories: rights and liberties; labor and automation; bias and inclusion; and safety and critical infrastructure. Rights and liberties pertains to the potential for AI to infringe on people’s civil liberties, like cases of facial recognition technology in public spaces. Labor and automation encompasses how workers are impacted by automated management and hiring systems. Bias and inclusion has to do with the potential for AI systems to exacerbate historical discrimination against marginalized groups. Finally, safety and critical infrastructure looks at risks posed by incorporating AI into important systems like the energy grid.
