<!-- wp:paragraph -->
With glass interior walls, exposed plumbing and a staff of young researchers dressed like Urban Outfitters models, New York University’s AI Now Institute could easily be mistaken for the offices of any one of New York’s innumerable tech startups. For many of those small companies (and quite a few larger ones) the objective is straightforward: leverage new advances in computing, especially artificial intelligence (AI), to disrupt industries from social networking to medical research.
<!-- /wp:paragraph --><!-- wp:paragraph -->
But for Meredith Whittaker and Kate Crawford, who co-founded AI Now in 2017, it’s that disruption itself that’s under scrutiny. They are two of many experts working to ensure that, as corporations, entrepreneurs and governments roll out new AI applications, they do so in a way that’s ethically sound.
<!-- /wp:paragraph --><!-- wp:paragraph -->
“These tools are now impacting so many parts of our everyday life, from healthcare to criminal justice to education to hiring, and it’s happening simultaneously,” says Crawford. “That raises very serious implications about how people will be affected.”
<!-- /wp:paragraph --><!-- wp:paragraph -->
AI has plenty of success stories, with positive outcomes in fields from healthcare to education to urban planning. But there have also been unexpected pitfalls. AI software has been abused as part of disinformation campaigns, accused of perpetuating racial and socioeconomic biases, and criticized for overstepping privacy bounds.
<!-- /wp:paragraph --><!-- wp:paragraph -->
To help ensure future AI is developed in humanity’s best interest, AI Now’s researchers have divided the challenges into four categories: rights and liberties; labor and automation; bias and inclusion; and safety and critical infrastructure. Rights and liberties pertains to the potential for AI to infringe on people’s civil liberties, such as the use of facial recognition technology in public spaces. Labor and automation encompasses how workers are affected by automated management and hiring systems. Bias and inclusion has to do with the potential for AI systems to exacerbate historical discrimination against marginalized groups. Finally, safety and critical infrastructure looks at the risks posed by incorporating AI into important systems like the energy grid.
<!-- /wp:paragraph --><!-- wp:gutenberg-custom-blocks/video-jw {"mediaId":"TwBD4f1B","autostart":false} -->
<!-- /wp:gutenberg-custom-blocks/video-jw --><!-- wp:paragraph -->
Each of those issues is drawing more attention from government leaders. In late June, Whittaker and other AI experts testified on the societal and ethical implications of AI before the House Committee on Science, Space, and Technology, while Rashida Richardson, AI Now’s director of policy research, spoke before the Senate Subcommittee on Communications, Technology, Innovation and the Internet. Tech workers are taking action as well. In 2018, some Google employees, led in part by Whittaker (who worked at the search giant until earlier this summer), organized in opposition to Project Maven, a Pentagon contract to design AI image recognition software for military drones. Also that year, Marriott workers went on strike to protest, among other grievances, the implementation of AI systems that could automate their jobs. Even some tech executives have joined calls for increased government oversight of the sector.
<!-- /wp:paragraph --><!-- wp:paragraph -->
AI Now is far from the only research institute founded in recent years to study ethical issues in AI. At Stanford University, the Institute for Human-Centered Artificial Intelligence has put ethical and societal implications at the core of its thinking on AI development, while the University of Michigan’s new Center for Ethics, Society, and Computing (ESC) focuses on addressing technology’s potential to replicate and exacerbate inequality and discrimination. Harvard’s Berkman Klein Center for Internet and Society concentrates in part on the challenges of ethics and governance in AI. In 2019, the organization co-hosted an “Assembly” program with the MIT Media Lab, which brought together policymakers and technologists to work on AI ethics projects, like detecting bias in AI systems and accounting for the ethical risks of pursuing surveillance-related AI research.
<!-- /wp:paragraph --><!-- wp:paragraph -->
But in many ways, the field of AI ethics remains constrained. Researchers say they are blocked from investigating many systems by trade secrecy protections and laws like the Computer Fraud and Abuse Act (CFAA). As interpreted by the courts, that law criminalizes violating a website or platform’s terms of service, an often necessary step for researchers trying to audit online AI systems for unfair biases.
<!-- /wp:paragraph --><!-- wp:paragraph -->
That may soon change. In 2016, the American Civil Liberties Union (ACLU) filed a suit against the U.S. Department of Justice in which the plaintiffs — a group of journalists and computer science academics — alleged that the CFAA’s prohibition on violating terms of service is unconstitutional. “It’s a cutting-edge case,” says Esha Bhandari, the ACLU lawyer representing the plaintiffs. “It’s about the right to conduct anti-discrimination testing in the 21st century online.”
<!-- /wp:paragraph --><!-- wp:paragraph -->
Whatever the outcome of Bhandari’s case, researchers in AI ethics tend to agree that more needs to be done to ensure AI is working for our benefit. Of the experts who spoke with TIME, all agreed that regulation would help matters. As Lilly Irani, professor of communication, science studies and critical gender studies at the University of California San Diego, puts it, “we can’t have a system where people are just harmed, harmed, harmed, and we rely on them to scream.”
<!-- /wp:paragraph --><!-- wp:gutenberg-custom-blocks/video-jw {"mediaId":"HsHtQhxw","autostart":false} -->
<!-- /wp:gutenberg-custom-blocks/video-jw --><!-- wp:paragraph -->
The path forward for ethical AI isn’t straightforward. Christian Sandvig, professor of digital media at the University of Michigan and director of ESC (and also a plaintiff in the 2016 suit against the Justice Department), worries that genuine calls for change in the AI field could be derailed by what he calls “ethics-washing,” in which efforts to create more ethical AI look good on paper but don’t actually accomplish much. Ethics-washing, Sandvig says, “make[s] it seem as though transformational change has occurred by liberally applying the word ‘ethics’ as though it were paint.”
<!-- /wp:paragraph --><!-- wp:paragraph -->
Whittaker acknowledges the potential for the AI ethics movement to be co-opted. But as someone who has fought for accountability from within Silicon Valley and outside it, Whittaker says she has seen the tech world begin to undergo a deep transformation in recent years. “You have thousands and thousands of workers across the industry who are recognizing the stakes of their work,” Whittaker explains. “We don’t want to be complicit in building things that do harm. We don’t want to be complicit in building things that benefit only a few and extract more and more from the many.”
<!-- /wp:paragraph --><!-- wp:paragraph -->
It may be too soon to tell if that new consciousness will precipitate real systemic change. But with the industry facing academic, regulatory and internal scrutiny, it is at least safe to say that it won’t be going back to the adolescent, devil-may-care days of “move fast and break things” anytime soon.
<!-- /wp:paragraph --><!-- wp:paragraph -->
“There has been a significant shift and it can’t be understated,” says Whittaker. “The cat is out of the box, and it’s not going back in.”
<!-- /wp:paragraph -->