
On Monday of last week, I joined women from around the world at the Reykjavík Global Forum in Iceland to talk democracy, technology, and artificial intelligence, among other topics. Four days later, Sam Altman was suddenly fired as CEO of OpenAI, and I religiously followed the twists and turns, bravado, and infighting that unfolded over the weekend.

The bookends of the week form a bit of a metanarrative around how to save or support innovation that creates massive societal upheaval. After Altman’s dismissal from the nonprofit behind ChatGPT, we are left pondering who should be in charge of these new technologies that spare no industry or institution.

OpenAI’s six-member board fired Altman, vaguely alluding to lies in his communications with them. The organization operates as a partnership between its nonprofit and for-profit arms, whereby the latter raised money from investors and promised that profits above a certain level would be donated to the former. As The Atlantic explained: “The company’s charter bluntly states that OpenAI’s ‘primary fiduciary duty is to humanity,’ not to investors or even employees.”

Ultimately, Altman’s dismissal represents a fissure between these two sides: Silicon Valley’s go-go-go demand for scale and the desire for more care and caution in generative-AI innovation. This is not a new dilemma, but it is made more relevant by fundamental questions about AI, and by the massive miscalculations and missteps already made regarding technology’s influence on all aspects of life, work, and government.

Neither nonprofit nor venture backing is actually a revenue model.

To do good or make money? Many of us who entered the entrepreneurial space over the last few decades (and many of the types of leaders gathered in Reykjavík last week) say yes and yes.

In 2018, Deloitte named “The Rise of the Social Enterprise” one of the key human capital trends of the year. The new paradigm “considers a business less as a ‘company’ and more as an ‘institution,’ integrated into the social fabric of society,” according to one summary from an HR analyst. Covid and the racial justice protests of 2020 hastened the growth of this category, which largely refers to businesses that have social objectives or metrics as a goal even as they rely on commercial structures to run the organization.

And yet oversimplifications of both the revenue models and the motives defining for-profits and nonprofits abound. Consider the frequency of unhelpful distinctions like this one, from a brand agency working with museums and community groups: “Unlike most of their counterparts in for-profit companies, nonprofit teams find themselves emotionally invested in their organization’s cause.”

This shorthanding serves no one, discounts the overwhelming demand for a more purposeful economy, and fuels chaos like that surrounding Altman’s departure by depicting one side as altruistic and the other as ruthless. The truth is that nonprofits are businesses, too. And for-profit businesses, inherently, are defined by serving a public need or demand.

“By oversimplifying this distinction, the public often characterizes nonprofits as poorly run businesses and for-profit businesses as existing for no purpose other than money,” nonprofit consultant Gary Bagley, who teaches nonprofit management at Columbia University and Baruch College, tells me. “In reality, both need to attend to the bottom line while aiming for different outcomes.”

And another thing nonprofits have in common with the venture-backed endeavors favored by Silicon Valley: the hunt for real, recurring revenue streams.

When past should not be prologue

Among sessions I attended in Iceland: a discussion on the potential for gender and racial bias in AI reflecting very real human prejudices.

“AI systems are algorithms. They do not have their own biases. They reflect the data they are trained from,” Rita Singh, associate professor at Carnegie Mellon’s Language Technologies Institute, told the crowd. “The question we must ponder is: Should AI researchers and developers be given charge of guiding society’s biases? Remember, these are not democratically elected people. They are hired on a for-profit basis, by fiat, by for-profit companies like Google, Meta, Microsoft, X. Should corporations and for-profit entities be given the authority to guide society’s biases, that too based on a profit-making motive?”

I followed up with Singh this week to ask what she thinks now. Altman and other “titans to profit from AI” operated, she said bluntly, “without heed to who got trampled in the way.” There’s a need for society at large to respond with similar urgency.

“The policymakers also need to act very very fast…to come up with a set of quantifiable, ratified guidelines,” she told me. “The race is on to build the next best AGI (artificial general intelligence) and there are no guardrails to contain the race. Without those—such as civic metrics encompassing inequality and bias—AGI is likely to crash and burn. These civic metrics must make it to the marketplace, and somehow drive it, so that the for-profit entities who are in the race are incentivized.”

The social-media platforms are a vacuous space.

We have gotten tech governance wrong before, and we still are. The lack of regulation of social-media platforms is partly responsible for the crumbling of democratic institutions today. Platforms’ reluctance to take responsibility for what they publish continues to fuel recklessness and mistrust, even as these sites serve as modern-day utilities and necessary vehicles for disseminating information, such as how to find a job or a Covid booster.

Even today, as Facebook deprioritizes the posting and sharing of journalistic links, users fill the void with memes, ill-formed opinions, and made-up information, in between dozens of sponsored links to Friends bloopers or sales on sequin sweaters. These are the platforms we now turn to to save democracy? (Also: My semi-regular reminder that Facebook grew out of Facemash, a site Mark Zuckerberg built to rank women’s attractiveness. I repeat: These are the platforms we now turn to to save democracy?)

The mistakes of Silicon Valley seem doomed to be repeated when it comes to AI. Entire industries (journalism, Hollywood, and government among them) once contorted their business models to meet the whims and needs of platforms, yet they are singing a very different tune today. Publishers, for example, are no longer rushing toward the endless scale demanded and enabled online. “In the heady days of 2016, Buzzfeed and Vice and Bustle and Refinery and Business Insider could brag about reaching 80 million people—you don’t need to reach 80 million people to have a great publishing business,” BDG CEO Bryan Goldberg recently said.

And yet, the algorithms fueling AI and the modern internet largely rely on data and behaviors from these old scale models.

Where that leaves Altman

Over the weekend, Microsoft CEO Satya Nadella announced he had hired Altman and former OpenAI president Greg Brockman, who was also removed from the board and then quit in solidarity with Altman. While this is being heralded as a coup that puts Microsoft in the “driver’s seat,” according to news reports, legitimate—and unsettling—questions remain. On Friday, the OpenAI board said a review found that Altman was “not consistently candid in his communications” with the board of directors, for example.

“The fact that Altman’s board pushed him out for ethical concerns, and he has since gone to Microsoft, is of grave concern. It is still not clear what the ethical concerns were, if they will be resolved by the Microsoft team, and what role their responsible AI leaders will have in setting parameters around Altman’s work,” says Mutale Nkonde, visiting policy fellow at Oxford and the CEO of AI for the People, which seeks to reduce the algorithmic bias of tech products. “This is one of the reasons Congress needs to come and enact legislation that will provide parameters within which Altman and other AI leaders can innovate.”

While Altman has been removed from the company that perhaps best illustrated the tensions surrounding AI’s future, his departure hardly settles the matter of his trustworthiness or leadership prowess. Ironically, we emerge no better served, with no greater oversight or insight.

Notably, the Reykjavík Global Forum was founded after women leaders felt there was no place for them, or their solutions-oriented approaches, at the World Economic Forum in Davos. We once again find ourselves in a mess made by men. The path forward likely remains in truly global gatherings among a diverse cross section—elected officials, nonprofit leaders and organizations, philanthropists, media, platforms and tech companies, lovers and defenders of democracy and democratized spaces—who seek to narrow the distances among us in order to best serve our communities.

Correction: Singh’s quote has been updated to clarify the context of her remark.
