The Price of Your AI-Generated Selfie

Johl is an Indian-American writer and researcher. She is a Visiting Policy Fellow at the Oxford Internet Institute and a member of the Board of Directors of the Roosevelt Institute.

The recent flooding of social media feeds with AI-generated “portraits” derived from databases of artists’ work has renewed conversation over data ownership and the power AI may have to supplant livelihoods in the future. The 22 million individuals and counting who have already handed over their images to the Lensa application may be content to receive a myriad of AI-illustrated images in exchange for their data. But the fundamental rights, principles, and freedoms users give up in this exchange remain largely unexamined.

In Web3 technology circles, many promises have been made about the capacity of decentralized technologies to open up individual ownership and monetization of data, returning power to “creators.” This reflects the political ethos of blockchain proponents like Ethereum co-founder Joe Lubin, who ostensibly seek to supplant the existing power structures of finance with “permissionless,” consensus-based transaction data structures.

In both cases, this isn’t a problem that can be solved by technology or by the rhetoric of “algorithmic governance.” Instead, it requires addressing fundamental rights that precede capitalist exchanges. In the case of digital art and data ownership, the problem won’t be solved by new tech, but by a better articulation of the rights individuals can assert over what information is collected about them, how it’s used, with whom it’s shared, and for what purposes.

Research has shown that, by and large, individuals have little sense of how much data is collected about them or of the value companies generate from it. On the surface, they often seem willing to strike a bargain, sharing data in exchange for a service or product. In the case of the Lensa application, individuals uploaded 10 to 20 personal images in exchange for a series of AI portraits, while their biometric data and likenesses may be accessed, stored, or monetized not just at the present moment but also in the future. (Lensa recently updated its privacy policy amid concerns about how user data might be retained. It is worth noting that, in the U.S., only California, Colorado, Connecticut, Utah, and Virginia currently have comprehensive privacy laws.)


What researchers have framed as a kind of “digital resignation” has taken hold of technology consumers, who see no way to opt out of a powerful system that collects and monetizes people’s data and interactions. The relative newness of AI technology, social media, and the internet gives the impression that we are encountering a fundamentally different problem from anything we’ve seen before. That perception is intensified by the “information asymmetries” between technology companies and their customers, which obscure how much data is collected, how it is used, and how valuable it is.

But if you approach data ownership as a proxy for certain access, usage, and control rights, then historical parallels for how such rights have been handled with other valuable resources provide frameworks for the current debate. Where control of a resource has been deemed of substantial interest to a society, government control and ownership has been implemented in the form of state-owned enterprises; the Oil and Natural Gas Corporation in India and Australia’s National Broadband Network are two examples. In the case of state ownership of data, China is a prime example: Beijing recently passed the 2021 Personal Information Protection Law, effectively granting the state control over data stored anywhere in the country. In fact, the concept of data as a national resource is now cropping up among policymakers around the world.

Historically, when a resource has stood to generate tremendous wealth but also carried the potential for catastrophic social and personal harms, “public trusts” have been instituted as a form of collective ownership. A commons approach renders the resource public, and law and custom function to limit its uses. U.S. environmental trusts and protections of national lands and Norway’s collectively owned petroleum reserves are two examples. The U.S. has also established data commons around publicly funded areas of research, like the National Cancer Institute’s Data Commons. Frameworks for collective versus individual property rights don’t just specify who the owner is—they also specify who is answerable to certain claims, such as access rights. Different types of property ownership reflect different goals and principles that guide governance, whether the point of property is access for everyone, preservation, or collective governance itself.

An approach that seems more aligned with where the goals of current Web3 efforts eventually lead is one of private ownership rights. Contemporary notions of private property stem from the 17th-century philosopher John Locke’s theory of homesteading, which argued that ownership arises from mixing one’s labor with the natural world. That is, more or less, the case tech companies make: when they take our data and turn it into something of value in the market, they own it. Of course, as with much of the art used to create AI-generated portraits, there are valid questions about both creative labor and existing legal copyrights.

Given that the value reaped from most data sets is realized only once they become relational and aggregated and economies of scale are reached—and that it’s exceedingly difficult to determine the “creator” of most data—it’s worth asking whether ownership is actually fundamental to asserting control over one’s data. Whether data ownership is structured as private, common, or national, there are fundamental rights that precede economic ownership entirely. The right to privacy, for example, is enshrined in the Universal Declaration of Human Rights. More recently, the European Union has successfully articulated a rights-based approach to limiting uses of data—notably with the General Data Protection Regulation (GDPR), which restricts the aggregation of and trade in personal data. Under the GDPR, individuals have rights to data access, data erasure, and data portability, all of which are grounded in and built upon a legal framework of fundamental human rights.

As data continues to be produced on an unprecedented scale thanks to the expanded use of sensors, wearables, AI, and connected devices, calling for data ownership is an underspecified demand. The better question for those concerned about the intrusive power of technology companies is not who owns data, but what the appropriate public uses of private data are, and how best to facilitate those uses while adequately protecting individuals’ rights.

Those calling for data ownership would be better off forgoing the language of ownership entirely, and instead making explicit that they are concerned with certain rights. Ultimately, while arguments calling for ownership may be appealing, it’s critical to remember that primary rights are human rights—not the rights conferred by corporations and capitalism.
