It takes about 20 minutes to walk from the European headquarters of Facebook and Google, near Dublin’s docks, to the seat of the Irish Government. If you walk that route today you will pass 50 or 60 posters petitioning you to vote for or against this week’s referendum on legalizing abortion. Take your phone out of your pocket, and you will be equally bombarded with digital ads.
Posters are highly regulated in Ireland, but there are no rules governing the videos interrupting your Facebook feed or the images popping up in your gaming app. This regulatory blind spot is made more worrying by the nature of online ads, which can be highly targeted based on analysis of a person’s data—as we saw with the Cambridge Analytica scandal.
Britain’s vote to leave the EU and the 2016 U.S. presidential election showed us how ads can share misinformation under the radar. Ads can also be purchased from abroad. It is predictable—but still depressing—that groups both inside and outside of Ireland have exploited these digital loopholes to try to influence the Irish abortion referendum.
With a strong constitution that can only be changed by referendum, Irish people frequently vote on social issues. A 1995 referendum legalized divorce, another in 1998 consented to the Northern Irish peace agreement, and most recently in 2015, the electorate voted to allow gay people to marry.
To keep these processes fair, Ireland does not allow political ads on TV or radio. This means posters and door knocking have been the mainstay of campaigning for generations. This activity has strict rules—posters and leaflets must say who paid for them, donations are limited to Irish citizens and businesses in small amounts, and TV and radio broadcasts must present both sides of the debate. More importantly, campaign images and messages are fact-checked and debated in radio shows and newspaper columns.
Yet online, in the past few months we have seen examples of overseas organizations from the U.S., U.K., and parts of Europe paying Facebook to target Irish voters with emotive content intended to sway their votes. We have seen untraceable groups paying to get lies about medical evidence and about their opponents in front of voters. We have seen numerous examples of unregulated spending on highly produced YouTube ads and banner ads across the web—from news websites to music streaming apps.
We only know about many of these examples because of a project I set up with some friends—the Transparent Referendum Initiative (TRI)—which is crowdsourcing a public database of digital ads. By the day before the vote, we had captured 1,300 referendum-related ads on Facebook alone.
Journalists have turned this data into stories, enabling debate and increasing scrutiny. Yet the information is partial. Just 600 people are helping us crowdsource ads, and for non-Facebook ads we have to rely on screenshots. We have no way of knowing how much is spent, by whom, or who is being targeted and why. Only the companies selling the advertising space know this information.
As stories based on our data multiplied, Facebook took the unprecedented decision on May 8 to block overseas groups from buying political ads. The next day, Google announced it would not allow any referendum-related ads at all on its platforms.
Has this self-regulation worked? Well, it doesn’t appear to have had a strong impact on the overall level of online ads. Ads on Facebook have increased exponentially. While YouTube ads stopped, large numbers of digital ads across the web continued, as spending shifted to platforms run by other companies.
In fact, when we told the Guardian newspaper that political ads were appearing on its site, they investigated and found that Google infrastructure was being used to serve political ads without their knowledge. They noted that with decisions being made by algorithms, “news websites are finding it hard to stop them appearing on their sites—a fact that has been exploited by campaigners.”
After Facebook and Google, as private companies, took these unanticipated steps just a couple of weeks before the vote, some campaigners reacted strongly—claiming that the companies are themselves interfering in our democratic process by refusing to let the adverts appear.
Very little information was given about these decisions. Campaigners, as well as the Transparent Referendum Initiative, called on Facebook and Google to detail exactly what they saw in their records that prompted them to act to protect what they called “election integrity.” So far, neither company has made a public statement.
One positive result, however, is that this crisis has prompted commitments from the Irish government to reform traditional rules around campaigning. This will hopefully mean that the restrictions that exist for traditional campaigning will be extended to cover digital ads.
The next challenge is figuring out how to do this. Because, while it may be only a 20-minute walk from the tech companies to our parliament, they might as well be light years apart in how they view the world.
Tech firms, with their ethos of “move fast and break things,” are the antithesis of the deliberative, slow-moving institutions we entrust to keep our elections and referendums fair. At the same time, technology is evolving so fast, and is driven by such opaque algorithms, that those responsible for making the rules about things that affect us deeply cannot keep pace and adapt.
We are playing regulatory whack-a-mole, and we are doing it with blindfolds on.
But what Ireland has shown is that transparency of online activity helps to bridge this gap. If the companies genuinely care about election integrity in Ireland and around the world, they will recognize that it is not their role to regulate it.
Rather, they must make available information about their powerful tools, and highlight the ways those tools are being used, so that our institutions can do their job. Regulation is hard, and it has to be about enforcing our democratic values online and offline.
It needs to be based on deep insight into technology—but it needs to be done by the government.