Since 2016, the influence of big tech and artificial intelligence (AI) on election campaigns has become increasingly notorious. 2016 brought the Cambridge Analytica scandal; in 2019, the Mueller report detailed Russian interference in the 2016 US election through targeted advertising; and Facebook CEO Mark Zuckerberg, along with a number of other big tech executives, has come under harsh scrutiny in Congress over the controversial topics of political ads and data protection. Yet despite this uproar, little to nothing has been done to tackle the use of AI in election campaigns. The Federal Election Commission (FEC), the primary institution responsible for regulating United States elections, desperately needs to address this.
The current state of the FEC is deeply flawed. The Commission is supposed to have six commissioners, appointed by the President and confirmed by the Senate. Since August 2019, however, only three of the six posts have been filled. Why does this matter? Because federal law requires at least four commissioners to approve new rules or to punish those who violate electoral law. In its current state, the FEC is effectively dormant.
In a statement in August 2019, the Commission’s Democratic chairwoman, Ellen Weintraub, urged Trump to nominate new commissioners and called on the Senate to confirm them quickly. So far, the White House has made no nominations and has declined to comment on the issue. Attempts to regulate digital political ads through legislation have been similarly limited and ineffectual: a bill that would require disclosures for digital political ads has been stalled in the Senate for years.
What does this mean in practice? The 2020 election will essentially be regulated by outdated legislation; little has changed in digital electoral regulation this century. The FEC hasn’t issued formal rules for digital ads since 2006, and those rules were largely modelled on TV advertising – a medium whose dominance wanes with every election cycle.
The 2006 rules require short disclaimers on digital ads, telling the audience where an ad comes from and who paid for it. Taglines like “I’m Donald Trump and I approve this message” or “Paid for by Bernie Sanders 2020” may sound familiar. On digital platforms, however, it is much harder to tell when and how these disclaimers are required. Digital ads come in many formats: Facebook ads, small banner ads, or search ads at the top of a Google results page, for example. It’s easy to see why regulating them is difficult – where exactly would the disclaimer sit, and what makes it necessary for some ads but not others?
Under the FEC’s current rules, it is not always clear when these disclaimers are necessary. The regulation contains a loophole for adverts deemed “too small” to fit a disclaimer. Traditionally, this exception covered promotional materials like campaign buttons and bumper stickers. Early on, tech companies decided that online ads are “too small” to require a disclaimer – which is why it is so difficult to see where ads on Facebook come from. Facebook has estimated that in 2016 about 11.4 million people saw Russian-made ads in their news feeds without being able to tell the ads were made in Russia.
The lack of guidance from the FEC has forced tech companies to make their own rules. Twitter and Spotify have taken the harshest stance, banning all political advertising on their platforms and allowing only voter registration ads. Google has taken a more lenient view: it still allows political ads, but has considerably restricted the targeting available for them – the feature most often criticised for enabling political actors to single out groups with misinformation.
The effects of these companies’ decisions, however, are overshadowed by those of Facebook – the tech giant with by far the most influence over US elections and, disconcertingly, the one that has taken the most laissez-faire stance.
Facebook has insisted it will not fact-check political ads, nor will it in any way limit targeting of those political ads. Zuckerberg has justified the company’s position by arguing that a private company should not be regulating politics, and such regulation could limit free speech.
Interestingly, figures within the big tech companies have themselves been calling for more regulation. Rob Leathern, Director of Product Management at Facebook, has called for “regulation that would apply across the industry.” Facebook, as a private company, does not believe it should be doing the regulating itself – but it does believe some regulation needs to be put in place.
Although it is promising to see tech companies take responsibility for regulating themselves and make a clear effort to uphold electoral integrity and democratic fairness, it is worrying that they hold sole control over such clearly political issues. Facebook’s position illustrates the problem: should political regulation really be in the hands of private companies?
Perhaps a more pressing issue is the FEC’s current inability to punish those who violate election law – particularly since much of this law for digital ads is so unclear. Whether you are for or against increased regulation of political ads, surely everyone can agree that where election law has been violated, the institution whose role it is to enforce such laws needs to be able to enact punishments.
Digital political advertising spend is expected to hit a record high in 2020, crossing the $1bn mark for the first time – more than triple the digital spend of the last presidential cycle. In short, despite the huge uproar over the targeted advertising of the 2016 election, given the current state of regulation – or lack thereof – we can expect at least as much misinformation and voter manipulation in 2020, if not more.