Guarding Democracy in the Age of Disinformation: California Makes a Move

California Governor Gavin Newsom recently signed a new law designed to curb the use of deepfakes – AI-generated images or videos – in political advertisements ahead of the upcoming US election.

2024 is the largest election year in history, with more people going to the polls than ever before. With some 60 countries, from India to North Macedonia, having either already voted or, like the US, standing on the verge of casting their ballots, never before has the globe had a ballot paper so large.

It is also the first series of elections in which AI tools are accessible to millions of voters, meaning democratic governments around the world are having to grapple with how AI might interact with their respective democratic processes.


California has shown that it will be first to move against digital disinformation, with a new law making it illegal to create and publish deepfakes of a political nature. This is a critical first step in preserving trust in democracy and protecting voters from disinformation.

Entrepreneur and AI expert Rotem Farkash recently spoke about the potential dangers of deepfakes:

‘Deepfakes have an extraordinary ability to recreate videos of individuals in a way that is profoundly life-like. Even for those who are familiar with AI, like me, it is often very difficult to interpret fake videos. This is especially true when viewing media on smaller screens like our phones where small nuances that point to something being fake are even harder to pick up.’

Recent months have seen a flurry of incidents that illustrate the danger deepfake AI scams can pose, not only to people's wallets but also to the way people form opinions on critical issues, by feeding them false information.

In August this year, scammers ran a fake live stream on YouTube featuring a deepfake version of Elon Musk saying he would personally double any cryptocurrency sent to an account the scam claimed was his. The New York Times reported on another occasion where a deepfake of Mr Musk showed him supposedly endorsing a new investment opportunity that promised extraordinary returns.

Indeed, in June this year Google DeepMind found that deepfakes impersonating politicians are the most prevalent form of malicious AI use. Ahead of the UK election, more than 100 deepfake videos impersonating Rishi Sunak circulated on Facebook, reaching as many as 400,000 people in a single month.

As AI becomes increasingly commercialised it is also becoming accessible to malicious actors with an intent to interfere with democratic processes. Gavin Newsom’s new law is a first step in addressing some of the major issues that surround both misinformation and disinformation.

California’s new law is composed of three key elements. First, large platforms like Facebook are legally bound either to remove or to label deceptive content. Second, it increases the severity of penalties for using deceptive AI to create political material. Third, it legally requires individuals and organisations to disclose when AI is used in political advertisements.

Whilst some will inevitably break these rules, the law at least makes social media companies legally liable if they are found not to be policing the use of deceptive AI creations in the political arena.

As the home of Silicon Valley and many of the world’s largest AI companies, including OpenAI, California’s decision to lead the way is a vital component of the new legislation’s potential effectiveness.

The Golden State has already collaborated with big tech companies to help students and educators engage with AI in the right way. As recently as August, Governor Newsom and Nvidia founder and CEO Jensen Huang co-signed a new initiative to ensure this process of education is realised.

Nevertheless, there are questions about the law’s potential effectiveness. Whilst it places more pressure on tech firms to ensure their platforms aren’t enabling AI-based disinformation, the legislation cannot account for foreign actors with malicious intent. This will undoubtedly be an issue in the 2024 election, as even the US Department of State acknowledged in a briefing paper issued earlier this year.

The stakes for geopolitics, and for democracy more generally, have never been as high as they are in 2024, and California’s new laws represent a proactive and positive first step against a potentially devastating global issue.

Looking forward, other states, and indeed other countries, must follow suit in protecting election integrity in the digital age. As we edge closer to November 5th, voters, platforms, and lawmakers must remain vigilant against the dangers democracy faces in an AI-driven world. If other stakeholders follow in Gavin Newsom’s footsteps, our democracies can rebuild and retain our trust in them.
