By Marty Swant • February 16, 2024
Nearly two dozen tech companies are making more commitments to address harmful AI content related to global elections.
With votes in more than 40 countries expected this year, major AI providers and online platforms have hashed out a new agreement aimed at detecting and preventing AI-generated misinformation. The agreement, “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” was announced today at the Munich Security Conference and includes eight commitments related to AI-generated images, audio and video.
The giants of ad-tech have pledged to collaborate, including Google, Meta, Adobe, IBM, Amazon and Microsoft, as have AI startups like OpenAI, Anthropic, Stability AI and Inflection AI. The agreement also has signatories from major social platforms including Snap, LinkedIn, TikTok and X. Another noteworthy AI startup signing on is ElevenLabs, whose technology experts have said was used to make an election-related audio deepfake resembling the voice of President Joe Biden.
The commitments are mostly symbolic and don’t include enforcement or accountability mechanisms. However, experts see the efforts as a step toward addressing the global nature of problematic AI content. The agreement also comes a week after Meta, OpenAI and Google announced plans to start labeling AI-generated content using standards set by the Coalition for Content Provenance and Authenticity (C2PA).
“Elections are the beating heart of democracies,” said Christoph Heusgen, chairman of the Munich Security Conference, in a statement about the agreement. “The Tech Accord to Combat Deceptive Use of AI in 2024 elections is a crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices.”
New commitments include developing and implementing tools to combat election misinformation, assessing AI models to understand their risks, and detecting and addressing harmful content across platforms. The companies are also committing to being transparent about how they address these issues, to working with governments and academics, and to fostering public awareness and media literacy.
Josh Lawson, director of AI election efforts at the Aspen Institute, said the agreement is a “positive step in a much longer marathon against civic manipulation.” However, he also noted that the agreement’s scope covers only images, audio and video and doesn’t address concerns about AI-generated text.
“This is a commitment not to sit on metadata,” he said. “This doesn’t address if you use a jailbroken or open source model that doesn’t incorporate C2PA. It’s not a commitment to label that.”
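To make Lawson’s caveat concrete, here is a minimal Python sketch of how signed provenance metadata works in principle. This is not the actual C2PA format, which embeds certificate-signed manifests inside the media file itself; the demo signing key, field names and make_manifest/verify_manifest helpers below are illustrative assumptions. The takeaway mirrors his point: a valid manifest can attest to a file’s origin, but media from a model that never signs its output simply comes back “unknown.”

```python
import hashlib
import hmac
import json

# Illustrative only: real C2PA manifests are signed with certificate
# chains and embedded in the media file itself. This demo key and these
# helper names are assumptions made to show the underlying idea.
SIGNING_KEY = b"demo-signing-key"

def make_manifest(media_bytes: bytes, generator: str) -> dict:
    """Produce a signed provenance claim for a piece of media."""
    claim = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(media_bytes: bytes, manifest) -> str:
    """Check a manifest. Key point: a missing manifest proves nothing."""
    if manifest is None:
        # Stripped metadata, or a model that never signed its output.
        return "unknown"
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return "tampered"
    if manifest["claim"]["content_sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return "tampered"
    return "signed by " + manifest["claim"]["generator"]

image = b"...raw image bytes..."
labeled = make_manifest(image, "example-image-model")
print(verify_manifest(image, labeled))  # signed by example-image-model
print(verify_manifest(image, None))     # unknown -- the gap Lawson describes
```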
The cybersecurity firm McAfee is also among the signatories. Last month, the company debuted Project Mockingbird, an AI detection tool that analyzes videos for AI-generated audio used in scams and other harmful content. During a demo with Digiday, McAfee CTO Steve Grobman said it’s vital to create comprehensive tools while also educating consumers to improve AI literacy.
“If I sent you an image of a political candidate doing something, people in 2024 understand that image might not be authentic,” said Grobman. “Even the term ‘photoshopped’ is a verb, right? We need a bit more healthy skepticism where, if you see a political candidate in a video or in an audio, you take pause and not only take advantage of tools but also look at the reputation of the poster.”
Trust requires transparency, said Dana Rao, Adobe’s general counsel and chief trust officer. In a statement about the accord, Rao said the efforts are important for building “the infrastructure we need to provide context for the content consumers are seeing online.” Adobe is also one of the companies on the C2PA’s steering committee along with Google, Publicis Groupe, the BBC and others.
“With elections happening around the world this year, we need to invest in media literacy campaigns to ensure people know they can’t trust everything they see and hear online, and that there are tools out there to help them understand what’s true,” Rao said.