AI-generated images and audio are already making their way into the 2024 presidential election cycle. In an effort to stem the flow of misinformation ahead of what is expected to be a contentious election, Google announced Wednesday that it will require political advertisers to “prominently disclose” whenever their ads contain AI-altered or AI-generated content. The new rules build on the company’s existing manipulated media policy and take effect in November.
“Given the increasing prevalence of tools that create synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include digitally altered or generated content,” a Google spokesperson said in a statement obtained by The Hill. Small, inconsequential edits, such as resizing images, minor background cleanup or color correction, will still be allowed; ads that depict people doing or saying things they never actually did, or that otherwise alter real footage, will require the disclosure.
Google’s policy requires ads that use AI-generated elements to be labeled in a “clear and conspicuous” manner that is easily visible to the user. Ads will first be screened by Google’s automated systems and then reviewed by a human as needed.
Google’s move stands in contrast to those of other social media companies. X/Twitter recently announced that it has reversed its previous position and will allow political ads on the platform, while Meta is taking heat for its own lackluster ad-moderation efforts.
The Federal Election Commission has also begun looking into the issue. Last month it sought public comment on amending a standing rule that prohibits candidates or their agents from defrauding other candidates or political parties, to clarify that the related statutory prohibition also applies to “intentionally deceptive artificial intelligence campaign ads.”