Byting Back: Trusting Government Content in the Age of AI

Blog Post
Feb. 22, 2024

“Your social security number has been frozen!”

“There’s a lawsuit against you for unpaid taxes!”

“Full discharge of all your federal student loans!”

Scammers posing as government representatives are a tale as old as time. While some of us can spot the red flags before falling victim to an attack, the rise of AI has made this problem more urgent by rendering scammers’ tactics more realistic. An apparent attempt at voter suppression during New Hampshire’s primary election in January demonstrated as much: voters received robocalls, purportedly from President Biden, telling them not to vote. With government imposter scams quickly becoming a top weapon of choice for bad actors seeking to extort Americans for financial or political gain, government agencies face a pressing need to give Americans a way to verify the true identity of the government or political actor contacting them. What’s more, effective means of authenticating official content can also support larger government efforts to build a more robust American digital public infrastructure.

The Federal Trade Commission has found that government imposters are among the most common scammers, with many fraudsters masquerading as the Social Security Administration, the Internal Revenue Service, or law enforcement agencies. The rise of accessible generative AI amplifies this problem. Bad actors now have an inexpensive and easy way to generate convincing phishing emails, text messages, and scam calls, and these capabilities will only grow more powerful and harder to detect as the technology matures. Online video ads featuring deepfaked celebrities like Oprah and Taylor Swift promoting Medicare and Medicaid scams are already proliferating. Many AI experts have forecast that the many elections of 2024 will become a breeding ground for AI-generated disinformation – the fake POTUS call in New Hampshire is just the start.

New cases of AI mimicking government agencies and political leaders will require the federal government to develop a new mechanism that helps Americans distinguish official communications from potential scams. Because fraudulent schemes derive their success from misrepresentation, the most basic defense against these attacks is authentication – corroborating that a person or organization contacting us is who they claim to be.

Content authentication in action

President Biden’s Executive Order on AI lays the groundwork for authentication measures. The EO directs the Department of Commerce to develop standards for content authentication and guidance for federal agencies to “make it easy for Americans to know that the communications they receive from their government are authentic.” In other words, much like the security measures embedded in physical currency to verify legitimate U.S. dollars, the Biden administration hopes to develop a widely adopted method that all Americans can use and trust to verify the authenticity and origin of government communications, from official photographs and videos to tax documents.

In the days following the EO’s release, a senior administration official announced the White House would be working with a private-sector collective known as the Coalition for Content Provenance and Authenticity (C2PA) to advance a technical system for labeling official government content. C2PA is an open technical standards body founded by Adobe, Arm, BBC, Intel, Microsoft, and Truepic in 2021, and it has developed an open-source protocol for “Content Credentials” that makes it easier for content creators and users to verify the authenticity and origin of digital content, whether synthetic or non-synthetic. The C2PA today has hundreds of global members across media, technology, academia, and non-profits contributing to this open standard.

C2PA’s protocol relies on what is known as “provenance data”: information describing the source of a piece of content, such as when and how it was created, as well as the history of downstream edits and changes made to it. This provenance data is cryptographically bound to a piece of content from the moment of creation and updated throughout the content’s lifecycle as it moves between users and across platforms. Key to C2PA’s protocol is its user-friendly design – the technology is meant to be accessible to non-technical users, allowing anyone to personally inspect and authenticate the content credentials of any piece of media that employs the standard – kind of like a nutrition label for online media, a clear disclosure method New America supports.
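To make the mechanics concrete, the sketch below shows, in deliberately simplified form, how provenance data can be cryptographically bound to a piece of content with a digital signature. This is an illustration of the general technique only, not the actual C2PA specification – the real standard embeds signed manifests in the media file itself and relies on certificate chains, and the record fields and “U.S. Example Agency” signer below are hypothetical.

```python
# Illustrative sketch only: a minimal form of binding provenance data to
# content with a digital signature. The real C2PA standard embeds signed
# "manifests" in the media file itself and uses certificate chains; the
# record fields and signer below are hypothetical simplifications.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def create_provenance_record(content: bytes, creator: str, private_key):
    """Describe the content's origin, then sign that description."""
    record = {
        "creator": creator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "edit_history": [],  # downstream edits would be appended here
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return record, private_key.sign(payload)


def verify_provenance_record(content: bytes, record: dict,
                             signature: bytes, public_key) -> bool:
    """Confirm the record matches the content and was signed by the key holder."""
    if hashlib.sha256(content).hexdigest() != record["content_sha256"]:
        return False  # the content was altered after the record was made
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False


# An agency signs a press photo; anyone holding the agency's public key
# can later confirm the photo's origin and integrity.
agency_key = Ed25519PrivateKey.generate()
photo = b"raw image bytes go here"
record, sig = create_provenance_record(photo, "U.S. Example Agency", agency_key)
assert verify_provenance_record(photo, record, sig, agency_key.public_key())
```

The design point the sketch captures is the one that matters for users: if either the content or its provenance record is tampered with after signing, verification fails, which is what makes the “nutrition label” trustworthy rather than merely informative.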

Using data to authenticate content is not a new concept. Understanding the source of information is a fundamental element of how we decide what to trust. However, the rise of AI-generated content has led many policymakers and technologists alike to argue that there should be a standardized, interoperable method for verifying the origins of content in today’s murky information environment. Senator Gary Peters (D-MI), who has been active on AI regulation, is a long-standing advocate for the use of content provenance technologies in government. During passage of the 2024 National Defense Authorization Act late last year, Sen. Peters secured the inclusion of a provision mandating the creation of a Pentagon pilot program “to assess the feasibility of establishing content standard technologies on DOD-produced and owned media content.”

Provenance technologies are not the only proposed solution to the content authentication problem, especially in the context of elections. In 2018, New America was involved in a pilot testing a blockchain-powered notarization platform that created a public mechanism for authenticating election observation reports and combating disinformation in advance of a referendum in Macedonia. Cambodia and Mexico offer additional examples of using tech for content verification: both have begun incorporating QR codes into government-issued documents, allowing recipients to quickly and easily verify that a document really comes from the agency it claims to – a pattern sketched below.
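The mechanics behind such QR codes are straightforward, and the short sketch below illustrates one way the pattern could work: the code encodes a verification URL on an official domain, tied to a fingerprint of the specific document. The verify.example.gov endpoint and the fingerprint scheme are assumptions for illustration, not how any particular government has implemented it.

```python
# A minimal sketch of the QR-code pattern: encode a verification URL that
# ties the printed document to a check on an official domain. The
# verify.example.gov endpoint is hypothetical; real schemes vary by agency.
import hashlib

import qrcode  # third-party package: pip install qrcode[pil]

document = b"official document contents"
fingerprint = hashlib.sha256(document).hexdigest()

# Scanning the code takes the recipient to an official domain, where the
# issuing agency can confirm the fingerprint matches a document it issued.
verify_url = f"https://verify.example.gov/doc/{fingerprint}"

img = qrcode.make(verify_url)  # renders the URL as a QR code image
img.save("document_qr.png")    # printed onto the issued document
```

The security of this approach rests on the domain, not the QR code itself – anyone can print a QR code, so recipients still need to check that the link resolves to a genuine government site, which is why digital literacy remains part of the defense.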

A little trust goes a long way

While important, technical solutions constitute only one piece of the puzzle when it comes to creating sustainable and viable strategies to mitigate the risks and harms associated with synthetic media. True resilience against fraud will involve integrating provenance technologies with strengthened digital literacy efforts to build a comprehensive defense accessible to all. Policymakers must take a human-centered approach, prioritizing policies that are accessible, equitable, and tailored to diverse needs – ensuring holistic protection against sophisticated scams for all citizens, especially the most vulnerable.

Harnessing tech to defraud or disinform the public is a perpetually evolving problem, requiring a proactive government approach that gives individuals as many tools as possible to contextualize, evaluate, and authenticate a piece of content. This strategy, sometimes referred to as “pre-bunking,” ensures every individual is empowered to think critically and decide for themselves what to trust in an information environment rife with attempts to deceive. Addressing the challenge of content authentication will also require robust legislative and regulatory frameworks that set standards and guidelines for digital content verification, such as those the C2PA is developing. By enacting policies that encourage provenance technologies and digital signatures for all government-related communications, policymakers can create a more secure and trustworthy information environment. As a further benefit, regulations could incentivize private entities, such as social media companies, to adopt standardized authentication protocols, promoting consistency and interoperability across platforms.

While the impact of impersonation schemes like the fake Biden robocall is difficult to quantify, the risk these attacks pose to democratic institutions and to Americans’ relationship with their government is clear. Managing digital public infrastructure isn’t just about technology – it’s about trust, and about securing buy-in from the communities digital solutions aim to reach. Creating secure and trustworthy channels of communication between government and the public can go a long way toward enhancing confidence in public institutions and strengthening democratic governance over the long term.