Addressing the Harms of Deepfakes: A Call to Action for Policymakers and Technology Companies
The Rise of Deepfakes: A Threat to Democracy, Creativity, and Privacy
The rise of deepfakes poses a significant threat to many aspects of society, from democracy to individual privacy. Deepfakes, realistic AI-generated audio, video, or images that recreate a person’s likeness, can be used by bad actors to deceive the public, undermine elections, exploit artists and performers, and harm everyday people.
Christina Montgomery, Chief Privacy & Trust Officer at IBM, and Joshua New, Senior Fellow at the IBM Policy Lab, highlight the urgent need for both technical and legal solutions to the challenges posed by deepfakes. IBM has taken a proactive stance by signing the Munich Tech Accord, pledging to help mitigate the risks of AI being used to deceive the public and undermine elections, and has advocated for regulations that precisely target harmful applications of the technology.
In their recent blog post, Montgomery and New outline three key priorities for policymakers to mitigate the harms of deepfakes:
1. Protecting elections: Democracy relies on free and fair elections, and deepfakes can be used to deceive voters by impersonating public officials and candidates. Policymakers should prohibit the distribution of materially deceptive deepfake content related to elections to safeguard the integrity of the electoral process.
2. Protecting creators: Musicians, artists, actors, and other creators are at risk of having their likenesses exploited by bad actors using deepfakes for deceptive advertising and other malicious purposes. Policymakers should hold individuals and platforms accountable for producing and disseminating unauthorized deepfakes of creators’ performances.
3. Protecting people’s privacy: Everyday people, particularly women and minors, are vulnerable to harm from deepfakes, such as nonconsensual intimate imagery. Policymakers should create strong criminal and civil liability for those who distribute nonconsensual intimate audiovisual content, including AI-generated deepfakes.
IBM supports legislative efforts such as the Protect Elections from Deceptive AI Act, the NO FAKES Act, and the Preventing Deepfakes of Intimate Images Act to address these issues. The company also advocates for the EU AI Act, which covers deepfakes and imposes transparency requirements to identify inauthentic content.
In conclusion, addressing the challenges posed by deepfakes requires a comprehensive approach that combines changes in both law and technology. IBM encourages policymakers to act swiftly to target harmful applications of deepfakes and to ensure that AI remains a positive force for the global economy and society.
For more information, you can download the PDF from the IBM website. For media inquiries, please contact Ashley Bright at brighta@us.ibm.com.
Source: IBM
By Christina Montgomery, Chief Privacy & Trust Officer, IBM and Joshua New, Senior Fellow, IBM Policy Lab
Date: Feb 28, 2024