In a move to tackle the rising threat of AI-generated deepfake pornography and child sexual abuse material, the White House has secured voluntary commitments from several major AI vendors, including Adobe, Microsoft, OpenAI, Cohere, and Anthropic. These companies have pledged to safeguard their AI models and datasets from being misused to create non-consensual deepfakes.
As part of the agreement, these tech giants will work to responsibly source their data, implement feedback loops to guard against AI-generated sexual abuse content, and remove nude images from training datasets when appropriate. Notably, data provider Common Crawl, which signed on to part of the commitment, did not pledge to remove images or develop AI-specific tools, since it does not train AI models.
Although these pledges are non-binding and self-policed, the White House considers them a positive step in curbing the misuse of AI for harmful content. The announcement follows similar voluntary agreements from 2023 and highlights the growing concerns around deepfakes and AI-generated misinformation, especially in the lead-up to major global elections.
While some major AI vendors, such as Midjourney and Stability AI, did not sign on to the commitments, the White House continues to push for greater responsibility and transparency from AI companies. The move also mirrors international efforts in AI safety, as governments worldwide grapple with the challenges posed by rapid advances in artificial intelligence.
Despite the lack of formal regulations in the U.S., the initiative marks another step toward addressing the ethical concerns surrounding AI, even as some experts remain skeptical that voluntary agreements will prove effective in the long term.