Scarlett Johansson Demands Deepfake Ban as AI Video Goes Viral

Scarlett Johansson is calling on the U.S. government to pass stricter laws regulating the use of artificial intelligence (AI) after an AI-generated deepfake video featuring her and other Jewish celebrities went viral. The video, which spread rapidly across social media platforms, falsely depicted Johansson and other well-known figures, including Jerry Seinfeld, Mila Kunis, Jack Black, Drake, Jake Gyllenhaal, and Adam Sandler, wearing t-shirts that displayed the name “Kanye” alongside an image of a middle finger with a Star of David in the center.

The video surfaced shortly after rapper Ye (formerly known as Kanye West) returned to X (formerly Twitter) to post antisemitic comments. He also attempted to sell shirts featuring a swastika on his website, which has since been taken down following widespread backlash. The misleading deepfake clip appeared to show Johansson and other celebrities making a statement against West, but the actress has confirmed that she never participated in such a video and was deeply disturbed by the way AI was used to fabricate a false reality.

Johansson Speaks Out Against AI Manipulation

In a statement to People, Johansson condemned the video and stressed the urgent need for legislative action to prevent AI from being used for deceptive and harmful purposes.

“I am a Jewish woman who has no tolerance for antisemitism or hate speech of any kind,” Johansson stated. “But I also firmly believe that the potential for hate speech multiplied by AI is a far greater threat than any one person who takes accountability for it. We must call out the misuse of AI, no matter its messaging, or we risk losing a hold on reality.”

Her statement highlights a growing concern among experts and activists about the dangers of AI-generated content, which can be used to spread misinformation, manipulate public opinion, and create fraudulent media that blurs the line between fact and fiction.

A Growing Concern: AI’s Role in Misinformation and Defamation

AI-generated deepfake videos have increasingly been used to create misleading content, whether for political, commercial, or malicious personal attacks. Johansson is not the first celebrity to be targeted by AI deepfakes. In recent years, several public figures, including actors, politicians, and journalists, have been victims of AI-generated content that distorts their images, voices, and actions.

In 2023, Johansson took legal action against an AI app developer for using her likeness in an online advertisement without permission. The ad featured an AI-generated version of her promoting a product she had no association with. More recently, she also publicly called out OpenAI for using a voice in ChatGPT that bore a striking resemblance to hers, which led to the company discontinuing the use of that voice. Johansson’s legal battles and public advocacy against AI misuse underline the importance of implementing stronger protections against the unauthorized use of AI-generated images and voices.

The Call for AI Legislation: Where Does the U.S. Government Stand?

Despite growing concerns, the U.S. government has yet to establish comprehensive federal regulations to address the risks posed by AI. While some lawmakers have introduced bills aimed at tackling specific issues—such as a proposal to combat sexually explicit deepfakes—progress on broader AI regulation has been slow.

President Joe Biden signed an executive order in 2023 outlining safety guidelines for AI development and use. However, after taking office in January 2025, President Donald Trump reversed this executive order, arguing that AI regulations should not hinder technological advancement. Instead, his administration introduced a new executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” which promotes AI innovation without what the administration describes as “excessive ideological oversight.” AI ethics experts have expressed concern about this policy shift, arguing that the absence of robust regulations could lead to increased misuse of AI and the wider spread of misinformation.

At the state level, efforts to regulate AI have faced significant challenges as well. In September 2024, California Governor Gavin Newsom vetoed a major AI safety bill, citing concerns that strict AI laws could stifle innovation and economic growth. This decision was widely criticized by digital rights activists and lawmakers pushing for stronger protections against AI-generated misinformation.

Meanwhile, the U.S. and the U.K. recently declined to sign an international AI declaration aimed at promoting ethical AI development and responsible usage. This refusal has led to growing concerns that the lack of international cooperation could make it more difficult to prevent the misuse of AI technologies worldwide.

The Need for Urgent Action

Johansson’s statement underscores the urgency of implementing laws that protect individuals from AI-driven exploitation. She emphasized that AI regulation is not a partisan issue but a matter of safeguarding fundamental rights and preserving the integrity of information in the digital age.

“I urge lawmakers to make the passing of legislation limiting AI use a top priority,” she said. “It is a bipartisan issue that enormously affects the immediate future of humanity at large.”

Experts in AI ethics and digital privacy have echoed Johansson’s concerns, warning that without immediate legal measures, AI-generated misinformation could have severe consequences for democracy, privacy, and personal security. The rapid advancement of AI means that deepfake technology is becoming more sophisticated, making it increasingly difficult to distinguish real content from fabricated media. The consequences could be serious, from damaging reputations to influencing elections and inciting violence.

Public and Industry Reactions

The controversy surrounding AI deepfakes has sparked debate across various industries, from entertainment to tech and politics.

Media analysts have noted that AI-generated content has already been used in deceptive ways, including political campaigns, where deepfakes have been employed to create fake endorsements or manipulate public perception. Cybersecurity experts warn that AI-powered misinformation could become a powerful tool for malicious actors, including foreign governments attempting to interfere in democratic processes.

On the corporate side, major tech companies like Google, Meta, and Microsoft have been working on AI detection tools to combat the spread of deepfakes. However, critics argue that tech companies are not doing enough and that without government regulations, the industry’s self-policing efforts will remain inadequate.

What’s Next?

As AI technology continues to advance, Johansson’s call for stronger regulations adds to the growing pressure on lawmakers to address the ethical and security challenges posed by AI. The debate over AI legislation is expected to intensify in the coming months, especially as deepfake technology becomes more widespread and its potential dangers become harder to ignore.

In the meantime, experts recommend that individuals remain cautious about AI-generated content online. Media literacy, fact-checking, and awareness campaigns are crucial in helping the public navigate the evolving landscape of AI-generated misinformation.

Johansson’s advocacy for AI legislation has brought renewed attention to the issue, and her involvement may encourage more public figures to speak out against AI misuse. Whether lawmakers will take decisive action in response remains to be seen, but one thing is clear: the conversation around AI regulation is only just beginning.
