OpenAI’s Sora makes disinformation extremely easy and extremely real


In its first three days, users of a new app from OpenAI deployed artificial intelligence to create strikingly realistic videos of ballot fraud, immigration arrests, protests, crimes and attacks on city streets — none of which took place.

The app, called Sora, requires just a text prompt to create almost any footage a user can dream up. Users can also upload images of themselves, allowing their likeness and voice to be incorporated into imaginary scenes. The app can integrate certain fictional characters, company logos and even deceased celebrities.

Sora — as well as Google’s Veo 3 and other tools like it — could become increasingly fertile breeding grounds for disinformation and abuse, experts said. While worries about AI’s ability to enable misleading content and outright fabrications have risen steadily in recent years, Sora’s advances underscore just how much easier such content is to produce, and how much more convincing it is.

Increasingly realistic videos are more likely to lead to consequences in the real world by exacerbating conflicts, defrauding consumers, swinging elections or framing people for crimes they did not commit, experts said.

“It’s worrisome for consumers who every day are being exposed to God knows how many of these pieces of content,” said Hany Farid, a professor of computer science at the University of California, Berkeley, and a co-founder of GetReal Security. “I worry about it for our democracy. I worry for our economy. I worry about it for our institutions.”

OpenAI has said it released the app after extensive safety testing, and experts noted that the company had made an effort to include guardrails.

“Our usage policies prohibit misleading others through impersonation, scams or fraud, and we take action when we detect misuse,” the company said in a statement in response to questions about the concerns.

In tests by The New York Times, the app refused to generate imagery of famous people who had not given their permission and declined prompts that asked for graphic violence. It also denied some prompts asking for political content.

“Sora 2’s ability to generate hyperrealistic video and audio raises important concerns around likeness, misuse and deception,” OpenAI wrote in a document accompanying the app’s debut. “As noted above, we are taking a thoughtful and iterative approach in deployment to minimize these potential risks.”

(The Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to AI systems. The two companies have denied those claims.)

The safeguards, however, were not foolproof.

Sora, which is currently accessible only through an invitation from an existing user, does not require users to verify their accounts — meaning they may be able to sign up with a name and profile image that are not theirs. (To create an AI likeness, users must upload a video of themselves using the app. In tests by the Times, Sora rejected attempts to make AI likenesses using videos of famous people.) The app will generate content involving children without issue, as well as content featuring long-dead public figures such as the Rev. Martin Luther King Jr. and Michael Jackson.

The app would not produce videos of President Donald Trump or other world leaders. But when asked to create a political rally with attendees wearing “blue and holding signs about rights and freedoms,” Sora produced a video featuring the unmistakable voice of former President Barack Obama.

Until recently, videos were reasonably reliable as evidence of actual events, even after it became easy to edit photographs and text in realistic ways. Sora’s high-quality video, however, raises the risk that viewers will lose all trust in what they see, experts said. Sora videos feature a moving watermark identifying them as AI creations, but experts said such marks could be edited out with some effort.

“It was somewhat hard to fake, and now that final bastion is dying,” said Lucas Hansen, a founder of CivAI, a nonprofit that studies the abilities and dangers of artificial intelligence. “There is almost no digital content that can be used to prove that anything in particular happened.”

Such an effect is known as the liar’s dividend: increasingly high-caliber AI videos allow people to dismiss authentic content as fake.

Imagery presented in a fast-moving scroll, as it is on Sora, is conducive to quick impressions but not rigorous fact-checking, experts said. They said the app was capable of generating videos that could spread propaganda and present sham evidence that lent credence to conspiracy theories, implicated innocent people in crimes or inflamed volatile situations.

Although the app refused to create images of violence, it willingly depicted convenience store robberies and home intrusions captured on doorbell cameras. A Sora developer posted a video from the app showing Sam Altman, the CEO of OpenAI, shoplifting from Target.

It also created videos of bombs exploding on city streets and other fake images of war — content that is considered highly sensitive for its potential to mislead the public about global conflicts. Fake and outdated footage has circulated on social media in all recent wars, but the app raises the prospect that such content could be tailor-made and delivered by perceptive algorithms to receptive audiences.

“Now I’m getting really, really great videos that reinforce my beliefs, even though they’re false, but you’re never going to see them because they were never delivered to you,” said Kristian J. Hammond, a professor who runs the Center for Advancing Safety of Machine Intelligence at Northwestern University. “The whole notion of separated, balkanized realities, we already have, but this just amplifies it.”

Farid, the Berkeley professor, said Sora was “part of a continuum” that had only accelerated since Google unveiled its Veo 3 video generator in May.

Even he, an expert whose company is devoted to spotting fabricated images, now struggles at first glance to distinguish real from fake, Farid said.

“A year ago, more or less, when I would look at it, I would know, and then I would run my analysis to confirm my visual analysis,” he said. “And I could do that because I look at these things all day long and I sort of knew where the artifacts were. I can’t do that anymore.”



