Several current and former OpenAI scientists are voicing their opinions about the company’s first venture into social media: the Sora app, which features a TikTok-like feed of AI-generated videos, including numerous deepfakes of Sam Altman. Sharing their thoughts on X, these researchers appear conflicted about how the launch squares with OpenAI’s nonprofit mission of advancing AI for the greater good.
“AI-driven feeds are unsettling,” wrote John Hallman, a pretraining researcher at OpenAI, in a post on X. “I’ll admit I was uneasy when I first heard about Sora 2’s release. Still, I believe the team put in their utmost effort to craft a positive user experience… We’re committed to ensuring AI is a force for good, not harm.”
Boaz Barak, who works at OpenAI and teaches at Harvard, responded: “I feel a similar blend of anxiety and enthusiasm. Sora 2 is a technical marvel, but it’s too soon to celebrate avoiding the problems that have plagued other social platforms and deepfakes.”
Rohan Pandey, a former OpenAI scientist, used the occasion to promote his new venture, Periodic Labs, which is staffed by ex-AI lab researchers focused on AI for scientific breakthroughs: “If you’re not interested in building an endless AI-powered TikTok clone but want to work on AI that advances core science… join us at Periodic Labs.”
Many other posts echoed these sentiments.
The introduction of Sora brings to light a recurring dilemma for OpenAI. While it’s the world’s fastest-growing consumer tech firm, it’s also an advanced AI research lab with a high-minded nonprofit mission. Some ex-OpenAI staff I’ve spoken with believe the consumer side can, at least in principle, further the mission: ChatGPT, for example, helps fund research and broadens access to AI.
OpenAI’s CEO, Sam Altman, addressed this in a post on X on Wednesday, explaining why the company is dedicating significant resources to a social media app powered by AI:
“Our main need for capital is to build AI capable of scientific work, and our research is overwhelmingly focused on AGI,” Altman stated. “But it’s also rewarding to introduce people to exciting new technologies and products, bring some joy, and hopefully generate revenue to support our computing needs.”
Altman went on: “When we launched ChatGPT, many questioned its necessity and asked about AGI. The truth is, the best path for a company isn’t always straightforward.”
But at what stage does OpenAI’s commercial activity overshadow its nonprofit objectives? In other words, when will OpenAI turn down a lucrative, growth-oriented opportunity because it clashes with its mission?
This issue is especially relevant as regulators examine OpenAI’s shift to a for-profit model, a move necessary for raising more funds and eventually going public. Last month, California Attorney General Rob Bonta said he is “especially focused on making sure OpenAI’s stated safety mission as a nonprofit remains a priority” during the company’s restructuring.
Skeptics have argued that OpenAI’s mission is just a branding tactic to attract talent away from major tech firms. Yet, many within OpenAI maintain that the mission is a key reason they chose to work there.
At present, Sora’s reach is limited; the app is only a day old. Still, its launch marks a major step forward for OpenAI’s consumer offerings and exposes the company to the same incentives that have troubled social media for years.
Unlike ChatGPT, which is designed for productivity, OpenAI describes Sora as a platform for entertainment — a space to create and share AI videos. The experience is more reminiscent of TikTok or Instagram Reels, both known for their highly engaging, addictive content loops.
OpenAI says it aims to steer clear of these issues, stating in a blog post about Sora’s launch that “concerns about doomscrolling, addiction, isolation, and RL-sloptimized feeds are at the forefront.” The company emphasizes it isn’t optimizing for time spent on the feed, but rather for creativity. OpenAI also plans to notify users if they’ve been scrolling too long and will mostly show them content from people they know.
This is a more cautious approach than Meta’s Vibes — another AI-driven short video feed released last week — which appears to have launched with fewer protections. As Miles Brundage, a former OpenAI policy head, notes, there will likely be both positive and negative uses for AI video feeds, much as we’ve seen with chatbots.
Nevertheless, as Altman has often pointed out, no one sets out to make an addictive app. The structure of a feed naturally leads in that direction. OpenAI has even faced issues with ChatGPT’s tendency toward sycophancy, which the company attributes to certain training methods and says was not intentional.
Altman addressed what he calls “the major misalignment of social media” in a podcast episode from June.
“A significant error of the social media age was that feed algorithms brought about many unintended negative impacts on society and individuals, even though they were doing what users wanted — or what someone thought users wanted — in the moment, which was to keep them engaged on the platform.”
It’s still too early to tell how well Sora aligns with its users or OpenAI’s overarching mission. Some users have already observed engagement-boosting features in the app, like animated emojis that pop up when you like a video — seemingly designed to give users a quick dopamine hit for interacting.
The real challenge will be how OpenAI chooses to develop Sora moving forward. With AI already dominating traditional social media feeds, it’s likely that AI-centric feeds will soon become mainstream. Whether OpenAI can expand Sora without repeating the errors of previous platforms remains an open question.