
12 Days of OpenAI: Full o1 Model, ChatGPT Pro, Sora, o3 Announcement, and More


Editor’s Note: This post has been updated to cover all 12 Days of OpenAI. 

OpenAI kicked off an ambitious “12 Days of OpenAI” holiday campaign featuring product and feature releases every business day between December 5 and December 20. Just a few days in, we were already seeing some of the most significant updates in the company's history.

In the first three days alone, OpenAI released its full o1 reasoning model, debuted a $200 per month ChatGPT Pro subscription, launched reinforcement fine-tuning, and (finally!) fully released its state-of-the-art video generation model Sora.

What do you need to know about the releases?

I broke it all down with Marketing AI Institute founder and CEO Paul Roetzer on Episode 126 of The Artificial Intelligence Show.

Day 1: The Full o1 Model and ChatGPT Pro

On Day 1, which took place on Thursday, December 5, OpenAI announced the full release of their o1 reasoning model and a new premium subscription tier called ChatGPT Pro.

The o1 model is unique because, unlike other models, it takes time to think through problems using chain-of-thought reasoning. This allows o1 to solve much harder problems and reason through more complex tasks than general-purpose models like GPT-4o.

That unlocks all sorts of new use cases for AI.

“It’s predominantly for harder problems, like math, biology, engineering, and science-related ones,” says Roetzer.

Previously, users had access to o1-preview, a preview version of this full model. Now, we get the real thing. And it’s much more powerful—and more accurate—than its predecessor. OpenAI says the new version makes 34% fewer major mistakes while processing information 50% faster.

This model is also multimodal, meaning it can process both text and images together. It’s also been refined based on user feedback from the preview model.

ChatGPT Plus users have access to o1 now. So do ChatGPT Pro users. 

Never heard of ChatGPT Pro? That’s because it’s a completely new subscription tier that was also announced on Day 1.

ChatGPT Pro is a premium tier designed for ChatGPT power users. It costs $200 per month and gives you unlimited access to the new o1 model, as well as the smaller, faster o1-mini model, GPT-4o, and Advanced Voice Mode.

Day 2: Reinforcement Fine-Tuning

On Day 2, OpenAI announced it is expanding what it calls its “Reinforcement Fine-Tuning Research Program,” which enables developers and machine learning engineers to create expert models fine-tuned to excel at specific sets of complex, domain-specific tasks.
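OpenAI hasn't published the full technical details of the program, but the core mechanic behind reinforcement fine-tuning is a grader: a function that scores the model's output against a known-correct answer, with high-scoring reasoning reinforced during training. A purely illustrative toy grader (all names here are hypothetical, not OpenAI's API) might look like this:

```python
def grade_ranked_answer(model_answers, correct_answer):
    """Toy grader: returns a score between 0 and 1.

    Awards full credit if the correct answer is the model's top
    candidate, partial credit if it appears lower in the ranked
    list, and zero if it is missing entirely.
    """
    for rank, answer in enumerate(model_answers):
        if answer.strip().lower() == correct_answer.strip().lower():
            return 1.0 / (rank + 1)  # reciprocal-rank scoring
    return 0.0

# Example: the model ranks candidate gene names for a rare-disease case,
# similar in spirit to the demo OpenAI showed on Day 2.
score = grade_ranked_answer(["FBN1", "TGFBR2", "ACTA2"], "TGFBR2")
print(score)  # 0.5 — correct answer ranked second
```

During reinforcement fine-tuning, scores like these become the reward signal that steers the model toward reasoning paths that earn higher grades.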

“This is an announcement for developers,” says Roetzer. The average business user likely won’t be building on top of this feature on their own, but rather teaming up with a developer or internal IT team.

There’s nothing wrong with that. In fact, it’s a valuable opportunity if you have development resources you can tap into and a strong use case for a domain-specific model.

It also hints at a future where every company, every enterprise, and perhaps every individual (regardless of technical skill) can custom train models.

Eventually, we may even be able to train fine-tuned models for any domain (or even for individual company departments) the same way we build custom GPTs, he says.

Day 3: Sora

On Day 3, OpenAI finally released Sora, its state-of-the-art video generation model. Sora was initially previewed earlier in 2024, then its release was delayed as OpenAI ran into development obstacles.

Now, the company has developed a new, faster version of Sora called Sora Turbo—and ChatGPT Plus and Pro users can access it at www.sora.com.

The new Sora allows you to generate videos from 5 to 20 seconds long based on a text prompt or simply by uploading an image. 

The videos can be generated in widescreen, vertical, or square aspect ratios. And they can be up to 1080p in resolution.

Sora also includes a storyboard tool that allows you to precisely control your videos frame by frame. And, you can use your own assets to extend, remix, or blend content.

Access to Sora is tiered. ChatGPT Plus users can generate up to 50 videos monthly at 480p resolution (or fewer at 720p) at no additional cost, while Pro subscribers get 10 times more usage, higher resolutions, and longer durations. 

The service is currently available in most regions where ChatGPT operates, though notably not in the UK, Switzerland, or the European Economic Area.

As we test out Sora, says Roetzer, quality and speed are going to be huge factors to assess.

AI video generation, while impressive today, still suffers plenty of problems with the consistency of video quality and outputs. 

“With video generation, it’s really hard to maintain character consistency and frame consistency,” he says.

Today’s tools can also take a long time to generate a single video. In many cases, you may wait minutes for a single video of a few seconds, then find you need to regenerate the video over and over to get closer to what you actually want.

Whether or not Sora addresses these issues remains to be seen. But, Roetzer notes, it may not have to be perfect to have a big impact. In many films and videos, the average scene is just a few seconds long. If Sora can nail down a few seconds of video at extremely high quality in a relatively short amount of time, that could disrupt how video and film work gets done.

“What if it’s really, really good at five seconds?” says Roetzer. “That’s enough because you can just stitch together frame by frame—and all of a sudden start building some really incredible things. So I expect adoption of this to be massive if it works really well.”

Day 4: Canvas

On Day 4, OpenAI announced the general release of Canvas in ChatGPT.

Canvas is a side panel that displays ChatGPT's responses on a shared, editable, and sharable page, so you can collaborate with ChatGPT more effectively on writing and coding tasks.

Day 5: ChatGPT in Apple Intelligence

One of the most significant announcements so far in this event was OpenAI's integration with Apple Intelligence.

"This is the first time where I feel like we're starting to see the vision for Apple Intelligence," says Roetzer.

The integration brings major improvements to Siri's capabilities, allowing it to handle complex queries with more natural, context-aware responses powered by ChatGPT. Users can seamlessly move between Siri and the ChatGPT app, with Siri able to access various ChatGPT tools.

Roetzer says he’s historically been a big critic of Siri’s capabilities, finding it unable to handle even the most basic queries well. But this new functionality thanks to ChatGPT actually makes Siri much more helpful than before—and it’s a good start to what he hopes is eventually an even smarter Siri.

“I could absolutely see myself using Siri now way more,” he says.

Day 6: Advanced Voice Mode Paired with Video

On Day 6, ChatGPT’s Advanced Voice Mode got a major upgrade. It now includes video capabilities. Users can interact with ChatGPT through their phone's camera, allowing the AI to see and respond to what's happening in real-time.

“The Advanced Voice improvements with video and screen share are huge,” says Roetzer. “We now have in ChatGPT the ability to show it something.”

That includes not just video, but screen sharing. When testing it, he says, you can immediately start to see all sorts of valuable applications across business and life. 

For instance, screen sharing while doing your work or having Advanced Voice Mode help you on the go while traveling or shopping could be very useful.

"If you're traveling, if you're looking at signs and trying to understand it, if you're looking at products in a store—you now have vision capability on your phone to do this with," says Roetzer.

As a festive bonus, on Day 6 OpenAI also introduced a limited-time “Santa Mode” in Advanced Voice Mode. Santa Mode lets you chat with a virtual version of Santa as you go about your holiday errands.

Day 7: Projects in ChatGPT

On Day 7, OpenAI announced better organization in ChatGPT with a new feature called Projects, designed to help users manage their AI conversations more effectively. 

The system works like a sophisticated folder structure, allowing users to group related conversations and resources together in a more intuitive way.

Projects appears in the ChatGPT sidebar, where users can create new projects, customize them with different colors, and add specific instructions to guide how ChatGPT responds within that particular project. 

The feature also allows users to attach files and add existing chat conversations, making it easier to keep track of ongoing work and conversations.

Roetzer found the feature useful. But it currently has one big limitation: You can’t organize chats with custom GPTs in Projects, which significantly limits the feature’s utility if you use GPTs often.

The good news? OpenAI’s Chief Product Officer responded to a post Roetzer published on X about the issue, indicating the team is working on adding this functionality now.

Day 8: Upgrades to ChatGPT Search

OpenAI announced on Day 8 some improvements to ChatGPT Search. 

Not only is search faster, but it’s now also optimized for mobile. The team demonstrated a clean visual list of business results for a query seeking a Mexican restaurant in San Francisco with a heated outdoor patio.

Search is also now going to be enabled in Advanced Voice Mode. The team talked to ChatGPT to identify current holiday events happening in the coming week in different locations around the world—and got up-to-date information about the locations and hours of events, as well as the weather on certain days.

Day 9: Developer Goodies

On Day 9, OpenAI announced a major expansion of its developer tools and capabilities, headlined by the release of OpenAI o1 to their API. The new reasoning model, designed for complex multi-step tasks, is showing impressive improvements over its preview version, setting new benchmarks in areas like coding, mathematics, and visual understanding.

The company is also making significant strides in real-time AI applications with updates to their Realtime API. They've introduced WebRTC support for easier integration of voice features, and notably, they're reducing costs substantially—dropping audio token prices by 60% for their GPT-4o model. A new mini version is also available at one-tenth of the previous rates.

A particularly innovative addition is their new "Preference Fine-Tuning" system, which allows developers to customize AI models based on comparative examples rather than fixed targets. This approach is especially effective for subjective tasks where tone and style matter more than strict correctness. 
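OpenAI didn't detail the exact data schema in this announcement, but preference fine-tuning generally trains on comparative pairs: the same prompt paired with a preferred and a non-preferred completion. As a rough sketch (field names here follow a common preference-pair convention and are assumptions, not confirmed API details), one training record might be built like this:

```python
import json

# One preference-pair record: the same prompt with a preferred and a
# non-preferred completion. Field names are illustrative assumptions.
record = {
    "input": {
        "messages": [
            {"role": "user", "content": "Write a friendly subject line for our spring sale."}
        ]
    },
    "preferred_output": [
        {"role": "assistant", "content": "Spring savings are here! Come say hi."}
    ],
    "non_preferred_output": [
        {"role": "assistant", "content": "SALE NOW. BUY IMMEDIATELY."}
    ],
}

# Training files are typically JSONL: one JSON object per line.
line = json.dumps(record)
print(line)
```

Rather than telling the model a single "right" answer, each pair teaches it which of two plausible outputs better matches the desired tone or style, which is why this approach suits subjective tasks.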

To make their tools more accessible to a broader range of developers, OpenAI is also releasing official Software Development Kits (SDKs) for both Go and Java programming languages, adding to their existing support for Python, Node.js, and .NET.

Day 10: 1-800-CHATGPT

Day 10 took a bit of a different approach to innovation...by revisiting the past. 

On this day, OpenAI launched a traditional phone number for ChatGPT: 1-800-CHATGPT. 

U.S. users can now call 1-800-242-8478 to speak directly with ChatGPT, with each user getting 15 minutes of free access per month. The service is also available globally through WhatsApp messaging, though OpenAI is still working on integrating these conversations with existing ChatGPT accounts.

It may sound like a weird move, but the logic could make sense: This experiment could expand the AI assistant's reach to users who might not typically engage with mobile apps or web interfaces.

Day 11: Working with Apps

On Day 11, OpenAI announced significant upgrades to ChatGPT's macOS app, transforming it into a more versatile digital assistant that can work across multiple applications.

A key addition is the integration of Advanced Voice Mode into the desktop app, enabling users to interact with ChatGPT through voice commands without needing to switch between windows or open a browser. This hands-free functionality makes it easier to multitask while getting AI assistance for tasks like drafting emails or brainstorming ideas.

The most significant update, though, was the new "Working with Apps" feature, which allows ChatGPT to interact directly with various Mac applications. For developers, this means the AI can analyze code within editors like Warp or Xcode, offering suggestions and explanations, or even writing new code snippets.

The functionality also extends beyond coding to productivity apps like Apple Notes, Quip, and Notion, where ChatGPT can help with writing and planning tasks while citing internet sources.

Day 12: o3

OpenAI concluded its "12 Days of Shipmas" event with potentially its biggest announcement yet: the o3 family of AI models. This next-generation system builds upon the company's earlier o1 "reasoning" model but with significantly enhanced capabilities that, according to OpenAI, approach artificial general intelligence under certain conditions.

The o3 family includes both a standard model and o3-mini, a smaller version fine-tuned for specific tasks. What sets these models apart is their advanced reasoning abilities—they can effectively fact-check themselves and think through problems step-by-step using what OpenAI calls a "private chain of thought."

While this deliberative process takes longer than traditional AI responses, it leads to more reliable results, particularly in complex domains like physics, science, and mathematics.

A notable innovation is the ability to adjust the model's "reasoning time" with low, medium, and high compute settings. On benchmarks, the high compute setting has achieved remarkable results, including an 87.5% score on the ARC-AGI test designed to evaluate general intelligence capabilities. However, this level of performance comes at a significant cost—potentially thousands of dollars per challenge.

The release comes with important caveats. Safety researchers have found that these reasoning models may attempt to deceive users at higher rates than conventional AI systems. OpenAI acknowledges these risks and says it's using a new "deliberative alignment" technique to ensure the models adhere to safety principles.

The models aren't immediately available to the public yet. Safety researchers can apply for o3-mini preview access today, with broader availability planned for late January. The timeline for the full o3 model's release remains unspecified.
