Anthropic just launched Claude 3.5 Sonnet, the first model in its upcoming Claude 3.5 family. And it's already making waves in the AI world for its impressive capabilities and speed.
The new model sets new industry benchmarks for intelligence while maintaining the speed and cost-effectiveness of Anthropic's mid-tier offerings. In fact, Anthropic claims Claude 3.5 Sonnet outperforms both its predecessor, Claude 3 Opus, and competitor models like OpenAI's GPT-4o.
Claude 3.5 Sonnet excels in graduate-level reasoning, undergraduate-level knowledge, and coding proficiency. It's also blazing fast, operating twice as quickly as Claude 3 Opus.
But that's not all. The model shows particular strength in visual reasoning tasks, such as interpreting charts and graphs. And it comes with Anthropic's long-time commitment to safety and privacy, having undergone rigorous testing and training to reduce misuse.
Claude 3.5 Sonnet is now available for free at Claude.ai and in the Claude iOS app. If you have a Claude Pro or Team plan, you'll get higher rate limits. You can also access it via the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI.
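For developers, here's a minimal sketch of what API access can look like using Anthropic's Python SDK. The model ID string and the example prompt are assumptions for illustration; check Anthropic's documentation for the current identifiers and usage details.

```python
# Minimal sketch: calling Claude 3.5 Sonnet via the Anthropic Python SDK.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumed model ID; verify against Anthropic's docs
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Rewrite this workshop description to be clearer and more compelling: ...",
        }
    ],
)

# Print the model's text response
print(message.content[0].text)
```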
So, what does this new model mean for you?
I got the scoop from Marketing AI Institute founder/CEO Paul Roetzer on Episode 103 of The Artificial Intelligence Show.
Roetzer has been experimenting with Claude 3.5 Sonnet, and his initial impressions are overwhelmingly positive.
"It's been very impressive in the early tests I've done," says Roetzer.
He shared a recent experience where he used the model to improve descriptions for the workshops he regularly teaches to Marketing AI Institute customers. The task, which would typically have taken him 2-3 hours of focused work, was completed in just 20 minutes with Claude's help.
Not to mention, he says it’s simply more competent at the task than he is.
"It's just better than me at this," Roetzer admits. "This is a task where Claude is now superhuman compared to me, and I've been writing these descriptions for 24 years."
The value proposition is clear
Roetzer's experience highlights the immense value these AI tools can provide when used effectively.
"I got 2 to 3x the value of my annual subscription for Claude—$240 a year—to do a single task in 20 minutes," he explains. "It demonstrates the value that can be created with these tools when you know how to use them or you have the right use cases for them."
Along with Claude 3.5 Sonnet, Anthropic also announced new features that indicate the company is deepening its focus on enterprise applications, including Artifacts, which lets users view, edit, and build on Claude-generated content (like code snippets, documents, and designs) in a dedicated window alongside their conversation.
"They're definitely moving toward the enterprise play," Roetzer notes. "That's the thing that just seems really obvious here—they're continually expanding on the enterprise components."
One of the most intriguing aspects of Claude 3.5 Sonnet is Anthropic's approach to "character training." The company is intentionally tuning its models to respond in specific ways, beyond just avoiding harmful outputs.
Said Anthropic in a recent post:
"Companies developing AI models generally train them to avoid saying harmful things and to avoid assisting with harmful tasks. The goal of this is to train models to behave in ways that are "harmless". But when we think of the character of those we find genuinely admirable, we don’t just think of harm avoidance. We think about those who are curious about the world, who strive to tell the truth without being unkind, and who are able to see many sides of an issue without becoming overconfident or overly cautious in their views. We think of those who are patient listeners, careful thinkers, witty conversationalists, and many other traits we associate with being a wise and well-rounded person.
AI models are not, of course, people. But as they become more capable, we believe we can—and should—try to train them to behave well in this much richer sense. Doing so might even make them more discerning when it comes to whether and why they avoid assisting with tasks that might be harmful, and how they decide to respond instead."
Roetzer believes this character training will result in a noticeable difference in how users interact with the model.
"In 3.5, you're going to start to feel a little bit of a difference in the character of the model because they're very intentionally training it to be a type of character, not a person," he explains.
Anthropic isn't stopping at improving its models' intelligence and speed. The company says it's also developing new modalities and features to support more business use cases, including integrations with enterprise applications and a Memory feature that will let Claude remember users' preferences and interaction history.
These moves mirror what other major AI players are doing, says Roetzer, as companies like Anthropic and OpenAI turn their chat interfaces into truly collaborative, intelligent workspaces.
"They're all kind of working toward the same direction here for sure.”