

AI Is Evolving From Thinking Fast to Thinking Slow: Why That Changes Everything



Sequoia Capital, one of tech's most influential venture capital firms, just dropped a major analysis of where generative AI is headed. And their conclusion is clear:

We're entering a new era of AI that could fundamentally change how these systems work—and what they're capable of.

What do you need to know about what’s coming?

I got the answer from Marketing AI Institute founder and CEO Paul Roetzer on Episode 120 of The Artificial Intelligence Show.

The Evolution of AI Thinking

In the analysis, Sequoia partners Sonia Huang and Pat Grady observe a crucial shift:

“Two years into the Generative AI revolution, research is progressing in the field from ‘thinking fast’—rapid-fire pre-trained responses—to ‘thinking slow’—reasoning at inference time. This evolution is unlocking a new cohort of agentic applications.”

This is about the difference between ‘System 1’ and ‘System 2’ thinking, says Roetzer. He offers the following analogy: 

System 1 thinking is like answering the question “What is the capital of Ohio?” (It’s Columbus.) System 1 is quick, factual recall. 

System 2 is like explaining why Columbus became the capital of Ohio, which requires actual reasoning and multiple steps of thought.

The breakthrough being observed by Sequoia is giving AI systems time to “think,” so they can engage in System 2 thinking.

“The basic premise is that when we give the machine time to think, it seems to be able to do much more complex things in math, biology, business strategy, etc.,” says Roetzer.

That, in turn, unlocks completely new AI capabilities. And, Sequoia argues, it is giving rise to a new scaling law: 

The more inference time compute that is given to a model, the better it can reason.

The Rise of the “Wrapper” Companies

Sequoia's analysis also points out that the foundation layer of generative AI is stabilizing around major players like Microsoft/OpenAI, Google/DeepMind, Meta, and Anthropic. The firm's earlier prediction that a single model company would dominate has not come true. Instead, we’re seeing a pattern where these companies catch up to one another every 3-6 months.

Also contrary to earlier predictions, Sequoia sees massive value in companies that build specialized applications on top of foundation models—what they call "wrappers."

These companies:

  • Focus on specific domains (legal, customer service, marketing, etc.)
  • Leverage domain expertise to create specialized assistants
  • Build valuable intellectual property despite not owning the underlying models

“Sequoia is saying that wrappers are actually critical,” says Roetzer.

While a handful of frontier model companies push the boundaries of how generally intelligent AI can actually get, there will be a massive need for companies that build tools to effectively apply that intelligence to specific domains.

“It requires domain expertise to build a legal assistant or a customer service assistant or a marketing agency assistant,” says Roetzer. “And that’s actually where the value will accrue in the venture capital world: at the wrapper layer, for people who build these domain-specific things.”

Where We’re Headed

Building on Sequoia’s analysis, Roetzer predicts that we won’t even use the handful of dominant frontier models directly in many cases.

Plenty of less capable models are more than adequate for many tasks. Highly advanced System 2 AI—or even AGI—simply won’t be needed for much of what we’re trying to accomplish.

“The reality is that many of the use cases in business, like helping us write our emails or brainstorm ideas or build a marketing strategy, don’t require a $10 billion frontier model,” he says.

Instead, we’re likely to see 4-5 dominant frontier models that are all approaching AGI—or have reached it—in coming years. 

But the most powerful models will act as “project managers,” orchestrating the symphony of specialized models and agents that work behind the scenes to accomplish what we’re trying to do when we prompt AI.

Instead of us picking the right models for the job, superior AI will do it for us.

“When we go into ChatGPT, instead of having to pick from one of four models, which makes no sense from a user experience standpoint, I’ll just put in my prompt, and the most powerful model figures out which model is best to solve it,” he says.

The implications of Roetzer’s predictions and Sequoia’s analysis are significant:

  • We're moving beyond simple pattern matching to true reasoning capabilities.
  • The focus for model companies is shifting from massive pre-training to scalable inference.
  • As wrappers rise, domain expertise will become increasingly valuable.
  • As users, we will all rely on a symphony of different tools and models to achieve our goals, but the experience will stay simple: the most sophisticated models will choose and manage those tools for us.
