
OpenAI Cofounder: AI Impact Will Be "Monumental, Earth-Shattering"



Ilya Sutskever may be one of the lesser-known OpenAI cofounders outside of tech circles, but his contributions to the advancement of AI over the last decade are huge. And he just gave a rare interview to MIT Technology Review.

In this wide-ranging discussion, he makes some bold statements about the sheer power of AI today and in the near future, and he speaks of artificial general intelligence (AGI) as a near-term inevitability.

Because of that belief, he’s now working on “superalignment,” or making sure that humans remain in control when AI achieves “superintelligence” (i.e., AI that can outsmart us at every cognitive task).

When asked about the risks and rewards he sees coming from AI down the line, he says at one point: “It’s going to be monumental, earth-shattering. There will be a before and an after.”

Why It Matters

This is about understanding the people who build AI. It’s important to have a sense of what the key builders in AI believe, because it gives you clues as to where the technology is going. People like Sutskever, though less well-known, play a huge role in the direction of AI in the near future.

Connecting the Dots

On Episode 70 of The Marketing AI Show, Marketing AI Institute founder and CEO Paul Roetzer gave me a glimpse into how people like Sutskever think about AI.

  1. This is all about AGI. People like Sutskever believe there’s a significant possibility we can create artificial general intelligence (AGI), or AI that is smarter than humans across many tasks, and that it could happen in the relatively near future.
  2. This isn’t as crazy as it sounds. You can disagree with the premise of AGI, and many researchers do. But you also have to accept that Sutskever isn’t an outlier. Many big players in AI believe we are approaching AGI, and that the world will be split into before and after we develop AGI.
  3. The big change here came with ChatGPT. The AGI conversation was barely happening a year ago; it was a fringe belief among researchers. But Sutskever says in the interview that ChatGPT changed all that. It was a surprise hit. He says OpenAI’s expectations couldn’t have been lower for the product. But, despite its flaws, it felt so much like magic that it “allowed machine learning researchers to dream,” he told MIT Technology Review. It suddenly became much more realistic to imagine a super-smart AI that could do many things well.
  4. And people like Sutskever don’t see AGI as bad. They’re not losing sleep about the Terminator. They believe the development of AGI leads to an amazing future for humanity. “I’ve talked with enough people in the research community who share that vision, that if we can do this, we can create an amazing, abundant future for everyone,” says Roetzer.

What to Do About It

Worrying about AGI probably isn’t high on your to-do list, even if you wanted it to be.

But that’s not why this subject is important. It’s important because you can develop an even stronger competitive advantage with AI by taking a few minutes to get inside the heads of the people building it.

Even if you think their beliefs are out there, you can be better-versed in the longer-term implications of the technology by taking them seriously.
