
Sam Altman, Other AI Leaders (Though Not All) Join Homeland Security AI Board


OpenAI's Sam Altman and other leaders are joining an AI safety board run by the US Department of Homeland Security.

The board is called the AI Safety and Security Board. Nearly two dozen tech, business, and political leaders are on the board. And their job is to help the Department of Homeland Security safely deploy AI within critical national infrastructure.

Because of that mission, the board brings together a diverse group of people.

In addition to Altman, the board includes Microsoft's Satya Nadella, Nvidia's Jensen Huang, Anthropic's Dario Amodei, and Alphabet's Sundar Pichai.

Board members also include the CEO of Northrop Grumman, the mayor of Seattle, the governor of Maryland, and renowned AI researcher Fei-Fei Li.

What do you need to know about this new AI development?

I got the answer from Marketing AI Institute founder / CEO Paul Roetzer on Episode 95 of The Artificial Intelligence Show.

Notable omissions from the AI safety board

Just as notable as who's on the board is who isn't, says Roetzer.

"The obvious thing is Meta's not on there. So no Zuckerberg, no [Meta Chief AI Scientist] Yann LeCun," he says.

Elon Musk, who runs his own AI company, is also absent.

Generally, says Roetzer, the board seems to lack representation from open source AI leaders. Meta just released its open source Llama 3 model. Musk has been vocal about the need for open source AI.

"A lot of the people on here from the AI perspective are not the big ones pushing for open source acceleration," notes Roetzer.

Infrastructure is "a problem" and "an attack vector"

The board is also drawing attention for including members who aren't directly involved in AI.

However, says Roetzer, many of those picks make sense, given the board's focus on critical infrastructure.

"It makes perfect sense that someone from a petroleum company, someone from an airline company, you would want diversity," says Roetzer. 

"They don't have to be AI experts to be able to explain how transportation in the United States works or how the energy grid works."

That doesn't mean open source advocates should have been left out. But Homeland Security's focus here is likely on who can best help with infrastructure-related needs.

Roetzer emphasizes the importance of these conversations, noting that infrastructure is a major problem and potential attack vector in the US, where much of the national infrastructure is outdated.

"In many cases, the infrastructure is 70 years old or more, and that's a problem and it's an attack vector," he says.
