
Sam Altman, Other AI Leaders (Though Not All) Join Homeland Security AI Board


OpenAI's Sam Altman and other leaders are joining an AI safety board run by the US Department of Homeland Security.

The board is called the AI Safety and Security Board. Nearly two dozen tech, business, and political leaders are on the board. And their job is to help the Department of Homeland Security safely deploy AI within critical national infrastructure.

Because of that mission, the board brings together a diverse group of people.

In addition to Altman, Microsoft's Satya Nadella, Nvidia's Jensen Huang, Anthropic's Dario Amodei, and Alphabet's Sundar Pichai are on it.

Board members also include the CEO of Northrop Grumman, the mayor of Seattle, the governor of Maryland, and renowned AI researcher Fei-Fei Li.

What do you need to know about this new AI development?

I got the answer from Marketing AI Institute founder / CEO Paul Roetzer on Episode 95 of The Artificial Intelligence Show.

Notable omissions from the AI safety board

The people not on the board are notable, says Roetzer.

"The obvious thing is Meta's not on there. So no Zuckerberg, no [Meta Chief AI Scientist] Yann LeCun," he says.

Elon Musk, who runs his own AI company, is also absent.

Generally, says Roetzer, the board seems to lack representation from open source AI leaders. Meta just released its open source Llama 3 model. Musk has been vocal about the need for open source AI.

"A lot of the people on here from the AI perspective are not the big ones pushing for open source acceleration," notes Roetzer.

Infrastructure is "a problem" and "attack vector"

The board is also drawing attention for its members who aren't directly involved in AI.

However, says Roetzer, it's important to note that at least some of the board members make sense, given the focus on critical infrastructure.

"It makes perfect sense that someone from a petroleum company, someone from an airline company, you would want diversity," says Roetzer. 

"They don't have to be AI experts to be able to explain how transportation in the United States works or how the energy grid works."

That doesn't mean open source advocates should be omitted. But it's likely that Homeland Security's focus here is on who can most help with infrastructure-related needs.

Roetzer emphasizes the importance of these conversations, noting that infrastructure is a major problem and potential attack vector in the US, where much of the national infrastructure is outdated.

"In many cases, the infrastructure is 70 years old or more, and that's a problem and it's an attack vector," he says.
