CEOs of OpenAI, Google and Microsoft to join other tech leaders on federal AI safety panel



Washington CNN — 

The US government has asked leading artificial intelligence companies for advice on how to use the technology they are developing to defend airlines, utilities and other critical infrastructure, particularly from AI-powered attacks.

The Department of Homeland Security said Friday that the panel it is creating will include CEOs from some of the world's largest companies and industries.

The list includes Google chief executive Sundar Pichai, Microsoft chief executive Satya Nadella and OpenAI chief executive Sam Altman, but also the heads of defense contractors such as Northrop Grumman and air carrier Delta Air Lines.

The move reflects the US government's close collaboration with the private sector as it scrambles to manage both the risks and benefits of AI in the absence of a targeted national AI law.

The gathering of experts will make recommendations to telecommunications companies, pipeline operators, electric utilities and other sectors about how they can "responsibly" use AI, DHS said. The group will also help prepare those sectors for "AI-related disruptions."

"Artificial intelligence is a transformative technology that can advance our national interests in unprecedented ways," said DHS Secretary Alejandro Mayorkas in a release. "At the same time, it presents real risks — risks that we can mitigate by adopting best practices and taking other studied, concrete actions."

Among the panel's other members are the CEOs of technology providers such as Amazon Web Services, IBM and Cisco; chipmakers such as AMD; AI model developers such as Anthropic; and civil rights groups such as the Lawyers' Committee for Civil Rights Under Law.

It also includes federal, state and local government officials, as well as leading AI academics such as Fei-Fei Li, co-director of Stanford University's Human-Centered Artificial Intelligence Institute.

The 22-member AI Safety and Security Board is an outgrowth of a 2023 executive order signed by President Joe Biden, which called for a cross-industry body to make "recommendations for improving security, resilience, and incident response related to AI usage in critical infrastructure."

That same executive order also led this year to government-wide rules governing how federal agencies can buy and use AI in their own systems. The US government already uses machine learning or artificial intelligence for more than 200 distinct purposes, such as monitoring volcano activity, tracking wildfires and identifying wildlife from satellite imagery.

Meanwhile, deepfake audio and video, which use AI to push fake content, have emerged as a key concern for US officials trying to protect the 2024 US election from rampant mis- and disinformation. A fake robocall in January imitating Biden's voice urged Democrats not to vote in New Hampshire's primary, sounding alarms among US officials focused on election security. A New Orleans magician told CNN that a Democratic political consultant hired him to make the robocall. But there is concern that foreign adversaries like Russia, China or Iran could exploit the same technology.

"It's a risk that's real," Mayorkas told reporters on Friday while discussing the AI advisory board. "We're seeing hostile nation-states engaged and we work to counter their efforts to unduly influence our elections."

