White House Unveils Initiatives to Reduce Risks of AI

The White House on Thursday will host its first gathering of chief executives of companies building artificial intelligence since a boom in A.I.-powered chatbots has prompted growing calls to regulate the technology.

Vice President Kamala Harris and other administration officials are scheduled to meet with the leaders of Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and Anthropic, an A.I. start-up, to discuss the technology.

The White House planned to impress upon the companies that they had a responsibility to address the risks of new A.I. developments. “We aim to have a frank discussion of the risks we each see in current and near-term A.I. development, actions to mitigate those risks and other ways we can work together to ensure the American people benefit from advances in A.I. while being protected from its harms,” said Arati Prabhakar, the director of the White House Office of Science and Technology Policy, in an invitation to the meeting obtained by The New York Times.

Hours before the meeting, the White House announced that the National Science Foundation plans to spend $140 million on new research centers devoted to A.I. The administration also pledged to release draft guidelines for government agencies to ensure that their use of A.I. safeguards “the American people’s rights and safety,” adding that several A.I. companies had agreed to make their products available for scrutiny in August at a cybersecurity conference.

The announcements build on earlier efforts by the administration to place guardrails on A.I. Last year, the White House released what it called a blueprint for an A.I. bill of rights, which said that automated systems should protect users’ data privacy, shield them from discriminatory outcomes and make clear why certain actions were taken. In January, the Commerce Department also released a framework for reducing risk in A.I. development, which had been in the works for years.

The introduction of chatbots like ChatGPT and Google’s Bard has put huge pressure on governments to act. The European Union, which had already been negotiating regulations on A.I., has faced new demands to regulate a broader swath of A.I., instead of just systems seen as inherently high risk.

In the United States, members of Congress, including Senator Chuck Schumer of New York, the majority leader, have moved to draft or propose legislation to regulate A.I. But concrete steps to rein in the technology in the country may be more likely to come first from law enforcement agencies in Washington.

A group of government agencies pledged in April to “monitor the development and use of automated systems and promote responsible innovation,” while punishing violations of the law committed using the technology.

In a guest essay in The Times on Wednesday, Lina Khan, the chair of the Federal Trade Commission, said the nation was at a “key decision point” with A.I. She likened the technology’s recent developments to the birth of tech giants like Google and Facebook, and she warned that, without proper regulation, the technology could entrench the power of the biggest tech companies and give scammers a potent tool.

“As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself,” she said.
