AI could change the 2024 presidential election: Should voters worry?
The Republican National Committee fired off an attack ad as soon as President Joe Biden announced his reelection campaign last week.
The 30-second spot, which used fake visuals of China invading Taiwan, financial markets crashing and immigrants overrunning the border, sported a disclaimer: “Built entirely with AI imagery.”
The ad – which the GOP called “an AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024” – is a sign of what’s to come in the 2024 presidential election, experts say.
2024 promises to be the first AI election cycle, with artificial intelligence potentially playing a pivotal role at the ballot box. And that’s raising concerns.
Even as the technology grows more sophisticated and powerful, spreading into all aspects of American life, there are still very few rules governing its use.
Spurred by the Biden attack ad, Rep. Yvette D. Clarke, D-N.Y., introduced a bill Tuesday that would require that political ads disclose the use of AI-generated imagery.
“The upcoming 2024 election cycle will be the first time in U.S. history where AI-generated content will be used in political ads by campaigns, parties, and Super PACs,” Clarke said in a statement. “If AI-generated content can manipulate and deceive people on a large scale, it can have devastating consequences for our national security and election security.”
Political campaigns are pressure-testing AI for everything from fundraising emails to get-out-the-vote chatbots, Nathan Sanders, a data scientist and an affiliate at the Berkman Klein Center at Harvard University, and Bruce Schneier, a fellow and lecturer at the Harvard Kennedy School, wrote in The Atlantic.
“Previous technological revolutions – railroad, radio, television, and the World Wide Web – transformed how candidates connect to their constituents, and we should expect the same from generative AI,” Sanders and Schneier wrote.
Best-case scenario: AI gets voters more engaged and decreases polarization, they said. Worst-case scenario: AI is used to mislead or manipulate voters.
“AI will enable instant responses and more precise voter targeting,” said Darrell West, a senior fellow at the Center for Technology Innovation at the Brookings Institution.
What’s setting off alarm bells: the potential to use AI for dirty tricks, such as “deepfakes” – videos and images digitally created or altered with AI or machine learning to make it appear as if people have said or done things they have not.
“This will be the first AI election that draws on digital tools that can generate videos, pictures, audiotapes and many other things,” West said. “There is a risk that disinformation will expand and expose voters to false material that will look authentic. Mass manipulation is dangerous for democracy because it could distort voter decision-making. Right now, there is no required disclosure so voters may not even know that the videos are fake.”
What’s more, concerns are growing about bad actors using AI to meddle in the election.
“Before AI could take all your jobs, it could certainly do a lot of damage in the hands of spammers, people who want to manipulate elections,” Microsoft chief economist Michael Schwarz said at a World Economic Forum event Wednesday.
Top executives from AI firm Anthropic, as well as Microsoft, Google and OpenAI, will meet with Vice President Kamala Harris Thursday to discuss AI development, the White House told CNBC.