U.S. Lawmakers Grill Tech Executives on Election Disinformation Preparedness

Google parent Alphabet’s global affairs president Kent Walker and Microsoft President Brad Smith greet U.S. Senator John Cornyn (R-TX) before a Senate Intelligence Committee hearing on election threats, on Capitol Hill in Washington, U.S., September 18, 2024. REUTERS/Anna Rose Layden

By Katie Paul
NEW YORK (Reuters) – U.S. lawmakers questioned technology executives on Wednesday about their strategies for combating foreign disinformation threats in the lead-up to the November elections. Both senators and tech leaders pinpointed the 48 hours on either side of Election Day as the most vulnerable period.
“There is a potential moment of peril ahead. Today we are 48 days away from the election… the most perilous moment will come, I think, 48 hours before the election,” testified Microsoft President Brad Smith during a hearing convened by the U.S. Senate Intelligence Committee.
Senator Mark Warner, the committee chair, concurred with Smith but added that the 48 hours after polls close on November 5 could be “equally if not more significant,” particularly if the election outcome is close.
Executives from Google and Meta, which owns Facebook, Instagram, and WhatsApp, also testified. Elon Musk’s X was invited but declined to participate, citing the resignation of its invited witness, former global affairs head Nick Pickles. TikTok was not invited to attend, according to a company spokesperson.
To underscore his concerns about the critical moments before voting, Smith cited an incident from Slovakia’s 2023 election, in which a fake voice recording of a party leader discussing vote rigging circulated just before polls opened.
Warner and other senators pointed to tactics exposed during a recent U.S. crackdown on alleged Russian influence operations, which included fake websites designed to mimic real U.S. news organizations such as Fox News and the Washington Post.
“How does this get through? How do we know how extensive this is?” Warner pressed the executives. He requested that the companies provide data to the committee by next week, detailing how many Americans viewed the misleading content and the volume of advertisements promoting it.
In response to rising concerns over new generative artificial intelligence technologies that can easily create realistic fake images, audio, and video, tech companies have largely adopted labeling and watermarking measures.
When asked how they would respond if a deepfake of a political candidate emerged right before the elections, Smith and Meta’s President of Global Affairs Nick Clegg affirmed that their companies would label the content. Clegg added that Meta might also limit the content’s distribution.
(Reporting by Katie Paul; editing by Diane Craft)