Internet companies unprepared for misinformation ahead of election, study suggests

A recent study by Mozilla, conducted with the Finland-based disinformation research firm CheckFirst, found that tech giants including Apple, Google, Meta, TikTok, and X maintain searchable public ad libraries but fail to provide meaningful data through them.

This data is crucial for users, journalists, and advocates to monitor scams and disinformation effectively, especially as the 2024 presidential election approaches.

The report, released on April 16, states that major tech platforms, which hold data for millions of users, are failing to provide adequate ad transparency tools. This deficiency is particularly concerning given the significant threat misinformation poses to the public.

The European Union’s Digital Services Act mandates that large tech platforms maintain ad libraries and other resources like application programming interfaces (APIs). These tools are designed to enhance ad transparency and are used by researchers and the public.
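For illustration, the sketch below shows how a researcher might query one of these ad library APIs programmatically. It is a minimal example in Python using the requests library, modeled loosely on Meta's Ad Library API; the endpoint version, parameter names, and fields are assumptions for illustration only and should be verified against the platform's current documentation.

```python
import requests

# Minimal sketch of querying a public ad library API.
# The endpoint, parameters, and fields below are modeled on Meta's
# Ad Library API as an illustrative assumption; verify them against
# the platform's current documentation before use.
AD_ARCHIVE_URL = "https://graph.facebook.com/v19.0/ads_archive"


def fetch_political_ads(access_token: str, search_terms: str, country: str = "US"):
    """Return one page of political/issue ads matching the search terms."""
    params = {
        "access_token": access_token,
        "search_terms": search_terms,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": country,
        # Fields requested: who ran the ad, who paid for it, spend, and start date.
        "fields": "page_name,bylines,spend,ad_delivery_start_time",
        "limit": 25,
    }
    response = requests.get(AD_ARCHIVE_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("data", [])


if __name__ == "__main__":
    for ad in fetch_political_ads("YOUR_ACCESS_TOKEN", "election"):
        print(ad.get("page_name"), "-", ad.get("bylines"))
```

Whether such a query returns useful results depends on the completeness of the underlying ad repository, which is precisely what the Mozilla and CheckFirst study set out to test.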

To evaluate ad transparency across various companies, Mozilla conducted tests on each platform and found that none achieved the "ready for action" designation. 

The analysis reviewed a dozen ad transparency tools that tech platforms have built for people who monitor advertising. These included tools on X, TikTok, LinkedIn, and Google Search, as well as sites operated by Meta and Apple.

The organizations examined these platforms' ad repositories using criteria from the European Union's Digital Services Act (DSA) along with Mozilla's in-house ad library guidelines. They focused on factors such as public availability, advertisement content, details about who paid for the ads, and how users are targeted.

The findings showed that the platforms varied in their effectiveness. Some lacked essential data and functionality, while others exhibited significant gaps. According to the study, a few met only the "bare minimum" standards.

"Ad transparency tools are essential for platform accountability — a first line of defense, like smoke detectors," said Mozilla EU advocacy lead Claire Pershan. "But our research shows most of the world’s largest platforms are not offering up functionally useful ad repositories. The current batch of tools exist, yes — but in some cases, that’s about all that can be said about them."

X, formerly Twitter, stood out as the poorest performer in terms of data accessibility and search capabilities.

"X’s transparency tools are an utter disappointment," explained Pershan. 

What is ad transparency? 

Ad transparency refers to the degree to which companies reveal information about the advertisements they serve, particularly on digital platforms. This concept is essential in ensuring accountability and fairness in advertising, enabling consumers and regulators to see who is behind an ad, whom it targets, and how much is being spent. 

Television ads are typically transparent due to stringent regulations that require clear disclosure of who is funding them. Viewers can easily see who is behind these ads, minimizing the risk of misinformation. 

However, digital ads lack similar transparency. Although some platforms have introduced measures like ad libraries, enforcement is inconsistent and sometimes incomplete. 

Digital ads can be targeted very specifically and altered rapidly, making it harder to trace their origins. This lack of clarity and the ability to micro-target audiences contribute to significant concerns about misinformation, as bad actors can use these ads to influence public opinion without sufficient oversight.

Why does ad transparency matter? 

The surge in artificial intelligence and AI-generated content has significantly exacerbated concerns about election-related misinformation. 

According to data from the machine learning firm Clarity, deepfake production has surged by 900% year over year.

Concern over this trend dates back to the 2016 U.S. presidential election, when a major scandal emerged over Russian interference carried out through automated bots and social media accounts.

These "Russian bots" were part of a coordinated effort to influence public opinion and electoral outcomes by spreading misinformation, amplifying divisive social and political messages, and sowing discord among the electorate. 

Investigations revealed that these operations were linked to the Internet Research Agency, a Kremlin-backed entity. 

The bots impersonated Americans to post and share politically charged content, targeting specific demographic groups to exploit societal tensions. This cyber interference led to widespread scrutiny of social media platforms' roles in political processes, prompting calls for stricter regulations and measures to safeguard the integrity of elections.