Google’s 2024 Ads Safety Report, released on April 16, 2025, reveals a sharp escalation in its efforts to combat AI‑driven ad fraud. The company blocked 5.1 billion ads and suspended 39.2 million advertiser accounts in 2024, citing a surge in scams leveraging generative AI and deepfake technologies. These figures represent an unprecedented level of enforcement, underscoring the magnitude of the threat posed by automated content abuse across Google’s ad platforms.
According to the report, AI‑generated impersonation scams have become especially prevalent. Bad actors are using large language models to craft realistic ad copy and generative video tools to create deepfake impersonations of celebrities and public figures. These campaigns often promote fraudulent investment schemes or cryptocurrency scams, exploiting users’ trust in familiar faces. In response, Google’s safety systems flagged and removed 415 million scam‑related ads, while permanently suspending over 700,000 accounts tied to impersonation violations.
To identify emerging threats more effectively, Google expanded its use of artificial intelligence across multiple fronts. In 2024 alone, the company introduced more than 50 enhancements to its machine‑learning models, enabling real‑time detection of policy breaches before ads were served. Signals such as business impersonation indicators and illegitimate payment details were integrated into these models, allowing Google to preemptively block harmful ads. Over 90 percent of suspended accounts were disabled by AI before they could launch any campaigns.
Beyond AI enhancements, Google implemented more than 30 policy updates covering ads and publisher standards. These updates tightened rules on misrepresentation, prohibited content types, and the use of AI‑generated media, especially in political and financial ads. Advertiser identity verification was expanded to more than 200 countries and territories, resulting in 8,900 new election advertisers being vetted and 10.7 million election‑related ads removed for non‑compliance.
The report breaks down enforcement across violation categories. Financial services ads accounted for 193.7 million removals, followed by 146 million for gambling and games, 122.5 million for adult content, and 104.8 million for healthcare claims. Trademark misuse and ad network abuse together represented more than 1.3 billion of the blocked ads. Publisher pages also faced scrutiny: ads were blocked or restricted on 1.3 billion pages for hosting prohibited content such as malware, tobacco promotions, and shock content.
Year‑over‑year comparisons highlight the scale of the crackdown. In 2023, Google suspended 12.7 million advertiser accounts, removed 5.5 billion bad ads, and restricted 6.9 billion more. The leap to 39.2 million suspensions in 2024 marks a more than threefold increase (roughly 3.1×) in account enforcement, even as ad removals held steady at 5.1 billion, illustrating how generative AI has accelerated both the creation of illicit ads and Google’s countermeasures.
Alex Rodriguez, General Manager for Ads Safety at Google, emphasized the human‑in‑the‑loop approach: “While our AI models have driven significant gains in speed and scale, expert analysts remain essential to refine policies and investigate complex scams.” Rodriguez noted that a dedicated team of over 100 specialists from Ads Safety, Trust & Safety, and DeepMind collaborated on developing countermeasures against deepfake scams, leading to a 90 percent drop in user‑reported impersonation ads.
Geographically, the United States topped the list with 39.2 million account suspensions and 1.8 billion ad removals, followed by India with 2.9 million suspensions and 247.4 million ads removed. Other regions saw similar patterns, as localized generative AI tools facilitated sophisticated scams in multiple languages and markets.
For advertisers, the report serves as a warning: compliance is non‑negotiable. Google recommends staying abreast of policy changes, maintaining transparent business information, auditing creatives for misrepresentation, and promptly appealing suspensions when errors occur. Investing in compliance review tools and leveraging Google’s Policy Center can help legitimate advertisers avoid collateral enforcement actions.
As AI‑powered scams continue to evolve, Google’s 2024 Ads Safety Report illustrates an ongoing arms race between malicious actors and platform defenses. While the company’s expanded AI systems and policy framework have significantly raised the barrier for bad actors, the proliferation of generative tools means vigilance and innovation will remain critical in safeguarding the digital advertising ecosystem.