
Google’s 2023 Ads Safety Report outlines AI trends, the power of LLMs for ad safety and more

According to Google’s blog, last year more than 90% of publisher page-level enforcement started with the use of machine learning models, including the latest LLMs

BestMediaInfo Bureau

Delhi: Google has released its annual Ads Safety Report to share the progress made in enforcing its advertiser and publisher policies and to hold itself accountable for maintaining a healthy ad-supported internet.

The key trend in 2023 was the impact of generative AI, and Google aims to outline the work being done to address these challenges head-on. The company is embracing the technology, specifically Large Language Models (LLMs).

LLMs rapidly review and interpret content at a high volume, while also capturing important nuances within that content. For example, consider Google’s policy against Unreliable Financial Claims, which covers ads promoting get-rich-quick schemes. The bad actors behind these types of ads have grown more sophisticated. They adjust their tactics and tailor ads around new financial services or products, such as investment advice or digital currencies, to scam users.

LLMs are more capable of quickly recognising new trends in financial services, identifying the patterns of bad actors who are abusing those trends and distinguishing a legitimate business from a get-rich-quick scam.

Duncan Lennox, VP and GM of Ads Privacy and Safety, said, “Google has just begun to leverage the power of LLMs for ads safety. Gemini, launched publicly last year, is Google's most capable AI model. We're excited to have started bringing its sophisticated reasoning capabilities into our ads safety and enforcement efforts.”

In November, Google launched a Limited Ads Serving policy, aimed at protecting users by limiting the reach of advertisers with whom it is less familiar. Under this policy, Google implemented a “get-to-know-you” period for advertisers who don’t yet have an established track record of good behaviour, during which impressions for their ads might be limited in certain circumstances, for example, when there is an unclear relationship between the advertiser and a brand they are referencing.

It aims to ensure well-intentioned advertisers are able to build up trust with users, while limiting the reach of bad actors and reducing the risk of scams and misleading ads.

Toward the end of 2023 and into 2024, Google faced a targeted campaign of ads featuring the likeness of public figures to scam users, often through the use of deepfakes. A dedicated team was created to respond immediately. They pinpointed patterns in the bad actors’ behaviour, trained automated enforcement models to detect similar ads and began removing them at scale. Google also updated its misrepresentation policy to better enable it to rapidly suspend the accounts of bad actors.

Overall, Google blocked or removed 206.5 million advertisements for violating the misrepresentation policy, which includes many scam tactics, and 273.4 million advertisements for violating the financial services policy.

Google also blocked or removed over 1 billion advertisements for violating their policy against abusing the ad network, which includes promoting malware.

Google aims to continue making significant investments in detection technology and partnering with organisations like the Global Anti-Scam Alliance and Stop Scams UK to facilitate information sharing and protect consumers worldwide.

Google has long-standing identity verification and transparency requirements for election advertisers, as well as restrictions on how these advertisers can target their election ads. All election ads must also include a “paid for by” disclosure and are compiled in its publicly available transparency report. In 2023, Google verified more than 5,000 new election advertisers and removed more than 7.3 million election ads that came from advertisers who did not complete verification.

Last year, Google was the first tech company to launch a new disclosure requirement for election ads containing synthetic content. Additionally, it continued to enforce policies against ads that promote demonstrably false election claims that could undermine trust or participation in democratic processes.

Lennox remarked, “In 2023, we blocked or removed over 5.5 billion ads, slightly up from the prior year, and 12.7 million advertiser accounts, nearly double from the previous year. Similarly, we work to protect advertisers and people by removing our ads from publisher pages and sites that violate our policies, such as sexually explicit content or dangerous products. In 2023, we blocked or restricted ads from serving on more than 2.1 billion publisher pages, up slightly from 2022. We are also getting better at tackling pervasive or egregious violations. We took broader site-level enforcement action on more than 395,000 publisher sites, up markedly from 2022.”

According to Google’s blog, last year more than 90% of publisher page-level enforcement started with the use of machine learning models, including the latest LLMs.

Google is continuously developing new policies, strengthening enforcement systems, deepening cross-industry collaboration and offering more control to people, publishers and advertisers.

In 2023, for example, Google launched the Ads Transparency Center, a searchable hub of all ads from verified advertisers, which helps people quickly and easily learn more about the ads they see on Search, YouTube and Display. They also updated their suitability controls to make it simpler and quicker for advertisers to exclude topics that they wish to avoid across YouTube and Display inventory. Overall, they made 31 updates to Ads and Publisher policies.
