Amid the ongoing discussion over how online platforms can be, and have been, misused by bad actors to manipulate and influence opinion, Google has released a new update on its efforts to address such concerns, using advanced technology to detect violations of, and enforce, its revised rules around ad use.
As per Google:
"Google has a crucial stake in a healthy and sustainable digital advertising ecosystem - something we've worked to enable for nearly 20 years. Every day, we invest significant team hours and technological resources in protecting the users, advertisers and publishers that make the internet so useful."
And last year, those efforts ramped up significantly - according to Google's latest report, the platform removed more than 2.3 billion ads in 2018 due to violations of both new and existing policies, which equates to some six million Google ads being struck down every single day.
In 2018, those removals included:
- Nearly 207,000 ads for ticket resellers
- Over 531,000 ads for bail bonds
- Around 58.8 million phishing ads
Google has strengthened its policies around each of these specific elements, after finding that they were being used in manipulative and dishonest ways.
"For example, we created a new policy banning ads from for-profit bail bond providers because we saw evidence that this sector was taking advantage of vulnerable communities. Similarly, when we saw a rise in ads promoting deceptive experiences to users seeking addiction treatment services, we consulted with experts and restricted advertising to certified organizations. In all, we introduced 31 new ads policies in 2018 to address abuses in areas including third-party tech support, ticket resellers, cryptocurrency and local services such as garage door repairmen, bail bonds and addiction treatment facilities."
In addition to this, Google has also improved its detection tools, enabling it to remove not just single ads, but also the accounts behind them. In 2018, Google says it terminated close to a million bad advertiser accounts, almost double the number it banned in 2017. Google also launched additional detection classifiers in order to 'better detect "badness" at the page level', which facilitated the removal of ads from nearly 28 million individual pages that violated its publisher policies.
And Google's also ramping up its efforts to remove monetization from accounts which profit from misinformation:
"In 2018, we removed ads from approximately 1.2 million pages, more than 22,000 apps, and nearly 15,000 sites across our ad network for violations of policies directed at misrepresentative, hateful or other low-quality content. More specifically, we removed ads from almost 74,000 pages for violating our “dangerous or derogatory” content policy, and took down approximately 190,000 ads for violating this policy. This policy includes a prohibition on hate speech and protects our users, advertisers and publishers from hateful content across platforms."
Removing the financial incentive behind publishing this type of content is key to limiting its spread. Of course, some of these activities are funded by government-originated groups, which would lessen the relevance of financial impact, but stopping them from reaching more people through targeted ads is still a powerful way to restrict exposure.
As noted, given the ongoing concerns around how digital platforms can be used to spread such content, it's good to see the major players, Google and Facebook, taking increased action to address each element. Given their reach, more needs to be done by these companies to protect vulnerable users and limit misinformation - both Google and Facebook are generating billions in revenue through ads, so additional investment into improving their processes is clearly justified.
The numbers here show that Google is taking action - there's still a long way to go, but it's encouraging to see the impact Google's efforts have had thus far.