Facebook has announced new brand safety controls for video advertisers, including machine learning-based topic exclusions and 'publisher allow lists' to provide better control over where campaigns are displayed.
First up is Topic Exclusions, which gives video advertisers a new way to control which video posts their ads can appear in, based on the content of the video.
As explained by Facebook:
"Topic exclusion will offer in-stream advertisers a more granular exclusionary tool that allows for content-level suitability. Powered by machine learning technology, topic exclusion is designed to allow in-stream advertisers to choose content-level exclusions from four different topics: news, politics, gaming, and religious and spiritual content."
As you can see in the above screenshot, advertisers will be able to prevent their ads from being shown in video uploads related to these content areas, though the same limitations won't apply to live-streams.
Facebook hasn't provided technical detail on how the system determines which videos fall into each category (beyond the above note that it's 'powered by machine learning'), but the assumption would be that it also draws on each uploader's history, their Page classification, and the comments and engagement on each video to determine its focus.
The other addition is "publisher allow lists", which will give advertisers the capacity to select a specific list of publishers alongside whose content they want their ads shown.
That will enable advertisers to run their campaigns exclusively on the content from these publishers.
Brand safety controls came into focus back in 2017, after YouTube lost millions in ad revenue when advertisers started pulling their ads because they were appearing alongside extremist and hate speech content. Of course, the ideal answer would be for YouTube and other platforms to remove extremist and hate speech content outright, but because different types of content carry different levels of brand-association risk, all digital platforms have since been working to add new placement control options to prevent unwanted connections.
Worth noting too that both Facebook and YouTube have been working to take more direct action against such content, but brand safety controls like these put more control in advertisers' hands, enabling them to protect themselves rather than relying solely on the platforms and their tools.
In addition to this, Facebook has also recently strengthened its third-party auditing credentials, being named to the inaugural group of Trustworthy Accountability Group (TAG) Brand Safety Certified companies.
"We've also been working with Global Alliance for Responsible Media (GARM) to align on brand safety standards and definitions, scaling education, common tools and systems, and independent oversight for the industry. We have aligned with GARM on the definitions for the 11 categories including hate speech and acts of aggression that are included in the GARM/4A’s Brand Safety Floor and Suitability Framework."
These broader associations enable digital platforms to better align with accepted benchmarks and practices, and stamp out problematic groups en masse, as opposed to each platform going it alone.
That approach will help to establish new industry standards and enable further action on such concerns, which will ultimately give ad partners more control and assurance.
You can read more about Facebook's latest brand safety updates here.