Do you know who and what your brand adverts could unknowingly support?

Estimated reading time
3 minutes
6th February 2020
Author: Nick Mason
Posted in: Content Distribution & Promotion, Technology

YouTube strikes again. Global brands including Samsung and L’Oréal have been unknowingly advertising on climate-misinformation videos, reigniting concerns over the video platform’s algorithms, which have embroiled multiple brands in controversy in recent years.

Greenpeace supports climate change denial?

It wasn’t just corporate brands whose ads appeared on these misinformation videos. Environmentally conscious groups such as WWF, Greenpeace, and Friends of the Earth were unwittingly driving traffic to climate conspiracy videos, both through YouTube’s recommendation algorithms and through ads running on the videos themselves.

A report by activist group Avaaz found that YouTube has been driving millions of viewers to climate-denial videos via its Up Next feature, suggestions bar, and ads. Crucially, the recommendation algorithm is responsible for the vast majority of what users watch on the site, accounting for about 70% of the total time users spend on the platform.

Some of the videos claimed that there is no evidence CO2 emissions are the dominant factor in climate change. This is despite the overwhelming amount of scientific literature on the subject.


A spokesman for the Conscious Advertising Network wants brands to take more responsibility for where their advertising ends up. He believes the platforms monetize hate on an industrial scale: “In a similar way misinformation and climate denial is now big business, inadvertently funded by brands. Facing into the climate emergency, the question is: has there ever been a more dangerous form of misinformation?”

The growing monetization of hate

This certainly isn’t the first case of brands appearing where they don’t want to be. Back in 2017, Channel 4, L’Oréal, The Guardian, Transport for London, and even the UK government pulled millions of pounds worth of advertisements from YouTube.

An investigation by The Times found that these brands, among others, were unknowingly funding extremist videos including “rape apologists, anti-semites, and banned hate preachers.”


Dan Brooke, the CMO of Channel 4, said: “It is a direct contravention of assurances our media buying agency had received on our behalf from YouTube. We are not satisfied that YouTube is currently a safe environment, therefore we have removed all Channel 4 advertising from the platform with immediate effect.”

A spokeswoman for L’Oréal said the company was “unaware” and “horrified” to discover this.

She added, “It seems some of the YouTube inventory sold to us by Google was incorrectly categorized by them. As a result, our campaign featured on these channels. We are taking immediate action to remedy this issue. We will be working yet more closely with Google to prevent this from happening in the future.”

However, with L’Oréal once again caught in the same controversy, the question is, where does the responsibility lie?

Brand adverts vs YouTube algorithms

The question of who is responsible for ensuring ads don’t appear alongside content a brand doesn’t want to be associated with has once again come to the forefront of the debate.

Keith Weed, current president of the Advertising Association and former CMO of Unilever, blamed the advertising industry’s over-eagerness. He told The Drum that it is firmly the responsibility of brands to “keep a close eye on where their advertising lands and indeed, what their advertising funds.”

A spokeswoman for YouTube said: “Our recommendation systems are not designed to filter or demote videos or channels based on specific perspectives. YouTube has strict ad policies that govern where ads can appear. We give advertisers tools to opt out of content that doesn’t align with their brand.”

How can brands avoid this in the future?

Brands and their agencies have become better at protecting their “brand safety” since the 2017 scandal. However, the sheer volume of content that digital platforms must police means the risk remains.

To combat this, Weed argued that every brand should have a “risk matrix” where an internal media team identifies risk to consumers, society, and their own businesses. He said the “worst thing” marketers can do once threats have been established is ignore them.


Watchdog bodies such as the Advertising Standards Authority can help police misinformation, but brand owners must take responsibility for their own ads, even if they have to pay a premium to do so. If brands are serious about separating themselves from content they don’t align with or support, marketers need to act before the next scandal hits.
