YouTube facing a full-scale advertising boycott

More companies are pulling advertising from YouTube over Google’s inability to ensure ads won’t appear next to hateful and offensive content. The Wall Street Journal reports that YouTube videos centered around racist, homophobic, and anti-Semitic views are still scooping up ads from brands like Coca-Cola, Amazon.com, and Microsoft. That’s despite reports last week that exposed the issue and led to mass advertising boycotts in the UK and now the US, prompting Google to promise companies it would implement better tools and moderation practices.

Following The Wall Street Journal’s findings, PepsiCo, Walmart, Dish, Starbucks, and GM all pulled their advertising, joining a growing list of dozens of companies in Europe and the US since The Times of London first shined a spotlight on the problem. A majority of these companies are pulling advertising both from YouTube and from sites that use Google’s ad exchange technology. That means not only is Google’s video arm taking a hit, but its broader web advertising network is suffering, too, as companies now assume Google is incapable of policing either YouTube videos or third-party websites with its current blend of user flagging, human moderation, and algorithmic detection.

That leaves only targeted search advertising intact: the ads companies bid to place on Google’s search results page when a user types in a certain combination of keywords. Even so, some companies, like FX Networks, have begun pulling all advertising from Google, including search ads, until it resolves the issues at hand, The Wall Street Journal reports.


This has become a rather precarious problem for YouTube and Google’s larger advertising operation, both of which rely on programmatic advertising, which uses algorithms rather than humans to dictate ad placement. For years, YouTube has positioned itself as the destination for any and all video on the internet, with loose restrictions on graphic and offensive content and a tolerance for creators espousing views many traditional broadcasters would classify as hate speech. This has allowed YouTube to balloon in popularity. Even when it does veer into hosting illegal content, like copyrighted material or terrorist propaganda, the site is shielded by federal law from being held legally responsible.

The end result is that YouTube enjoys 400 hours of video uploaded every minute and 1 billion hours of video consumed every day, with around $11 billion in revenue last year. Yet a growing chunk of that video is the type of content advertisers want nothing to do with. And due to the current political climate, it’s become increasingly popular for content creators to make a living off hateful content that panders to bigots and fringe political groups like the alt-right. YouTube is now in the position of being structurally incapable of policing its platform and perhaps culturally hesitant even to do so with more heavy-handed moderation methods.

Earlier this week, Google made a more direct plea to advertisers, promising to give brands more control over and insight into where their ads are placed on YouTube and on Google-partnered third-party sites. The company also promised new tools, like artificial intelligence-powered filtering that would detect offensive language and other content within a video and flag it. Bloomberg reports that Google plans to implement these new tools and changes as soon as this Sunday, according to an internal company memo. Yet it’s unclear just how effective Google’s new tools will be, and what it might take to bring advertisers back.
