Automation, Machine Learning Key to YouTube Clean-Up

Source – lightreading.com

Responding to concerns from advertisers and politicians, YouTube has added new measures, and improved existing ones, to better regulate objectionable content uploaded to its platform. The company detailed the measures in a blog post; chief among them is the use of machine learning and automation to remove objectionable content from the site and to limit access to content that falls into a gray area.

YouTube has been under pressure to clean up hate speech and terrorist-related content on its site. Earlier this year, major global media buyer Havas Media pulled all advertising off Google (Nasdaq: GOOG) and YouTube. Havas is estimated to spend about £175 million ($230 million) a year on behalf of its clients in the UK. The move followed other major advertisers, including the Guardian, the BBC and Transport for London, pulling their advertising. Google was even summoned by government ministers to explain why government advertising was being placed next to extremist content on YouTube.

The Internet giant promised to improve its ad placement and, a few months ago, announced a four-step strategy to combat extremist content: better detection and faster removal driven by machine learning, more experts to identify objectionable content, tougher standards for “borderline” videos that are controversial but don’t violate YouTube’s stated policies, and more counter-terrorism efforts.

The challenge for Google/YouTube is that digital advertising is increasingly sold programmatically, meaning it is traded automatically online. Media companies make advertising slots available via a programmatic system, and advertisers and media buyers bid on these slots. The entire process is conducted through digital trading desks that match advertising to buyers using various demographic and contextual criteria.
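
To make that matching step concrete, here is a minimal Python sketch of a programmatic auction. It is an illustration only: all class and field names are invented, and real exchanges use real-time bidding protocols (such as OpenRTB) with far richer targeting signals.

```python
# Minimal, hypothetical sketch of programmatic ad matching.
# All names are invented; real exchanges use RTB protocols such as OpenRTB.
from dataclasses import dataclass, field

@dataclass
class AdSlot:
    slot_id: str
    context: set[str] = field(default_factory=set)  # publisher-supplied signals

@dataclass
class Bid:
    advertiser: str
    price_cpm: float  # bid price per 1,000 impressions
    required_context: set[str] = field(default_factory=set)
    blocked_context: set[str] = field(default_factory=set)

def match_slot(slot: AdSlot, bids: list[Bid]) -> Bid | None:
    """Return the highest eligible bid for a slot.

    A bid is eligible if the slot carries every context tag the advertiser
    requires and none of the tags the advertiser has blocked. The blocklist
    check is where brand-safety filtering would live.
    """
    eligible = [
        b for b in bids
        if b.required_context <= slot.context
        and not (b.blocked_context & slot.context)
    ]
    return max(eligible, key=lambda b: b.price_cpm, default=None)

# Example: a toy brand blocks slots tagged as adult or extremist content.
slot = AdSlot("yt-video-123", context={"video", "uk", "extremist"})
bids = [
    Bid("toy_brand", 4.50, required_context={"video"},
        blocked_context={"adult", "extremist"}),
    Bid("generic_brand", 2.00, required_context={"video"}),
]
winner = match_slot(slot, bids)
print(winner.advertiser if winner else "no eligible bid")  # -> generic_brand
```

The failure mode described next follows directly from this design: if a slot's contextual tags fail to mark a video as extremist, no advertiser blocklist can fire, and the ad is placed anyway.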

Unfortunately, this can sometimes result in advertising showing up next to exactly the wrong video. For example, a toy manufacturer might find its advertisement placed in an adult video, or a government agency could have its message inserted into a video from a hate preacher. This is a huge concern for advertisers. Prior reports found that messages from advertisers such as UK broadcasters Channel 4 and the BBC, retailer Argos and cosmetics brand L’Oréal were slotted into extremist content on Google and YouTube.

Google says it has previously removed nearly 2 billion inappropriate advertisements from its platforms, dropped more than 100,000 publishers from its AdSense program and blocked ads from more than 300 million YouTube videos. Examples of inappropriate content removed include videos from American white nationalists and extremist Islamist preachers.

Following the theory that the problems created by automation can also be solved by automation, YouTube has invested in machine learning to try to regulate the content uploaded to the site. The scale of the service makes a human-only solution impossible anyway: 400 hours of video are uploaded to YouTube every minute (roughly 576,000 hours a day), and 5 billion videos are viewed daily.

The key to Google’s approach is better detection and faster removal driven by machine learning. It has developed new machine learning technology in-house specifically to identify and remove violent extremism and terrorism-related content “in a scalable way.” These tools have now been rolled out, and the company says it is already seeing “some positive progress.”

It cites improvements in speed and efficiency: more than 75% of the videos removed for violent extremism in the previous month were pulled automatically, before being flagged by a single human. The system is also more accurate thanks to the new machine learning technology, with Google claiming that in many cases it has proven better than humans at flagging objectionable videos. Lastly, given the massive volume of video uploaded to the site every day, simply sifting through it to find the problematic videos is a significant challenge; over the past month, the new machine learning technology has more than doubled both the number of videos removed and the rate at which they have been taken down.
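
To illustrate the kind of triage pipeline these numbers imply, here is a deliberately simplified Python sketch. Google has not published its models, so the thresholds, the scoring stub and every name below are assumptions rather than the company's actual implementation.

```python
# Hypothetical illustration of automated content triage -- Google has not
# disclosed its models, so every detail below is an assumption.
from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.98   # assumed: act without a human above this score
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed: queue for a human reviewer above this

@dataclass
class Video:
    video_id: str
    title: str

def extremism_score(video: Video) -> float:
    """Stand-in for a trained classifier (e.g. over frames, audio, metadata).

    A real system would run learned models; this keyword stub only exists
    to make the triage logic below executable.
    """
    flagged_terms = {"extremist", "violence"}
    words = set(video.title.lower().split())
    return min(1.0, 0.5 * len(flagged_terms & words))

def triage(video: Video) -> str:
    score = extremism_score(video)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # pulled before any human sees it
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # flagged for a trained reviewer
    return "allow"

print(triage(Video("a1", "Cute cat compilation")))            # -> allow
print(triage(Video("a2", "extremist violence recruitment")))  # -> remove
```

In a scheme like this, the 75% figure would correspond to uploads landing in the auto-remove branch before any human flag; raising the auto-remove threshold trades speed for caution by routing more videos to human reviewers.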

Google is also adding new sources of data and insight to increase the effectiveness of its technology, partnering through its “Trusted Flagger” program with NGOs and institutions such as the Anti-Defamation League, the No Hate Speech Movement and the Institute for Strategic Dialogue. And it’s using the YouTube platform itself to push anti-extremist messages: when users conduct potentially extremist-related searches on YouTube, they are redirected to a playlist of curated videos that challenge and debunk messages of extremism and violence.

In addition, Google is targeting videos that are flagged by users as objectionable but don’t cross the line into hate speech or violent extremism. These videos are placed in what Google calls a “limited state”: they are not recommended or monetized, and users cannot like them, comment on them or suggest them to others. This will be rolled out on desktop in the coming weeks and on mobile thereafter.
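
As a rough illustration of how such a "limited state" flag might gate features, here is a hypothetical Python sketch; YouTube's internal data model is not public, so every name below is invented.

```python
# Hypothetical sketch of a "limited state" policy flag gating features.
# The states, field names and function are invented for illustration.
from enum import Enum, auto

class ContentState(Enum):
    NORMAL = auto()
    LIMITED = auto()   # borderline: stays up, but with features disabled
    REMOVED = auto()

def allowed_features(state: ContentState) -> dict[str, bool]:
    limited = state is ContentState.LIMITED
    removed = state is ContentState.REMOVED
    return {
        "playable":    not removed,
        "recommended": not (limited or removed),
        "monetized":   not (limited or removed),
        "comments":    not (limited or removed),
        "likes":       not (limited or removed),
    }

print(allowed_features(ContentState.LIMITED))
# -> {'playable': True, 'recommended': False, 'monetized': False, ...}
```

Under this scheme a limited video stays watchable but is cut off from the discovery and engagement surfaces that give borderline content its reach.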

While Google appears to be making a significant effort to keep YouTube from being misused by hate groups, advertisers and government agencies will probably have to see the results to believe them. Still, this is likely to help alleviate some of the concerns that have been building up.

However, these efforts seem aimed only at hate speech and extremism; Google has done little to alleviate brands’ concerns about context and placement beyond extremist content. Advertisers worry about their brands appearing to sponsor content that could damage their image even when it is not hate speech, like a message from a religious group placed in a video featuring a wild, drunken party.

In the recently concluded “upfronts,” the annual event where advertisers buy inventory for the year ahead from broadcasters, “brand safety” was an important selling point. NBCUniversal ad sales head Linda Yaccarino all but led with it in her address, underscoring the benefits of human ad placement in broadcast advertising.

Google, and others such as Facebook and Twitter, will need to find ways to resolve these advertiser concerns, because the larger brands that control the bulk of advertising expenditure are increasingly worried about where their brands are showing up. If the machine learning technology applied to YouTube proves effective, it should be extended to address objectionable content beyond extremist videos. It should also be able to relate advertiser messages to the videos they are placed in and create better matches. If Google can do that, it will take away one of broadcasters’ most effective selling points and help shift ad spend toward online video even faster.
