YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem


  • :belt_onion:

    Questions. I have so many questions.

    According to YouTube, it used machine learning to remove more than 150,000 videos for violent extremism in just the last 6 months; such an effort "would have taken 180,000 people working 40 hours a week".

    OK. So, why do you need to hire 10,000 people if your "machine learning" is so darn good?

    What is the long term effect (on these employees) of doing nothing but viewing deplorable content all day?

    Most of the examples given don't sound particularly exploitative to me. Stupid or dangerous maybe. And I guess they're being exploited to earn advertising money, but that's not really what I think of when somebody uses the phrase "exploiting children."

    Seems to me that if they're really concerned about children being used to generate revenue, then they ought to ban all videos of children... problem solved.


  • Banned

    @el_heffe it's a PR stunt, plain and simple.



  • They're not hiring 10,000 moderators; they have some already, and they're hiring more. From TFA:

    Sources familiar with YouTube's workforce numbers say this represents a 25% increase

    ...so they have somewhere around 8,000 and plan to add around 2,000, which will bring the total to around 10,000.

    Content Moderators aren't just sitting around twiddling their thumbs, either; they're constantly reviewing stuff manually -- someone has to train the "machine learning" algorithms.


  • ♿ (Parody)

    @el_heffe TRWTF is Buzzfeed. Their site is literally unreadable since they assume you're on mobile and stretch everything out. Fuck them.

    I was actually interested in seeing what sort of stuff Youtube was worried about but the cancer scared me away. Anyone got a non-BF link?


  • ♿ (Parody)

    The Grauniad seems to say it's more about improving their kid filter:

    YouTube faced heightened scrutiny last month in the wake of reports that it was allowing violent content to slip past the YouTube Kids filter, which is supposed to block any content that is not appropriate to young users. Some parents recently discovered that YouTube Kids was allowing children to see videos with familiar characters in violent or lewd scenarios, along with nursery rhymes mixed with disturbing imagery, according to the New York Times.

    Other reports uncovered “verified” channels featuring child exploitation videos, including viral footage of screaming children being mock-tortured and webcams of young girls in revealing clothing.

    Yeah, not sure what to think about the second paragraph, but it certainly sounds like some unsavory stuff. Not that I know what a "verified" channel is.



  • @el_heffe Maybe we're going to enter a golden age where these big IT corporations realize you can't solve every problem with a piece of code, and you need actual human beings to be accountable.

    But probably not. Once the bad press over the beheading videos shown to 3-year-olds passes, these guys'll all get fired.



  • @boomzilla said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    Yeah, not sure what to think about the second paragraph, but it certainly sounds like some unsavory stuff. Not that I know what a "verified" channel is.

    YouTube finally found out about Mackey Mouse and Bonald Duck:

    https://www.youtube.com/watch?v=rq1uiYXWIGY



  • @boomzilla

    Here's What YouTube Is Doing To Stop Its Child Exploitation Problem

    The company plans to have over 10,000 content moderators on staff by the end of 2018, YouTube CEO Susan Wojcicki said.
    Posted on December 4, 2017, at 8:00 p.m.

    YouTube is adding more human moderators and increasing its machine learning in an attempt to curb its child exploitation problem, the company's CEO Susan Wojcicki said in a blog post on Monday evening.

    The company plans to increase its number of content moderators and others addressing content that violates company rules to more than 10,000 employees in 2018 in order to help screen videos and train the platform's machine learning algorithms to spot and remove problematic children's content. Sources familiar with YouTube's workforce numbers say this represents a 25% increase from where the company is today.

    In the last two weeks, YouTube has removed hundreds of thousands of videos featuring children in disturbing and possibly exploitative situations, including being duct-taped to walls, mock-abducted, and even forced into washing machines. The company said it will employ the same approach it used this summer as it worked to eradicate violent extremist content from the platform.

    Though it's unclear whether machine learning can adequately catch and limit disturbing children's content — much of which is creepy in ways that may be difficult for a moderation algorithm to discern — Wojcicki touted the company's machine learning capabilities, when paired with human moderators, in its fight against violent extremism.

    According to YouTube, it used machine learning to remove more than 150,000 videos for violent extremism since June; such an effort "would have taken 180,000 people working 40 hours a week," according to the company. The company also claimed its algorithms were getting increasingly better at identifying violent extremism — in October the company said that 83% of its videos removed for extremist content were originally flagged by machine learning; just one month later, it says that number is now 98%.

    Wojcicki, on behalf of YouTube, also pledged to find a "new approach to advertising on YouTube" for both advertisers and content creators. In the last two weeks, YouTube said it has removed ads from nearly 2 million videos and more than 50,000 channels "masquerading as family-friendly content." The crackdown came after numerous media reports revealed that many of the videos — often with millions of views — ran with pre-roll advertisements for major brands, a few of which suspended their advertising with the platform in November.

    Though Wojcicki offered no concrete plans for advertising going forward, she said that the company would be "carefully considering which channels and videos are eligible for advertising." The blog post also said the company would "apply stricter criteria, conduct more manual curation, while also significantly ramping up our team of ad reviewers."

    It's unclear when the advertising changes will go into effect. For now, controversial videos still appear to be running alongside advertisements. In a review of videos masquerading as family-friendly content, BuzzFeed News found advertisements running on a number of popular "flu shot" videos, a genre that typically features infants and young children screaming and crying.

    On Monday afternoon, two flu shot videos on a family account called "Shot Of The Yeagers" were found running advertisements for Lyft, Adidas — which had previously told the Times of London it had suspended advertising on the platform — Phillips, Pfizer, and others. When BuzzFeed News contacted Adidas and Lyft about their ads running near the videos, both companies said they would look into the matter.

    "A Lyft ad should not have been served on this video," a Lyft spokesperson told BuzzFeed News. "We have worked with platforms to create safeguards to prevent our ads from appearing on such content. We are working with YouTube to determine what happened in this case."

    Adidas offered BuzzFeed News a statement dated from Nov. 23 and added, "we recognize that this situation is clearly unacceptable and have taken immediate action, working closely with Google on all necessary steps to avoid any reoccurrences of this situation." Less than one hour after their initial response, the flu shot videos appeared to be deleted off YouTube entirely.

    UPDATE
    December 5, 2017, at 10:04 a.m.

    This piece has been updated to clarify that the 10,000 employees includes content moderators as well as others that will address content that violates the company's rules.



  • So many industries have been exploiting kids for decades.

    Mold 0.5€ worth of plastic into a toy. Put ads on TV during cartoons. Sell for 80€ to parents to get their kid to stop whining about it. Totally ethical.



  • @boomzilla said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    Not that I know what a "verified" channel is.

    It's like the verified badge on Twitter: a human at YouTube looks at the channel and decides whether or not it is "official". Of course that's not really what it means because they reserve the right to remove it if you violate their community guidelines.


  • Considered Harmful

    I've never understood the whole advertising removal thing in the first place. If you are an advertiser, and a video is getting shitloads of views, you want your ad on that video. The content is completely irrelevant; if YouTube manages the ads, you're released from any culpability for your ad's 'association' with the video - nobody is saying you're associated with terrorism simply because a video discussing terrorists contains an ad for your product. It's just straight-up retarded.



  • @pie_flavor The issue is that people don't want to fund people with ad revenue if they make certain kinds of content. The ad and the content are irrelevant, but the company looks bad if it is paying someone that society doesn't like.



  • @lb_ not to mention that some people don't want to sell ad space to certain types of advertisers.



  • @anotherusername That's already a solved problem; you can just turn them off in your AdSense settings.


  • Considered Harmful

    @lb_ said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    @pie_flavor The issue is that people don't want to fund people with ad revenue if they make certain kinds of content. The ad and the content are irrelevant, but the company looks bad if it is paying someone that society doesn't like.

    But they're not paying them. They're paying YouTube, and YouTube is paying them. Like I said, YouTube being the middleman absolves them completely of any culpability.


  • Banned

    @pie_flavor said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    I've never understood the whole advertising removal thing in the first place. If you are an advertiser, and a video is getting shitloads of views, you want your ad on that video. The content is completely irrelevant; if YouTube manages the ads, you're released from any culpability for your ad's 'association' with the video - nobody is saying you're associated with terrorism simply because a video discussing terrorists contains an ad for your product. It's just straight-up retarded.

    There's one huge problem with your thinking - you assume that people who watch ads act rationally. Which isn't true, since they watch ads.


  • kills Dumbledore

    @el_heffe said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    What is the long term effect (on these employees) of doing nothing but viewing deplorable content all day?

    They'll end up voting for Trump 🚎

    @pie_flavor said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    YouTube being the middleman absolves them completely of any culpability

    Maybe technically, but in the mind of someone who watches a video with an advert on it, that advert is associated with the video.

    I'm not even talking about conscious association necessarily. If I go into a shop and see some brand that was advertised next to a video I have bad associations with, I might not think "oh, I remember that from the ad on the horrible video", but somewhere in my mind that association is going to happen and might leave me with a bad feeling for the brand that I can't pin down, so I'll skip it and go for the competitor.



  • @el_heffe said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    What is the long term effect (on these employees) of doing nothing but viewing deplorable content all day?

    There is very high turnover among them, from what I saw on a TV report about this thing a few months ago. I don’t remember exactly how long they said people last on average, but it wasn’t more than a few months, as I recall. That alone should be enough of an indicator that it messes people up.



  • @lb_ said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    @pie_flavor The issue is that people don't want to fund people with ad revenue if they make certain kinds of content. The ad and the content are irrelevant, but the company looks bad if it is paying someone that society doesn't like.

    Yeah. I once found a dubious(1) site getting banner ads from a variety of companies (via doubleclick before Google bought it, if memory serves). Notes made, web searched. Five different outfits, one with a real email address, three where I had to mail webmaster@ and/or postmaster@ and in both cases it black-holed, and one where both of those bounced. The guy with a real email address actually replied and thanked me for letting him know, and said that he would be getting his ads pulled.

    (1) Haxxor tools, not pronogaffy. But the guy was trying to make it look legit by presenting them as "for testing use only", while he himself hung out on haxxor newsgroups and had a circle of skiddie admirers, and worked as a sysadmin somewhere "normal". Grr.


  • Fake News

    @gurth said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    @el_heffe said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    What is the long term effect (on these employees) of doing nothing but viewing deplorable content all day?

    There is very high turnover among them, from what I saw on a TV report about this thing a few months ago. I don’t remember exactly how long they said people last on average, but it wasn’t more than a few months, as I recall. That alone should be enough of an indicator that it messes people up.

    No wonder, just think of the quality of YouTube comments and videos. Now imagine what you would see without moderation.

    You're welcome.



  • @pie_flavor said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    @lb_ said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    @pie_flavor The issue is that people don't want to fund people with ad revenue if they make certain kinds of content. The ad and the content are irrelevant, but the company looks bad if it is paying someone that society doesn't like.

    But they're not paying them. They're paying YouTube, and YouTube is paying them. Like I said, YouTube being the middleman absolves them completely of any culpability.

    Except it doesn't, because people are humans and can see through middlemen quite easily. Money doesn't just go into a black box and come out the other side to be given to random people for no reason. And even if it did, if some of those random people that get money are questionable, the whole black box becomes questionable and so does anyone that puts money into it.


  • BINNED

    @lb_ I don't have any experience with AdSense but I doubt there's a checkbox for kiddy diddlers. If there was one, it would mean Google can identify them already, which they apparently can't.



  • @blek said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    @lb_ I don't have any experience with AdSense but I doubt there's a checkbox for kiddy diddlers. If there was one, it would mean Google can identify them already, which they apparently can't.

    I bet that if they did have a checkbox like that, a fair number would be stupid enough to click it. (And I don’t mean as a joke, but for real.)

    Like those green visa applications you got on flights to the USA a few decades ago; I don’t know if this still happens, as I haven’t been there this millennium. Anyway, they had seven (IIRC) questions along the lines of, “Do you plan to break the law while in the USA?” The first time I saw one, I immediately wondered how many people they actually expected to catch at the border with this kind of questioning, and decided there are probably some who’d answer “yes” to that truthfully.



  • @blek You misunderstood. I said that people who sell ad space can decide which kinds of advertisements to sell space to.
    [screenshot of the AdSense settings for blocking ad categories]
    You can also block specific ads.



  • @gurth The purpose of "Do you plan to break the law while in the USA" and similar questions isn't to catch the people who check "Yes" (though I'm sure they do on occasion). The purpose is that for people who check "No", if they do subsequently break the law, it can then be claimed that they lied in order to gain admission to the country, so they can be deported easier. Those questions are all about forcing statements to be declared as part of entering the country, since a lie given to enter a country can be penalized more harshly than general lying to a government once in a country.



  • @pcooper I figured that out too, after the initial thought. “Not only have you committed a crime, but you also lied to us” — I don’t see how that makes the crime worse, but hey, I’m not an American lawmaker :)



  • @gurth said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    “Do you plan to break the law while in the USA?”

    I think the primary purpose of this kind of question is to weed out all those iDtenTs who believe that the more questions they answer with "yes", the more likely they are to get admission.



  • @pcooper said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    a lie given to enter a country can be penalized more harshly than general lying to a government once in a country.

    Not necessarily more harshly. A lie given to enter a country can be more easily justified as cause for deportation than some other crime they committed that's not directly connected to their being given permission to enter the country.


  • kills Dumbledore

    @anotherusername Seems like a pretty easy defense.

    "Prove I was intending to commit this crime when I entered the country and answered that question. I decided to do it well after I arrived"



  • @jaloopa in a lot of cases, yes. They'd need to have evidence that proved that the crime was premeditated.

    I think it's partly also to stop the bleeding-heart liberal types from trying to claim that immigrants are punished more severely for the same exact crimes if they risk being deported in addition to whatever other punishment the law prescribes. Deportation isn't part of the punishment for the crime; it's the punishment for the separate crime of falsifying information on their immigration form.



  • @anotherusername said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    @jaloopa in a lot of cases, yes. They'd need to have evidence that proved that the crime was premeditated.

    It may also be that the standard of proof (of premeditation) is much lower for deportation, since it is in effect an administrative decision, not a legal one.

    The administration gave you a permit to enter, not a judge, so they may be able to revoke it as soon as they have a reasonable suspicion that you lied (a visa is closer to a private contract with the administration than to a civil right enshrined in law or constitution!). In which case you are probably entitled to have the decision reviewed by a judge, and they would then have to prove the premeditation more rigorously, but in the meantime you've already been deported. So not only are you outside the country until the judge rules on the issue, but you are also much less likely to actually ask a judge to review it (as you need legal representation in the US, money to pay for it, etc.).

    Deportation isn't part of the punishment for the crime; it's the punishment for the separate crime of falsifying information on their immigration form.

    That's also possible. In most civilized places (and I assume in the US as well... 🚒) you can't be punished twice for the same offense, so unless the law explicitly listed deportation as a possible punishment (and the judge explicitly mentioned it when passing sentence), getting deported for that same exact offense might be seen as double punishment.



  • @remi Deportation is not considered a punishment but a "collateral consequence". And although there have been several court cases about how not mentioning deportation as a possible collateral consequence is a violation of a defendant's due process rights since it's a significant deciding factor in the plea process, the rulings so far have been largely that the plea can't be reconsidered and the person is still deported.



  • @twelvebaud said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    @remi Deportation is not considered a punishment but a "collateral consequence".

    That depends on where. In the US, maybe. In other countries, it has been argued that it was part of the punishment.

    I don't have a definitive legal opinion on the subtleties of the topic; I'm just saying that having the immigration questions allows them to side-step that discussion entirely (by making lying on the form a different offense from the first one).


  • Discourse touched me in a no-no place

    @gurth said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    Like those green visa applications you got on flights to the USA a few decades ago; I don’t know if this still happens, as I haven’t been there this millennium.

    Up until about 2 years ago, yes, it still happened. Now you do all that stuff electronically (technically I only know this for entry via IAH; other places might not have updated yet) and it is quite a bit less annoying. Still the stupid questions, but the screen isn't green and bouncing around at 35k feet…



  • @el_heffe said in YouTube plans to hire 10,000 Content Moderators to stop its "child exploitation" problem:

    Questions. I have so many questions.

    According to YouTube, it used machine learning to remove more than 150,000 videos for violent extremism in just the last 6 months; such an effort "would have taken 180,000 people working 40 hours a week".

    OK. So, why do you need to hire 10,000 people if your "machine learning" is so darn good?

    They're not only hired to screen the videos, but also to help train said machine learning algorithm. (I imagine it's something like marking sampled videos as good or bad, then using them to tune the accuracy.)
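
    Roughly, I'd picture the human-in-the-loop part working something like the sketch below (a hypothetical, made-up example -- every name and number is invented, this is obviously not YouTube's actual pipeline): moderator verdicts on a sample of machine-flagged videos are used to measure the model's precision and to pick the confidence threshold above which removal happens automatically, with everything below it going back to the human review queue.

    ```python
    # Hypothetical sketch of human-in-the-loop moderation: moderators label a sample
    # of model-flagged videos, and those labels are used to measure precision and to
    # pick the confidence threshold above which videos are removed automatically.

    # Each tuple: (model confidence score, moderator verdict). The verdict is True
    # if the moderator agreed the video violates policy. All numbers are made up.
    reviewed_sample = [
        (0.97, True), (0.91, True), (0.88, False), (0.76, True),
        (0.74, False), (0.63, True), (0.55, False), (0.42, False),
    ]

    def precision_at(threshold, sample):
        """Fraction of videos scored at or above `threshold` that moderators confirmed."""
        confirmed = [verdict for score, verdict in sample if score >= threshold]
        return sum(confirmed) / len(confirmed) if confirmed else 0.0

    # Pick the lowest threshold that still keeps precision above a target, so the
    # model auto-removes as much as it can without too many false positives.
    TARGET_PRECISION = 0.9
    candidates = sorted({score for score, _ in reviewed_sample})
    auto_remove_threshold = next(
        (t for t in candidates if precision_at(t, reviewed_sample) >= TARGET_PRECISION),
        1.01,  # i.e. never auto-remove if no threshold is precise enough
    )

    print(f"auto-remove at score >= {auto_remove_threshold}")
    # Anything scoring below the threshold goes to the human review queue, and the
    # moderators' verdicts become new training labels for the next model update.
    ```

    Presumably that loop is also where figures like the 83% and 98% in the article come from: of the videos that ended up removed, how many the model had flagged before any human saw them.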

