YouTube’s new Transparency Report reveals centrality of automated notices and automated takedowns

Over the past few years YouTube has relied on a combination of human intervention and technology to “flag” content considered inappropriate in light of YouTube’s Community Guidelines. In particular, content can be flagged by YouTube’s automated flagging systems, by members of the Trusted Flagger programme (which includes NGOs, government agencies and individuals), or by ordinary users within the YouTube community.

Google/YouTube has recently released a new Transparency Report, which adds to its existing reports on copyright, the 'right to be forgotten', and government requests.

It concerns flagging of content that is sexual, spam or misleading, hateful, abusive, or violent or repulsive (it excludes copyright-related requests).

The report specifies that about 80% of the videos that violated the site’s guidelines in 2017 had first been detected by artificial intelligence (AI) systems. Furthermore, of the roughly 8,000,000 videos removed between October 2017 and December 2017, approximately 6,600,000 were first notified through automated flagging systems.

Compared with human “flaggers”, automated systems enable YouTube to act more quickly and accurately in enforcing its policies. Google states that: “These systems focus on the most egregious forms of abuse, such as child exploitation and violent extremism. Once potentially problematic content is flagged by our automated systems, human review of that content verifies that the content does indeed violate our policies and allows the content to be used to train our machines for better coverage in the future. For example, with respect to the automated systems that detect extremist content, our teams have manually reviewed over two million videos to provide large volumes of training examples, which improve the machine learning flagging technology.”
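To make the workflow Google describes more concrete, the following is a minimal, purely illustrative sketch, not YouTube’s actual code: all names (Video, automated_flag, moderate, human_review) are assumptions. It shows an automated first-pass classifier flagging uploads, a human reviewer confirming or rejecting each flag, and every reviewed example being retained as labelled training data for future retraining.

```python
# Illustrative sketch only (assumed names, not YouTube's implementation):
# automated flagging -> human verification -> takedown + training data.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Video:
    video_id: str
    score: float  # stand-in for an automated "policy-violation likelihood" score


def automated_flag(video: Video, threshold: float = 0.8) -> bool:
    """Hypothetical first-pass filter: flag anything scoring above the threshold."""
    return video.score >= threshold


def moderate(
    uploads: List[Video],
    human_review: Callable[[Video], bool],
) -> Tuple[List[Video], List[Tuple[Video, bool]]]:
    """Return (videos to take down, labelled examples for retraining the classifier)."""
    takedowns: List[Video] = []
    training_examples: List[Tuple[Video, bool]] = []

    for video in uploads:
        if not automated_flag(video):
            continue  # only automatically flagged content reaches human review here
        violates = human_review(video)  # human verification of the automated flag
        if violates:
            takedowns.append(video)
        # Reviewed either way, the example feeds back into the training set,
        # mirroring the manually reviewed videos the report mentions.
        training_examples.append((video, violates))

    return takedowns, training_examples


if __name__ == "__main__":
    # Toy run with a "reviewer" that only confirms very high scores.
    uploads = [Video("a", 0.95), Video("b", 0.82), Video("c", 0.10)]
    removed, labelled = moderate(uploads, human_review=lambda v: v.score > 0.9)
    print([v.video_id for v in removed])  # ['a']
    print(len(labelled))                  # 2: both flagged videos become training data
```

The key design point the report emphasises is the feedback loop: human decisions are not only used to confirm takedowns but are also recycled as training examples to broaden the automated system’s coverage.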

At a time when discussion around notice and takedown, filtering and related mechanisms has been heating up, not least in view of the EU value gap proposal (see the latest development here; IPKat posts here) and the question of how to handle the vast amounts of content made available and shared online every day, the current Transparency Report is particularly interesting for the absolute centrality of bots. This is true both in relation to takedown requests and in the handling of such requests. It also raises the question of how much of the material taken down also stays down, but that may be a new chapter in the never-ending story of online rights enforcement …
Reviewed by Nedim Malovic on Saturday, April 28, 2018

1 comment:

  1. A question for clarity:

    If an item is "taken down" before ANY views (and this can only be done by the provider, given that no one has viewed the item to initiate a takedown process), is this really the same as a "take down"?

    I think it would be clearer to distinguish these and use different names for different actions. Items being scrutinized and NOT put up at all (that is, "taken down before any views") are merely being processed within a vetting system as part of something actually being "put up." It is ONLY those items with views that evidence that something HAS BEEN posted which passed the vetting process.

    I think it is legally significant to distinguish an original vetting process from any TRUE take-down process - for a number of reasons, including, but not limited to, copyright compliance.


