YouTube will begin testing, with small groups of users, hiding the "dislike" count on videos in the coming weeks. The decision aims to protect creators' well-being and to curb coordinated campaigns against certain videos on the platform.
The company announced on Twitter this Tuesday that for some channels the video interface will change to show only the number of "likes" and not "dislikes". In a post on its support page, it said it is testing a few different designs, which small groups of users will see on videos over the next several weeks, though the "dislike" button itself will not disappear.
YouTube is taking this step in response to criticism from content creators about their well-being and about targeted "hate" campaigns, which can be fueled by publicly visible dislike counters.
The platform has spent years trying to solve the problem of "dislike mobs": organized groups of users who brigade videos and mass-dislike them without even watching them. In 2019 it proposed a series of measures against this, such as adding a confirmation prompt when the dislike button is pressed.
Hiding the count does not mean the button disappears, so users who watch a video can still register their negative opinion of it. Additionally, creators will still be able to track the number of dislikes on their content in YouTube Studio.
The company said it made the decision "to try to strike a balance between improving the creator experience" while ensuring that "viewers' opinions are taken into account and shared with the creator".
What does the platform take into consideration when removing videos?
Through its transparency reports, the company discloses how many videos have been removed from the platform, with the goal of curbing harmful content while preserving the open nature of the platform.
YouTube bases its responsibility policy on four pillars: a) removing content that violates community guidelines; b) reducing the spread of borderline content, which does not violate policy but is not of good quality either; c) raising up quality content and trusted voices; and d) rewarding content that meets or exceeds YouTube's standards with monetization.
It should be noted that content moderation, which involves, among other things, determining which videos may or may not remain on the platform, is a joint effort between humans and machines.
Machines find content quickly and at scale, which is essential for working effectively, that is, for removing or restricting content without delay, before users are exposed to it.
The point is that machines are not always as accurate as humans, because they cannot grasp all the subtleties or ironies of language. They therefore tend to act as a first filter: they flag potentially harmful content and pass it to a human reviewer, who makes the final decision.
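The two-stage flow described above can be sketched in a few lines of Python. Everything here is illustrative: the keyword scorer, the threshold, and the function names are assumptions standing in for YouTube's actual (undisclosed) systems, not a description of them.

```python
def machine_score(video_text: str) -> float:
    """Toy stand-in for an automated classifier: counts flagged keywords.

    A real system would use trained models; this is only an assumption
    to illustrate the machine stage of the pipeline.
    """
    flagged_terms = {"scam", "violence"}
    words = set(video_text.lower().split())
    return len(words & flagged_terms) / len(flagged_terms)


def moderate(video_text: str, human_decision=None, threshold: float = 0.5) -> str:
    """Machines flag at scale; a human makes the final call on flagged items."""
    if machine_score(video_text) < threshold:
        return "keep"  # not flagged by the machine stage
    # Flagged: escalate to human review for the final decision
    return human_decision(video_text) if human_decision else "pending_review"


print(moderate("a cooking tutorial"))                      # keep
print(moderate("violence scam clip"))                      # pending_review
print(moderate("violence scam clip", lambda t: "remove"))  # remove
```

The design mirrors the article's point: the machine stage is fast but coarse, so it never removes content on its own here; it only decides what a human needs to look at.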
(With information from Portaltic)