Instagram is flipping the switch on a system that will automatically detect mean, offensive and harassing comments and make sure that people never see them. The new system builds on work that Facebook and Instagram have done with DeepText, a text classification engine designed to help machines interpret words as they're used in context.
As Wired first reported, the Instagram system follows the social network's successful use of the tech to fight spam, which began in October of last year. After training the system with human input on how to identify spam, the team was pleased with the results, though it won't say exactly how much the implementation reduced spam compared with previous methods.
Based on the success of that system, the team wanted to see if it could tackle an even stickier issue: mean-spirited, hateful and harassing comments. Perhaps you're familiar with the internet? If so, you might also be aware that it serves as a delivery vehicle for a lot of hurtful invective, hurled about with reckless abandon and little consideration for its ultimate impact.
Contractors trained DeepText to identify negative comments and categorize them into broad segments like "bullying, racism, or sexual harassment," according to Wired. The raters are said to have analyzed at least two million comments in total before today's launch, and each comment was rated at least twice to ensure correct classification.
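DeepText itself isn't public, but the workflow Wired describes, humans labeling comments into categories and a model learning from those labels, is standard supervised text classification. Here's a minimal sketch in Python using scikit-learn (not Instagram's actual stack; the comments and labels below are invented for illustration) of how such a classifier might be trained:

```python
# Illustrative sketch of supervised comment classification, in the
# spirit of the contractor-labeled data described above. DeepText is a
# proprietary deep-learning engine; this is a simple stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-labeled training data (each comment would have
# been rated at least twice to confirm the label, per the article).
comments = [
    "you are a wonderful person",
    "nobody likes you, just quit",
    "great photo, love the colors",
    "get off this app, loser",
]
labels = ["ok", "bullying", "ok", "bullying"]

# Bag-of-words features plus a linear classifier; DeepText reportedly
# uses deep neural models instead, but the training step is analogous.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# New comments can then be screened before they are ever shown.
print(model.predict(["quit posting, nobody likes you"]))  # likely ['bullying']
```

In a production system the labeled set would be millions of comments rather than four, and a comment flagged by the model would simply be hidden from view rather than printed.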
Anecdotally, Instagram is already a social network a lot of people turn to because it's friendlier than most other social forums online; if this system proves effective, it could become even more of a refuge, which would likely be good for user stickiness over the long term.