Real-World Events Drive Increases In Online Hate Speech, Study Finds


Real-world events such as elections and protests can drive a rise in online hate speech on both mainstream and fringe platforms, according to a study published Wednesday in the journal PLOS ONE, with hateful posts remaining rampant even as many social media platforms try to crack down.


Key Facts

Using machine learning analysis—a data analysis method that automates model building—researchers examined seven types of online hate speech in 59 million posts by users of 1,150 online hate communities: forums where hate speech may circulate, hosted on platforms including Facebook, Instagram, 4chan and Telegram.

The total number of posts containing hate speech in an average seven-day period rose over the course of the study, which ran from June 2019 to December 2020, increasing 67% from 60,000 to roughly 100,000 posts per day.

At times, hate speech by social media users escalated to target groups that were not even involved in the world events of the moment.

Among the patterns the researchers noted were a rise in religious hate speech and anti-Semitism after the U.S. killing of Iranian general Qasem Soleimani in early 2020, and a rise in religious and gender-based hate speech after the November 2020 U.S. election, in which Kamala Harris was elected the first female vice president.

Despite industry-wide efforts to eliminate it, online hate speech has persisted, the researchers say.

The researchers identified media attention as a major factor driving hateful posts: For example, the police killing of Breonna Taylor initially drew little media coverage, and the researchers found correspondingly little hate speech online; but when George Floyd was killed several months later and media attention surged, hate speech rose with it.

Big Number

250%. That is how much racist hate speech grew after the killing of George Floyd, the largest increase in hate speech the researchers observed during the study period.

Key Background

Hate speech has plagued social networks for years: Platforms such as Facebook and Twitter have policies banning hate speech and have pledged to remove offensive content, but this has not stopped its spread. Earlier this month, nearly two dozen independent human rights experts appointed by the United Nations called on social media platforms to do more to reduce the amount of hate speech online. And human rights experts aren’t alone in wanting social media companies to act: A December USA Today-Suffolk University poll found 52% of respondents said social media platforms should limit hateful and inaccurate content, while 38% said the sites should be an open forum.


Days after billionaire Elon Musk closed his deal to buy Twitter last year, promising to loosen the site’s moderation policies, the site saw an “increase in hateful conduct,” according to Yoel Roth, Twitter’s then-head of trust and safety. Roth tweeted at the time that the safety team had removed more than 1,500 accounts for hateful conduct over the preceding three days. Musk has faced strong criticism from groups who say that under his leadership, with content rules loosened, the amount of hate speech on Twitter has risen dramatically, though Musk has said that impressions of hateful tweets have declined.

Further Reading

Twitter’s safety chief admits ‘increase in hateful conduct’ as the platform reportedly restricts access to moderation tools (Forbes)

Some reservations about consistent demands on social media content moderation decisions (Forbes)

What should policymakers do to encourage better content moderation? (Forbes)

