Sunday, October 25, 2020

Social media and the spread of gruesome images



Gruesome images have flooded several platforms in the wake of violence from the ongoing war in Cameroon. While some share them for sensitization and awareness, others do so to show their might. What hurts more is that someone would package these photographs together and that others would share them. Sure, some images are taken by professional journalists on the job, but most disturbing are the amateur snapshots taken by non-professionals. Trying to horrify us? Not cool. Demoralize us? Done. Perpetuate cruelty? Not okay! Whatever the intention for sharing, we must all know the effects these images have on viewers' minds.

Social media users must understand the implications and effects of the images they share online, asking questions like: what about a particular image makes it ripe for sharing? Much of this gruesome content is uploaded by people who are not professional journalists; it is unmediated and free to stream across the internet. A major problem arises when these images jump outside their original context. Not only does that practice open the door to resharing false information, it also transports graphic material away from the specific purpose it had: to energize, enrage, or educate a particular community.

The case of Kumba on the 24th of October is a glaring example. We have seen horrible images shared online of schoolchildren killed in their classroom, lying in their own pools of blood. These images were captured and shared without considering the effects on the families and on other internet users, now and in decades to come. Sharing such images inflicts enormous psychological distress and trauma.

It's time for social media platforms to review their community standards and check whether the content spread on this particular issue fits within their eligibility criteria. But we should avoid scapegoating the big platforms. All of them (Twitter, Facebook, YouTube, Google, Snapchat) have signed up to the European Commission's #NoPlace4Hate programme, committing to remove illegal hateful content within 24 hours, a time period which is likely to come down to just one hour.



Aside from anything else, they are aware of the reputational risks of being associated with terrorism and other harmful content (such as pornography, suicide, paedophilia) and are devoting increasingly considerable resources to removing it. Within 24 hours of the Christchurch attack, Facebook had banned 1.5m versions of the attack video, 1.2m of which it stopped from being uploaded at all.



Monitoring hateful content is always difficult, and even the most advanced systems accidentally miss some. But during terrorist attacks the big platforms face particularly significant challenges. As research has shown, terrorist attacks precipitate huge spikes in online hate, overrunning platforms' reporting systems. Many of the people who upload and share this content also know how to deceive the platforms and get round their existing checks.



Indeed, it is no bad thing to show solidarity with the affected families. But looking beyond the present, to ensure these painful memories cannot be revisited in years to come with a single click on the internet, will go a long way toward limiting the horrible effects on their minds and mental health as a whole.

Pedmia Shatu Tita
