In the wake of the Christchurch tragedy, many questions are being asked of some of the world’s biggest tech companies, whose video-sharing platforms allowed the shooter to live-stream his attack in a 17-minute video which was uploaded to multiple social media sites and remained online well after the incident took place.
The attack – apparently carried out deliberately in a way that exploited today’s viral live-stream internet culture – highlighted how woefully ill-equipped authorities and tech companies are to stop the proliferation of inappropriate content, and has prompted some to question whether stronger regulations should be placed on video providers to prevent incidents such as this from happening in the future.
In the aftermath of the shooting, major online platforms including YouTube, Facebook and Twitter scrambled to delete the thousands of versions of the video that continued to pop up on their sites, releasing statements denouncing the videos and appealing to users to report any footage of the shooting that had not yet been taken down. But just as quickly as versions of the video were removed, new ones were uploaded.
Although social media sites such as Facebook have recently made significant investments in improving their ability to identify and remove inappropriate content, these efforts will always be diametrically opposed to the business models of companies that rely heavily on the sheer volume of user-generated content, clicks and views to drive their revenues. On the internet, timeliness and shareability are currency.
The fact is that anyone with a mobile phone and a social media account has a platform to instantly connect with a potentially unlimited audience. Going “live” has ushered in the next generation of media consumption – with everyone from influencers to businesses and even media outlets getting in on the trend. The phenomenon has been seen by many as yet another breaking down of the established order – giving anyone, anywhere, at any time the ability to broadcast to the world in a way that would have been unthinkable even 10 years ago.
In stark contrast to traditional forms of media distribution, however, in the live-streaming world there is no moderator sitting between the streamer and the viewer to weed out inappropriate content, and no one to exercise judgment over whether releasing the video to the masses is in the public interest. Events such as last Friday’s have shown us the dark path this phenomenon can take, as well as our inability to control how it is used. Since Friday, lawmakers around the world have called for a deeper consideration of the medium, with some calling for regulations that would require platforms to delay broadcasts to allow for appropriate moderation of content – as difficult and impractical as that may be.
If the history of the internet has taught us anything, it is that complex problems such as these don’t come with easy solutions – especially when the livelihoods of many of the internet giants are riding on them.