Will social media and cyberbullying always go together?

Privacy, Crime, News

Cyberbullying appears to be nearing epidemic levels. The i-SAFE Foundation estimates that more than one in three young people have experienced cyberthreats, and more than half of adolescents and teens have been bullied online; the same number admit to engaging in cyberbullying themselves.

For teens on social media platforms like Twitter, Snapchat, and Facebook (yes, teens are still using Facebook), it doesn’t take long to notice some form of cyberbullying, whether directed at you or not. The CRC notes that cyberbullying affects all races, and victims are more likely to have low self-esteem and to consider suicide.

Parents have to stay vigilant in this climate, even if that means intruding on their teen’s privacy. But as several recent incidents show, cyberbullying isn’t just a teenage phenomenon. Plenty of adults are finding themselves going regrettably viral, suddenly famous enough for armies of trolls to find them, taunt them, and even violently threaten them.

Given the widespread incidence of cyberbullying, one would expect social media platforms to take measures to address it. And they are—to some extent. But are they doing enough to protect users from abusive taunts and outright threats?

Moral responsibility?

Professor Scott J. Shackelford, who teaches cybersecurity law and policy at the Indiana University Kelley School of Business, notes an emerging trend in corporate social responsibility. “I would argue that companies should treat the privacy of their customers as a moral imperative, potentially even a human right that deserves protection.”

In fact, as far as Shackelford is concerned, cyberbullying should be considered an aspect of cybersecurity, and thus an aspect of corporate social responsibility.

Both Facebook and Twitter have policies regarding cyberbullying. Facebook has created a Bullying Prevention Hub that includes tips for parents, teens, and educators. And Twitter’s online abuse policy urges users to take threats seriously and to go beyond simply unfollowing or blocking an abusive account.

But enforcing these policies has proven challenging for the social media networks.

Viral threats

Consider the case of Ijeoma Oluo, who chronicled the cyberbullying she and her children were subjected to while on a road trip. As recounted in her story published on Medium, “Facebook’s Complicity in the Silencing of Black Women,” Oluo’s joke tweet about being the only black person in a Cracker Barrel restaurant led to a torrential backlash of hatred that extended beyond the Twittersphere.

Oluo became the target of death and rape threats and racial slurs, and even received threats against her children. The intimidation came via email as well as direct messaging. Oluo reported the abuse to Twitter, which she said was quick to respond, but when she shared her abusive treatment on Facebook, her experience was much different.

As the cyberbullying escalated on that social media channel, Oluo, who was traveling without her computer, was unable to report it from her mobile phone. Instead, she posted screenshots of the abusive Facebook rants, and, in a catch-22, Facebook suspended her account for three days for posting those screenshots.

Another black woman, Francie Latour, had a similar experience with Facebook’s policies. Latour was at a suburban Boston grocery store with her children when a young white man unleashed a profanity-laced racist rant. She used her Facebook account to vent, explicitly recounting the rant, but within 20 minutes Facebook deleted her post, saying the content violated company standards.

For German-Israeli activist/comedian Shahak Shapira, it was Twitter that proved unresponsive after he was the subject of homophobic, racist, anti-Semitic, and sexist tweets. When Twitter failed to respond to his satisfaction, Shapira created large stencils of some of the tweets and used chalk-based spray paint to place them at the doorstep of Twitter’s office in Hamburg, Germany. He then uploaded a video of his protest to YouTube, titled “#HEYTWITTER.”

Who is liable?

While technology and social media networks seem to enable cyberbullying, the networks themselves are largely immune from liability. Shackelford cites Section 230 of the Communications Decency Act, which grants immunity from liability to an “interactive computer service” that publishes information provided by third parties.

But the CDA does not protect companies from liability for cyberbullying that occurs within their own networks, such as intra-corporate email. Such threats can be treated like other types of written or verbal threats, Shackelford noted. “This is very fact specific and would depend a great deal on what was said, to whom, and why.”

Companies, he said, should do what they can to protect their customers and third parties from cyberbullying that may be taking place through their networks and platforms.

“But the key is enforcement,” Shackelford notes. “As technology advances, and with more resources, the letter of the law will hopefully catch up with digital realities.” Shackelford says companies should have a transparent policy allowing anyone experiencing bullying to anonymously inform a decision-maker with the power and authority to take immediate action.

If you’ve been victimized by cyberbullying to the extent that you feel legal action is necessary, it’s best to contact an attorney for advice. Additional guidance for parents and teens (and anyone else) can be found at bullyingstatistics.org.