Trying to Swim in a Sea of Social Media Invective

Over the last few months, I’ve watched friends and colleagues endure endless harassment on Twitter. Strangers have hurled offensive, racist names and gendered insults, relentlessly and with little fear of consequence. I’ve come across blog posts that capture similarly awful experiences.

In one, Imani Gandy, a lawyer and legal analyst, describes the harassment she receives on Twitter this way: “The hate-filled invective spewed by the dregs of society awaits you in your notifications. It’s personal and there’s no avoiding it.” In her five years on Twitter, she says, she has been called the N-word so many times that “it barely registers as an insult anymore.”

I’m outraged by these verbal assaults, but it’s not always easy to stop them. The problem is deep. Such hostile interactions have prompted discussions — and a Supreme Court case — about what constitutes free speech on the Internet, and whether the web breeds a culture of misogyny and virtual violence.

An October study from the Pew Research Internet Project found that 40 percent of adult Internet users had experienced some form of online harassment, including physical threats, and that this harassment had occurred on many platforms. Most social media services, including Facebook, Instagram, Tumblr and Yik Yak, a service on which people share posts anonymously, have struggled with these kinds of problems.

Much of the vile harassment I’ve noticed in recent weeks has appeared on Twitter, particularly in the wake of flash-point news events. These include the decisions by grand juries not to issue indictments in the deaths of Eric Garner on Staten Island and Michael Brown in Ferguson, Mo. There was also ugly fallout from GamerGate — an organized and relentless campaign to discredit and intimidate critics, mostly women, of the gaming industry and culture, which is largely male-dominated.

Twitter has not been blind to complaints and concerns about harassment. It recently updated its service to streamline features that let people block offensive users and report abusive tweets. In addition, to respond better to such abuses, the company recently partnered with Women, Action and the Media, a nonprofit organization that advocates balanced gender norms in the media.

It might be easy to say that the site processes hundreds of millions of tweets each day and that the company is doing its best. But over the course of its history, Twitter has been fairly agile about tweaking its services and rolling out new features — when it has chosen to do so.

For example, late last year, it began sending its users an alert when two people in their personal networks started commenting on a television show. The company also set up an experimental service called Magic Recs that highlights new and interesting accounts.

Given Twitter’s inventiveness, and how quickly it rolled out both of those features, I wonder what else it could be doing to curb verbal abuse.

I contacted the company. It declined to comment about its current product policies. But in a recent blog post about its anti-harassment efforts, Twitter said: “We are nowhere near being done making changes in this area. In the coming months, you can expect to see additional user controls, further improvements to reporting and new enforcement procedures for abusive accounts. We’ll continue to work hard on these changes in order to improve the experience of people who encounter abuse on Twitter.”

Free speech is crucial, of course. And as much as I’d like them to intervene, social media companies are often reluctant to be arbiters of appropriate speech and to make judgment calls on what constitutes harassment — though they do make hard decisions on other sensitive issues. This month, Twitter blocked tweets in Pakistan that were considered “blasphemous,” at the request of an official in that country.

Some social media services say they are uncomfortable with imposing restraints on their users. One start-up, DormChat, a college-oriented messaging service, prefers to rely on “self-policing” rather than be “in the business of moderating comments,” said Adam Michalski, its founder.

The service does allow users to flag objectionable content as inappropriate, which alerts the app’s moderators to review the post and decide whether it should be deleted, though the company said reviewing all comments would be difficult, especially as the service grew larger. Mr. Michalski said he hoped users who came across “noise” would “supersede that content with better content.”

“We don’t want to play God and make decisions,” Mr. Michalski added. “We want the community to make decisions.”

Yet it’s hard for social media services to avoid playing an active role in these issues for long. Yik Yak, the anonymous messaging service aimed at the college set, has found itself at the center of controversies over the nasty tone of some messages that flow through the site. The company has responded by trying to limit use among people under 18, building virtual “geo-fences” around high schools to block underage students from using the service, and it has hired outside firms to weed out unsettling posts.
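A geo-fence of this kind can be as simple as refusing to accept a post when a phone reports coordinates inside a set radius around a known school. The sketch below is a minimal illustration of that idea, not Yik Yak’s actual implementation; the school coordinates and the radius are made-up placeholders.

```python
# Minimal geo-fence sketch (illustrative only; not Yik Yak's actual code).
# A post is refused if the reporting device is within a fixed radius of a
# known high school. The coordinates and radius below are hypothetical.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000

# Hypothetical fences: (latitude, longitude, radius in meters)
HIGH_SCHOOL_FENCES = [
    (40.7480, -73.9860, 300),
    (34.0523, -118.2440, 300),
]

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(h))

def inside_geofence(lat, lon):
    """Return True if the device is inside any fenced school zone."""
    return any(distance_m(lat, lon, f_lat, f_lon) <= f_radius
               for f_lat, f_lon, f_radius in HIGH_SCHOOL_FENCES)

def can_post(lat, lon):
    """Allow posting only from devices outside every fence."""
    return not inside_geofence(lat, lon)
```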

Another service, Secret, which allows people to share gossipy tidbits anonymously, came under fire this year over mean-spirited posts on its platform. It responded by delivering warnings whenever it detected someone trying to post a defamatory or rude message about another person.

Still, companies sometimes begin business without addressing such concerns. A new social network called Ello, for example, started with little in the way of privacy or user-blocking controls. It has since added such features.

Other start-ups are building security and anti-harassment policies into the infrastructure of their services.

VProud, a site started by Karen Cahn, a former YouTube executive, promotes itself as a kind of digital safe haven. It automatically filters out posts containing certain keywords, like those used in name-calling and racial epithets. It also gives its user community ways to moderate discussions, allowing people to flag inflammatory remarks. This way, the service can tackle blatantly offensive comments as well as harassing and sexist remarks that may be cloaked in less threatening language.
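Keyword filtering of this sort is conceptually simple: check each post against a list of banned terms before it is published, and let readers flag anything that slips through for human review. The sketch below illustrates that two-layer approach in general terms, assuming a placeholder block list and a hypothetical flag threshold; it is not VProud’s actual code.

```python
# Generic two-layer moderation sketch (illustrative; not VProud's actual code).
# Layer 1: an automatic keyword filter applied before a post is published.
# Layer 2: community flags that queue a post for human review once a
# hypothetical threshold is reached.
import re

BLOCKED_TERMS = {"slur1", "slur2"}   # placeholder terms, not a real list
FLAG_REVIEW_THRESHOLD = 3            # hypothetical number of flags

def contains_blocked_term(text):
    """True if any whole word in the post matches the block list."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(word in BLOCKED_TERMS for word in words)

class Post:
    def __init__(self, author, text):
        self.author = author
        self.text = text
        self.flags = 0
        self.needs_review = False

def submit_post(author, text):
    """Reject posts caught by the keyword filter; publish the rest."""
    if contains_blocked_term(text):
        return None   # dropped, or bounced back to the author
    return Post(author, text)

def flag_post(post):
    """Record a reader's flag; queue the post for moderators past the threshold."""
    post.flags += 1
    if post.flags >= FLAG_REVIEW_THRESHOLD:
        post.needs_review = True
```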

Jacob Hoffman-Andrews, an independent developer, has created a small piece of free software called Block Together that is meant to block attacks from people who’ve created multiple accounts for the sole purpose of harassing others online. Block Together has attracted close to 5,000 users.
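One common tactic against throwaway harassment accounts is to treat anything that is both brand new and barely followed as suspect, on the theory that sock-puppet accounts created just to harass tend to fit that profile. The sketch below shows that heuristic in isolation; the age and follower cutoffs are assumptions for illustration, not Block Together’s actual rules, and it leaves out the tool’s ability to share block lists between users.

```python
# Heuristic sketch for auto-blocking likely throwaway accounts.
# An illustration of the general idea only; the cutoffs are assumptions,
# not Block Together's actual rules.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

MAX_ACCOUNT_AGE = timedelta(days=7)   # hypothetical: "very new" account
MAX_FOLLOWERS = 15                    # hypothetical: "barely followed"

@dataclass
class Account:
    handle: str
    created_at: datetime       # expected to be timezone-aware (UTC)
    follower_count: int

def looks_like_throwaway(account, now=None):
    """True if the account is both brand new and has almost no followers."""
    now = now or datetime.now(timezone.utc)
    is_new = now - account.created_at <= MAX_ACCOUNT_AGE
    is_unfollowed = account.follower_count <= MAX_FOLLOWERS
    return is_new and is_unfollowed

def accounts_to_block(mentioning_accounts):
    """Filter the accounts that mentioned a user down to likely throwaways."""
    return [a for a in mentioning_accounts if looks_like_throwaway(a)]
```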

Now the law is weighing in: a major case involving social media, harassment and free speech has made its way to the Supreme Court. The court will consider whether violent statements posted on social media are proof of intent to harm.

The case involves Anthony D. Elonis of Allentown, Pa., who began writing a series of dark posts on Facebook after his wife left him in 2010. In a district court, he was convicted of making threatening communications, and was sentenced to 44 months in prison.

Facebook itself has developed a rigorous set of community standards for what material is acceptable on its site. It counts on users to report offensive and bad behavior, then makes a judgment call on whether it will remove the flagged comments.

In its ruling, the court could determine whether all online speech is covered by the First Amendment, even when such speech suggests an intent to cause harm.

In the meantime, the problem seems to grow only more urgent. A torrent of vitriol is being poured into social media. We need to keep the Internet free and open, but we also need it to be a civilized, hospitable place where everyone can feel comfortable and safe. Creative solutions are vital.