Free Speech on Private Networks


I joined Twitter in 2007, with a Twitter ID below the one-millionth user mark (678,213 to be precise). This doesn’t make me any better or worse at using the platform. In fact, I haven’t changed how I use it at all in the last decade (a reason why I don’t have much of a following). I still post whatever drivel comes to mind, like this picture my niece drew. Poorly thought-out jokes about politics, whatever is on my TV at the moment, or (because I love cliches) what I’m currently eating fill my feed.

Twitter, however, has changed around me. They introduced native linking to other profiles via the “@” sign. They introduced native retweeting so that we no longer had to wrap tweets with “RT @_____ ‘’.” Hashtag linking. Discussion threads. Native photo uploading. Native search. GIF searching and posting. Character-limit increases throughout. All of these things layered on to the simple idea of microblogging.

Subcultures blossomed. In 2007 it was all geeks, and Leo Laporte was the most followed account. The geeks remain, and others followed. The idea that you only see posts from people you follow means that the uses for Twitter are almost infinite. It also means that you control your experience and what Twitter is. If you follow news outlets and news-lovers, it’s a place for breaking news. You can congregate around a cause and it becomes a tool for social movements.

Here’s the thing. If you follow only alt-right news sources, with a few nazis and senators peppered in there, you have also successfully used Twitter to form a community around something. Something terrible. Something like #gamergate. Something that is (rightfully so) looked down upon and scorned.

I depart from the popular train of thought here, as I posit this: Twitter should not censor tweets of any kind. The whole thing about Twitter is that it is what you make it. It is a common carrier. Or, at least, it should be.

Yes, there are limits.
Direct threats, doxing, and the like should be policed. Illegal things should be reported to police. Otherwise, I feel that making tools available to block your viewing of offending material should be the approach: blacklisted keywords, muting, and blocking. The con here is that this asks something of the user (and possibly the victim, depending on the situation). As such, Twitter should have a default similar to Google’s “safe search.”

This is actually already there--sort of. Did you know Twitter is FULL of porn? You may not. Twitter attempts to block media on those accounts by default (as it should). It’s a thing that has plagued many platforms. Facebook and Instagram go so far as to accidentally censor pictures of breastfeeding. Snapchat and Tumblr both have what seems like a war on porn on their services.

It’s a rather good analogy, actually. You have several options when dealing with objectionable content like porn. Banning accounts is an extreme one. Flagging posts and suspending the account for X number of hours is a bit less so. Flagging accounts so that they’re only seen when a user flips an NSFW switch in their settings is another.

Let’s examine how a few other sites have handled porn and see if any of their approaches would work for hate speech on Twitter. Tumblr decided that objectionable content (porn) couldn’t be hosted on their servers. It was a weird move, and now users who want that content just embed offsite videos instead. A loophole. Because there’s always a loophole. They took this action because of bad publicity: copious numbers of articles about how “Tumblr is for porn.” That’s how they got into their war.

Snapchat, too. “Teens are sending naked pictures to each other on Snapchat. Tonight at 10.” So, they too instituted a rule that public posts have to be kosher. Then, they added a feature for semi-private posts. Loophole.
Again.

Facebook, and its properties like Instagram, are strict to the point of accidental over-censorship. But, again, only on public and semi-public posts. Until recently, not many actions were taken on content within private groups--again because of bad publicity about groups formed for bullying and revenge porn. What loopholes do these services have? Instead of a group, form a group chat on one of their messaging platforms.

So, what about Twitter? I’m actually not sure. You can flick a switch to block explicit content. It applies mostly to accounts that post nudes or other R- to X-rated sexual content. To be clear, I’m not saying Twitter isn’t broken when it comes to offensive and explicit content--it certainly is. I just don’t think the answer is to ban people from the service. The answer is to hide disagreeable content from the offended. And to do it better. Like the over-zealous filters that make Google Search’s autocomplete go blank. Let the terrible people be terrible by themselves unless it crosses the line into the illegal. Be more of a common carrier--but a common carrier with a conscience.

Saying that something is a slippery slope is a logical fallacy. I have, however, seen people post about how Twitter should police every piece of content on their site and remove all of the objectionable stuff along with the related users. That represents the bottom of the slope to me. There will always be a loophole, too. “Slide into my DMs” comes to mind.

This. All of this. It’s why--probably--Jack, Ev, Biz*, and the lot are having trouble pinning down how to approach all of this. User-generated reporting of tweets/accounts is unreliable and can be gamed (e.g. Rose McGowan’s suspension). Different ardent political factions find different things offensive, so even defining what to block can be difficult. Illegal stuff is obvious. Explicit content is an extension of that. Obscenity is illegal, but good luck categorizing it, since even the Supreme Court can’t.
So, the beauty of Twitter is its downfall. The fact that different groups can congregate means that each specific group will find different things objectionable. This is how Twitter’s current model works alright-ish: you 1. don’t follow bad people, 2. mute/block the ones who invade your space by @ing you, and 3. report offensive and illegal things so that you help make it better for others. Extending this further could help. Here is my (still imperfect) idea:

  • Hide any purportedly offensive or explicit material by default.
  • Flag accounts as offensive if reported by X number of users with some process for review that can be triggered by the owner.
  • Flag accounts as offensive even if the number of users reporting it is lower than the threshold after a review of the account--with an appeal process.
  • If the offensive content is purported to be illegal in nature (death threats, rape threats, child pornography, etc.), have lower thresholds and quicker reviews. Obviously, report said users to the proper authorities as well.
  • Introduce a way to limit commenting/replies like on blogging platforms. Perhaps an option that falls between a locked account and an open one. Maybe only allow comments and retweets from people you follow. Similar to how the settings for notifications work. Tumblr has this for replies on posts with a nice feature that allows an in-between for people you don’t follow if they have been following you for X amount of time.
  • Archive banned accounts for both evidence and possible restoration if the accused is vindicated. This would allow two things: assuming the guilt of the accused to protect possible victims and a path back if they are found innocent.
  • Allow some reporting back to users that report accounts as to the outcome of their action.
  • Have some gradation to your safe modes and to your flags on accounts. Default to safest and allow users to choose how “safe” they want their experience to be.
  • Allow DMs to be reported like anything else.
  • Send notifications to users as their content is reported, as a kind of warning, but make sure to archive their accounts in case they attempt to delete illegal content.
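The reporting rules above can be sketched in code. This is a minimal, hypothetical illustration of the threshold-and-review logic; every name and number here is invented for the sake of the example, not anything Twitter actually implements.

```python
# Hypothetical sketch of the flagging rules proposed above.
# Thresholds are placeholders, not real values.
REPORT_THRESHOLD = 50   # ordinary offensive-content reports needed to flag
ILLEGAL_THRESHOLD = 5   # much lower bar for purportedly illegal content

def evaluate_account(reports, illegal_reports, review_flagged=False):
    """Return the action to take on an account.

    reports         -- users reporting the account as offensive
    illegal_reports -- reports alleging illegal content (threats, etc.)
    review_flagged  -- True if a human review flagged the account
                       despite a sub-threshold report count
    """
    if illegal_reports >= ILLEGAL_THRESHOLD:
        # Fast-track: archive for evidence, hide, notify authorities.
        return "archive_hide_and_report_to_authorities"
    if reports >= REPORT_THRESHOLD or review_flagged:
        # Hide by default; the owner can trigger a review / appeal.
        return "flag_as_offensive_pending_appeal"
    if reports > 0:
        # Warn the owner, but archive in case they delete content.
        return "notify_owner_and_archive"
    return "no_action"

print(evaluate_account(reports=60, illegal_reports=0))
print(evaluate_account(reports=2, illegal_reports=7))
```

The key design point is the asymmetry: purportedly illegal content short-circuits the ordinary threshold entirely, while sub-threshold reports still leave a paper trail (the archive) without hiding anything.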

There are still loopholes. If you proactively block someone, you can defame them and the victim will never know. I can’t think of a work-around for this one. And, when things are policed too much, you end up like YouTube when all the LGBTQ accounts got their ads yanked.

In the end, if an open-forum feel is your thing, maybe diaspora instances are for you? On the other hand, I hear Mastodon allows each grouping of people to make their own rules. Also, Ello keeps sending me emails, but I haven’t logged in in a year. Are they ok? Are they well? Have they eaten recently?

*I have no idea who does what, but those are the founders.