Dating Just Got A Serious Security Upgrade
A group of hacker activists is developing a suite of tools to strike back at the culture of silence and isolation that surrounds harassment, coercion and assault. There are currently seven such tools, each focused primarily on a dating site or social network (which, face it, most of us use as dating sites), though some target apps. Each tool's capabilities differ depending on the website or app it's designed for.
“Previous online attempts to address rape culture have done so by drawing on demonstrably-effective offline tactics,” writes one of the developers. “We see conversations about awareness-raising, education, giving the silenced a voice, protecting survivors from re-victimization and triggering situations, etc. that echo the efforts of hardworking feminist and sexual violence-prevention advocacy groups in the analog world. These tools for fighting sexualized violence have been developed over decades of trial and error. They still make a powerful impact when translated to a new medium. However, they’re not enough. […] What’s exciting about the Internet isn’t that it provides a platform for us to fight rape culture in the same ways we’re used to only bigger and faster and with more animated gifs. What’s important is that the Internet gives us opportunities to address sexual violence in ways we haven’t even imagined yet.”
OKCupid’s Match Questions As An Early-Warning System
On the dating site OKCupid, where users answer questions to enable the site to match them with other users, the tool (called Predator Alert Tool for OKCupid, or PAT-OKC) scans users' answers for responses that might be concerning. Because OKCupid has made user answers increasingly public (defaulting them to entirely public in 2010), PAT-OKC streamlines the process through which users get to know one another on the dating site, enabling people to take a closer look at “concerning” answers and determine for themselves whether those answers are, in fact, something they should be worried about.
Some of the questions that come baked into the tool as potential red flags are obvious, for example: “Have you ever been in a situation where you tried, but for various reasons did not succeed, in having sexual intercourse with an adult by using or threatening to use physical force (twisting their arm, holding them down, etc.) if they did not cooperate?” This question was famously used in the 2002 study Repeat Rape and Multiple Offending Among Undetected Rapists by David Lisak and Paul M. Miller, which found that perpetrators are more likely to admit to a crime when it is described behaviorally rather than labeled with the common terminology for it (such as “attempted rape,” in this case). These findings were mirrored in a 2009 study, Reports of Rape Reperpetration by Newly Enlisted Male Navy Personnel, by Stephanie K. McWhorter et al.
Of course, not all questions that seem concerning to the developers will be concerning to all users of PAT-OKC. For example, one of the questions the tool highlights as a red flag is “Have you ever choked someone who you were in some kind of intimate relationship with (e.g., you wrapped your hands or some object around their throat)?” Because the question is ambiguous in terms of consent (some people, after all, are fine with choking provided it is previously discussed, negotiated and consented to by all involved), it’s important for PAT-OKC users to take a look at all answers and make their own determination about each potential partner they encounter on the site.
And because the developers of these tools are aware that their ideas of what’s concerning are limited by their own experiences, they enable each PAT-OKC user to select their own “red flag” questions.
In a sense, PAT-OKC is an early-warning system that users can modify to suit their needs, streamlining the existing discovery dance enabled by OKCupid into a simple process that shifts the dating site model from flashy advertising slogans to actual product specs.
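To make the mechanics concrete, here is a minimal sketch of the kind of matching such a tool might do: a user-maintained list of flagged questions and answers is checked against a profile's public Match Question answers. The data shapes and question ID below are hypothetical illustrations, not PAT-OKC's actual code.

```typescript
// Hypothetical data shapes for illustration; PAT-OKC's internals may differ.
interface Answer {
  questionId: number;
  questionText: string;
  answerText: string;
}

// The questions a user has chosen to treat as red flags, keyed by question ID,
// with the set of answers they personally consider concerning.
type RedFlagList = Map<number, Set<string>>;

function findConcerningAnswers(answers: Answer[], redFlags: RedFlagList): Answer[] {
  return answers.filter((a) => {
    const flagged = redFlags.get(a.questionId);
    return flagged !== undefined && flagged.has(a.answerText);
  });
}

// Example: flag a single (hypothetical) question and treat "Yes" as concerning.
const myRedFlags: RedFlagList = new Map([[1234, new Set(["Yes"])]]);
const publicAnswers: Answer[] = [
  { questionId: 1234, questionText: "(red-flagged question text)", answerText: "Yes" },
];
console.log(findConcerningAnswers(publicAnswers, myRedFlags));
```

The point worth noticing is that the red-flag list belongs to the user; nothing in the sketch decides on their behalf what counts as concerning.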
When asked whether this tool is dangerous because it might “teach” predators to lie, one of the developers pointed out that it isn’t necessarily a bad thing to signal to people that certain behaviors are so egregiously bad that people who engage in them should consider lying about them instead of blatantly admitting to them as though they’re merely sharing their favorite food.
“Rape culture is a culture where it’s actually okay not to lie about believing that someone owes you sex if you buy them dinner, for example. Part of how this culture sustains itself is by sending the message that it’s perfectly reasonable to believe that’s an ethical way to behave, because there are relatively few if any social consequences for behaving in that gross way,” writes one of the developers. “Predator Alert Tool for OKCupid makes it much harder to behave that way without social repercussions on OKCupid.com, thereby forcing fewer people who actually behave that way to admit it in public and, over the long haul, reducing the amount of cultural messages young people receive that indicate to them that it’s okay to behave that way.”
PAT-OKC is a tool to help people, but it’s also a tool to alter culture.
One important thing to note here is that because PAT-OKC uses your profile to access other people's Match Question answers, those users will see that you have looked at their profile (just as they would if you did this manually). To get around this, the developers suggest using OKCupid's anonymous browsing option: you'll still have a record of having looked at these profiles (they'll show up on your “recently visited” list no matter what), but your visits won't be visible to the people you looked at. These and other helpful hints and answers are given in the tool's Read Me section. A complete installation guide can be found here.
Twitter Lists As A Tool For Sanity
Predator Alert Tool for Twitter, PAT-Twitter, is the first fully unhosted peer-to-peer browser add-on in the PAT toolkit. This tool basically supercharges Twitter’s official lists feature, which enables users to organize the feeds of people they follow (or simply wish to subscribe to without following). With PAT-Twitter, you’re given three additional listing options: a “blocklist,” “warnlists,” and a “Support Circle” list.
The blocklist shows all the people you have blocked on Twitter in one place. The warnlists are for you to populate yourself — either in one sitting or over time. When you add someone to a warnlist, you can write a note detailing why you chose to do so (this note is private to you unless you decide to share your list, but more on that later). Even people you have blocked or who have blocked you can be added to a warnlist. Unlike the official Twitter lists, users cannot remove themselves from warnlists. They don't even know they have been placed on one — and neither does Twitter. Your warnlists live in your browser — that's what the term “unhosted” means.
If a Twitter account has been added to a warnlist (provided it has not also been blocked) and one of its tweets shows up in your stream, the tweet will appear with a red box around it, as a warning. That red box can be clicked to review the details of why the user was warnlisted — useful not simply because sometimes we need to jog our memories, but because with PAT-Twitter, you can choose to share your warnlists with other users.
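For readers curious what “unhosted” looks like in practice, here is a rough sketch: warnlist entries live only in the browser's local storage, and tweets from warnlisted accounts simply get a red outline drawn around them. The storage key and tweet markup below are hypothetical, not PAT-Twitter's actual code.

```typescript
// Sketch of an "unhosted" warnlist: the data never leaves the browser.
// The storage key and the [data-tweet-author] selector are hypothetical.
interface WarnlistEntry {
  handle: string;   // e.g. "@example"
  note: string;     // private note explaining why the account was listed
  addedAt: string;  // ISO timestamp
}

const STORAGE_KEY = "pat-twitter-warnlist";

function loadWarnlist(): WarnlistEntry[] {
  return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "[]");
}

function addToWarnlist(handle: string, note: string): void {
  const list = loadWarnlist();
  list.push({ handle, note, addedAt: new Date().toISOString() });
  localStorage.setItem(STORAGE_KEY, JSON.stringify(list));
}

// Draw a red box around tweets whose author is on the warnlist.
function highlightWarnlistedTweets(): void {
  const warnlisted = new Set(loadWarnlist().map((e) => e.handle));
  document.querySelectorAll<HTMLElement>("[data-tweet-author]").forEach((el) => {
    if (warnlisted.has(el.dataset.tweetAuthor ?? "")) {
      el.style.outline = "2px solid red";
      el.title = "This account is on one of your warnlists";
    }
  });
}
```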
To share a warnlist, you need to run something called a “facilitator” — a simple server that makes a list which otherwise lives only in your browser available to others. This process requires a little tech-savvy, so to make it easier, the developers have released a plugin for sites that run WordPress, which essentially turns your blog into a facilitator in a few clicks.
If you password-protect your facilitator, only those you've given access can download your warnlists. Of course, you can also make a publicly accessible facilitator, which is a bit like creating your own version of the Block Bot — only instead of blocking people without giving a reason, you put them on a warnlist, along with your comments on why, and leave it to others to decide for themselves whether to block.
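A facilitator can be almost trivially small. The sketch below uses Node's built-in HTTP server and a hypothetical query-string password scheme to show the general shape: serve the warnlist file as JSON and refuse requests that lack the shared secret. The actual facilitator and the WordPress plugin have their own formats and authentication, so treat this strictly as an illustration.

```typescript
// Minimal facilitator sketch: serve a warnlist as JSON, optionally behind a
// shared secret. Run with Node; the password scheme here is a hypothetical
// stand-in for whatever the real facilitator does.
import { createServer } from "node:http";
import { readFileSync } from "node:fs";

const PASSWORD = process.env.FACILITATOR_PASSWORD; // leave unset for a public facilitator

const server = createServer((req, res) => {
  const url = new URL(req.url ?? "/", `http://${req.headers.host}`);
  if (url.pathname !== "/warnlist.json") {
    res.writeHead(404);
    res.end("Not found");
    return;
  }
  if (PASSWORD && url.searchParams.get("key") !== PASSWORD) {
    res.writeHead(403);
    res.end("Forbidden");
    return;
  }
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(readFileSync("warnlist.json")); // the exported warnlist, shared as-is
});

server.listen(8080, () => console.log("Facilitator listening on port 8080"));
```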
The neat thing is that by creating your own warnlists and subscribing to the lists of people you trust, you ensure that the lists you use cover the people who most affect you, and whose tweets truly merit a warning by your own or your community's standards of appropriate behavior. Through a facilitator, PAT-Twitter puts the power in your hands to make Twitter a better experience for people in your circles.
“What constitutes ‘predatory’ is entirely up to you; the software makes no claim as to what behavior hurts you,” developers write regarding warnlists. “Predator Alert Tool for Twitter can be used to, for example, flag trolls, warn your followers about rapists, or to expose cops and snitches who use social media.”
Lastly, PAT-Twitter enables you to create a list of people you trust: the Support Circle. This nod to the excellent Circle of 6 app lets you send everyone on that list a quick direct message asking for backup in one click — a useful thing in case you ever find yourself the target of harassment or abuse.
You can read more about PAT-Twitter here.
Facebook As A De-Siloing Measure
The Predator Alert Tool for Facebook, called PAT-FB, was specifically designed to provide support for survivors of sexual assault and rape. It allows users to share information about people in their networks who may be dangerous, read warnings about people made by others, and connect with other survivors.
PAT-FB consists of two independent pieces of software. The first is a Facebook app that you install via the social network itself, which lets you share stories of negative experiences with other users on your network. You can share these stories openly, make them visible only to people in your own network, or post them so that only other people who have issued reports about the same person can see yours.
The other half of this tool is a viewer that is installed on your browser. This viewer puts a red box around any posts that appear on your Newsfeed by people who have received negative reports, as a warning. You can click through the red box to the person’s profile to read the negative reports.
These two things function independently, so you do not need one to use the other. If installing two things instead of one seems like a lot of work, think about it this way: the Facebook app, the part of the tool that lets you create reports with privacy settings, needs access to your Facebook network in order to work. Like any Facebook app, when you install this part of PAT-FB, you are giving the developers some access to your Facebook account. Maybe you do this with Candy Crush Saga or Words With Friends and don't think it's a big deal — but it is. It opens the door to administrative abuse, and when you are creating tools to help people, anything that hands over people's power is an incredibly big deal, even if that power is going to developers who want to do good things.
This is why the viewer is separate from the app. If you don’t feel comfortable giving the tool’s developers access to your account, that is absolutely your choice — you can still use the viewer to see if anyone has been publicly flagged in your network and access those reports. The only thing you lose out on by not having both aspects of the tool in place is the ability to see friends-only or other visibility-restricted reports.
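To illustrate the three visibility levels described above (public, visible to your network, and visible only to other people who have reported the same person), here is a sketch using hypothetical data shapes. PAT-FB itself may model these rules quite differently.

```typescript
// Hypothetical model of the three report-visibility levels described above;
// the real PAT-FB may represent these rules differently.
type Visibility = "public" | "network-only" | "co-reporters-only";

interface Report {
  reporterId: string;
  subjectId: string;      // the person the report is about
  visibility: Visibility;
  body: string;
}

function canView(
  report: Report,
  viewerId: string,
  reporterNetwork: Set<string>, // people in the reporter's network
  allReports: Report[]
): boolean {
  switch (report.visibility) {
    case "public":
      return true;
    case "network-only":
      return reporterNetwork.has(viewerId);
    case "co-reporters-only":
      // visible only to people who have also reported the same person
      return allReports.some(
        (r) => r.subjectId === report.subjectId && r.reporterId === viewerId
      );
    default:
      return false; // unknown level: err on the side of hiding the report
  }
}
```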
As a whole, the developers of these tools are sensitive to the privacy concerns of survivors and go out of their way to be transparent about who can access what. This isn't an issue for tools like PAT-OKC or PAT-Twitter, but it does come into play here on Facebook. It is a serious issue for the tool's developers, which is why they've released all their tools into the public domain and encourage users to copy the PAT-FB software entirely, install it for themselves and administer it on their own — or have someone they trust do it for them. They also have an open document discussing how they might decentralize control of the server to diffuse power and better distribute trust.
With that out of the way, let’s discuss some of the other concerns that come to mind when someone tells us that there is a text field on the internet specifically designed for people to share the bad experiences they’ve lived at the hands of other people.
To begin, the concern that someone may say or write bad things about us is completely legitimate. It indicates that we are social animals and that we are aware that our actions have consequences, and that by extension, reports about our actions have consequences. This is how social animals work. If you are concerned, all your systems are working.
Now, how do we know this? We know this because our meatspace social ties serve this function already. Researchers say that gossiping among humans is not unlike grooming behavior among other primates, strengthening the bonds of those who participate in the exchange. But beyond bonding, this information exchange has another function — it keeps us apprised of the people in our community and lets us know whether there is anything we need to be aware of, from seemingly innocuous social infractions to serious concerns.
We share information with our circles on a regular basis. Most of us like to think that we're very discerning and never gossip, but in truth, we're sharing information about other people with one another all the time. For example, in the last office where I worked, I knew that the head of human resources was completely oblivious to the fact that sexual harassment policy applies to men as much as it does to women. I didn't know this from first-hand experience: one friend in accounting and another in tech support had, on separate occasions, told me the clever excuses they had learned to deploy to evade her requests for shoulder rubs. When I went to management about an unspecified workplace issue, my boss' response was “Is this about the shoulder rubs?” I asked him if anyone had ever filed a formal complaint about that. He said no. The creepy, inappropriate shoulder rubs were just common knowledge.
This happens all the time. I can’t think of a single circle of friends or conference I have attended where, after being initiated into the in-group, I wasn’t given access to information about other people. Some of it was pointless, some of it was malicious, and some of it was actually very useful. It’s the way we are.
If you have ever seen a carnivore seek a meal among a herd, you’ll quickly notice that it never rushes into its center. When we see a herd’s males fighting on the savanna, we often imagine that they’re fighting for access to females. That’s not wrong, but they’re also fighting for the space. A position in the center of the herd is a position of safety. It is where females want to give birth. It is where males want to be. It means surviving — surviving to mate, surviving to rear young. The center is safe. It’s the unfortunate ones in the periphery that will be picked off, one by one. In many herds, animals can be pushed to the periphery as a form of “punishment,” too.
Humans are not herd animals, but this is also a dynamic of social animals, and you can see it at play among us. We, too, cast members of our groups out. We, too, realize the importance of keeping ourselves in the central safety zones. Unfortunately, as we become more and more culturally nomadic, we lose access to many of the ties that offer us information about our immediate surroundings. In some ways, this has been liberating. You don't need to be a transgender rights advocate or a fan of Edith Wharton to recall how destructive a community's norms can be, or how any violation of them could leave you in the margins with nowhere to go and no access to help or to information about potential risks. But in another and no less real sense, it has left every last one of us exposed to those very same risks, because we're simply not part of every in-group all the time.
Not being part of the in-group doesn't just keep information out of reach; it also makes us — should we find the courage to share a negative experience — an attacker on the in-group's very structure. At the heart of most attempts to silence victims of any type of misbehavior, you will encounter a circling of the wagons, even among people who are aware that the member they're protecting has a tendency toward bad behavior. The danger is known to them, and they, as part of the in-group, have found a position of relative safety within it, which lets them stop seeing the behavior as an imminent threat: “yes, he's very grabby, but he's done a lot for the community,” or, “yes, she'll never let you break up with her once you start dating, but she's such a loyal friend.”
This is hugely problematic because it enables bad behavior to persist. While the ones who end up paying the price are most often those on the periphery, no one in the herd itself is truly immune, either.
Which brings us back to PAT-FB, a tool that centralizes the capabilities of any text field on the internet. Disclosures about bad behavior, when you think about it, already exist all over the internet. We take and are taken to task on just about every existing network. Some networks are diligent about censoring anyone who shares their experiences with too much identifiable information — Tumblr and FetLife are notorious examples. But a good deal more let both real accounts by survivors and malicious allegations meant to harass and bully live on. What makes PAT-FB different is that it puts this information together at the “point of need,” that is, right on the site a person is already using.
Despite what the name may suggest, PAT-FB can't actually tell you who is a bad human and who is 100 percent safe. In my office example, I spoke about a woman who never behaved in any questionable way with me. Life, you'll find, is a lot more like Game of Thrones than it is like Harry Potter, where everyone is either wholly good or wholly evil.
In reality, there is no good and evil binary and PAT-FB isn’t a sorting hat that places people into the Good House and the Evil House. It’s merely a tool to let you make your experiences accessible to people who may not be a part of the group among whom certain behaviors are common knowledge. As with PAT-OKC, each user must make a determination about whether the information being shared is relevant to them. And we know how to do this already, too — one of the most rewarding relationships I’ve had was with a man I was warned was a liar and a cheat. We discussed it after a few dates and I decided that I trusted him to uphold the relationship boundaries we’d negotiated. We had a great go, and broke up for unrelated reasons. No regrets.
“None of the Predator Alert Tools can tell you who is or is not a predator — that’s not a computer’s job, it’s a specific human’s job,” writes one of the developers. “More specifically, it’s everyone’s job to do for themselves. PAT-OKC is designed to make it easier for users to interrogate whether a specific person is dangerous to them. It’s not designed to do any thinking instead of them; whether the word is ‘abuse,’ ‘predator,’ or anything else, there is no ethical alternative to using your own judgement.” The same is true of PAT-FB.
The developers are well aware that malicious accusations can happen on this tool, but after interrogating themselves, they determined that it would not be just to deny survivors a tool simply because someone might use it to make allegations intended to tarnish someone else's reputation.
As one of the developers reiterated: “the solution to speech you disagree with is not censorship, but more speech.” They encourage people who see accusations about themselves to address them in their profiles and their conversations — and to question themselves and their behaviors.
“We are all complicit in abusive systems,” acknowledges one of the developers. “We are all a predator to somebody.”
And what about bullying? Most of us who spend time online have seen trolls descend on a post, filling it with things so unpleasant that even seasoned veterans of flame wars waste no time pulling out the ban-hammer.
“The problem with ‘building sexual violence prevention tools into every social network on the internet’ is that dominance (or even authorship) of those tools will be a prime target for some abusers,” warns Lisa Millbank.
The developers address this concern in the FAQ:
Every technology has the potential to be used maliciously. By providing this tool to help connect survivors with each other, we may simultaneously be making it easier for some people to publicize harmful or insulting statements about others. The tool’s ability to be misused in this way is not a good enough reason to keep it out of the hands of the people who can be helped or kept safe by it.
Where conflicts of interest arise, PAT-Facebook is grounded in an ethic of prioritizing the needs of the most vulnerable. This means we prioritize the needs of survivors over the needs of people who might be falsely accused. It also means that when there is a conflict of interest between survivors, we will prioritize the needs of youth over the needs of adults, the needs of private individuals over the needs of public figures, and the needs of survivors who currently lack any support over the needs of survivors who have established support systems.
Right now, although this tool has the potential to do some harm, more survivors are harmed more often by the lack of tools like this one than they are by this tool’s existence.
But perhaps the most controversial aspect of the tool is the fact that submissions are not moderated. That means that harassment can occur, which we’ve discussed, and it also means that people may fill it with so much useless information that it becomes too difficult to wade through it all. This sort of intentional disruption is usually referred to as “griefing,” and it’s already happened with another one of the tools — one deployed for the kinky social network FetLife.
But as the attorney and feminist writer Thomas Macaulay Millar pointed out, “You might expect that people would spam the database to make it useless. There are many obvious griefing entries, just junk filled in with silly descriptions. But so what? In fact, sometimes, the patterns of those entries tell a story themselves. Someone named a British kinkster, and the response over the next two days was a flood of obvious griefer traffic, many of the reports made by people who identified themselves and were in fact friends of the guy identified as having violated consent. This is the community response to survivors’ stories, captured in real time, the support for the accused and pressure to shut down disclosure. That swarm had one other nugget in it, though: another report that the accused had violated someone’s consent.”
The developers agree with him. One of them tracked spam reports on this particular PAT, and found an incredible wealth of information.
“Bluntly, we can now observe, digitized in real time, what ‘a community closing ranks around an alleged abuser’ looks like,” they write in a post that explains the data analysis of spam reports. “And more importantly, we can see how and through whom that behavior spreads. […] If this hypothesis can be proven, it may provide a far more reliable red flag for identifying social groups where consent violations are likely to be covered up rather than addressed constructively.”
But there are other reasons for this choice, and these have nothing to do with data analysis.
“We have to work within the constraints that we have, one of which is that it is not possible for this particular tool to be moderated,” writes one of the developers. “Moderation requires time, commitment, expertise, cognitive and emotional energy, and a certain amount of willingness to make arbitrary and relatively uninformed decisions about other peoples’ lives. Moderating spaces where people post extremely highly contested, emotionally and politically charged content requires all of these things to an even greater degree.”
Moderation is risky. It gives someone power. Who should hold that power? In most networks, we don’t even know.
Anyone who has experienced harassment on any social network knows what it feels like to report abuse only to hear that some unnamed team somewhere in the world reviewed the submitted post, tweet, or account and found it not in violation of their policies. It is the most soul-crushing feeling in the world. Sometimes we know these decisions are wrong. We frequently question why the system is so broken. A system that takes down a picture of a new mom breastfeeding but leaves up communities that advocate sexual coercion and rape is a broken system.
But this does not render these networks completely useless.
In a recent conversation with someone about reporting features, I mentioned how Google Plus had a feedback feature that enables a user to take a screenshot of an issue to submit along with their trouble ticket. “Wouldn’t it be great if people could do this when they reported an abusive comment, and the screenshot — timestamped — went not only to Google, but also to a Drive folder in the event that the user needed to make a police report? As it is, when you report a comment, Google Plus essentially deletes it from your profile, meaning that getting it back would probably require a subpoena, or at least a whole lot of legwork.”
The person I was speaking with grew concerned: “But think of how people could abuse that!”
Yes. It can be abused. Everything can be abused. People even use the favoriting feature on Twitter as a form of intimidation. The favoriting feature. Nothing is sacred. That’s the thing about tools — it matters who holds them. A screwdriver can be used to build, but it can also be used to maim. Does that mean everyone should be denied screwdrivers?
Moderation is a compelling feature, and in an ideal world, it is a feature to believe in. But it is not, by nature, neutral the way a tool is. It is like the user of a tool: capable of doing good or causing harm — often both at the same time. To illustrate, let me share the incident that recently drove this home for me: this year's feminist speculative fiction conference, WisCon.
As with most in-groups, it was “common knowledge” in the speculative fiction community that editor and then-Tor employee Jim Frenkel frequently engaged in harassing behavior toward women. I've mentioned before how information tends to stay within the in-group, but here I want to touch on how certain spaces limit the flow of information, with a pertinent quote from the novelist Cherie Priest:
“When we don’t report, when we don’t come forward in an Official Capacity, this is what we do instead. We form social antibodies. We inoculate our friends, the newer women who aren’t used to this shit yet. It feels like the only thing we can do — the only thing we can really do, since a Formal Report might be your word against his. A Formal Report might not believe us, and might even come back to bite us one day for all we know. It’s a small industry. People talk. We don’t want to look like a ‘problem.'”
If there were ever any question why developers decided to allow people to share negative experiences anonymously, there it is. But now, let me tell you a story: In 2013, at a WisCon event, a woman finally made a formal report to the conference’s Safety Committee, and wrote about it, prompting a series of other posts that revealed the extent of Frenkel’s behavior.
Despite that first formal report and another that followed, and despite receiving further information about the issue the following year, WisCon not only allowed Frenkel to return in 2014 but welcomed his participation as a conference volunteer. A few weeks after the conference, one of the women who'd filed a report in 2013 posted that WisCon had informed her that they had lost her report of harassment. The organizer who contacted her also told her that Frenkel had attended because her co-complainant had requested that Frenkel not be banned. Later, it would come out that this was a lie.
Days after that post, the first woman to file a report announced that WisCon had admitted that they had lost her harassment report as well. In response, the conference organizers explained that these complaints had not been properly maintained by former conference organizers. While possible, that didn’t explain why they had not acted on the information provided to them directly weeks before WisCon 2014.
The internet, as you know, is already full of tools. Blogs and social media opened up the backchannel to members of the community far beyond the reach of simple in-group dynamics. The resulting outcry forced WisCon to take action. Unfortunately for attendees, that action was the conference equivalent of sending Frenkel to the corner for two minutes.
The committee decided to ban Frenkel “provisionally” for four years — by “provisionally,” they meant that he would be allowed to return if he could present “substantive, grounded evidence of behavioral and attitude improvement between the end of WisCon 39 in 2015 and the end of the four-year provisional period.” In other words, he could return either by demonstrating improvement or simply by waiting out the four years. Additionally, they extended him the right to appeal their decision to WisCon's governing body.
Not long after, it came out that the four-year period imposed on Frenkel's “provisional” ban was inspired by a statement Frenkel made to conference organizers: that he was under a gag order from his employer regarding the harassment complaints, which effectively prevented him from apologizing for his behavior. The committee never thought to fact-check that statement. It wasn't true.
Far from putting the situation to rest, the ensuing avalanche of tweets, blog posts and comments broke well past the confines of spec-fic circles. And it enabled a lot of the pieces to come together — as it turns out, the committee wasn't crazy. They were responding to a situation about which they had very little information. Not all of them were aware of what the in-group knew about Frenkel, and they had in their midst someone who actively worked to prevent them from accessing this information: the person appointed Member Advocate was suppressing reports of harassment.
The committee was lazy and sloppy as well: it solicited input from the person named as a harasser, but not from the women who'd come forward. Nor did it check with his employer about the nondisclosure agreement Frenkel claimed to be bound by, even though that agreement became a critical aspect of the decision to ban him “provisionally” for four years.
“Fandom has traditionally depended on ‘back channels’ of information to warn others about problematic and potentially dangerous people in the community, but the thing about those ‘back channels’ is that they only protect you if you’re connected to people with experience in those communities who are willing to talk,” wrote Michi Trota in response to what happened. “People with little ‘name cred’ or connections, people who enter WisCon alone and without anyone they know are vulnerable because there’s no guarantee anyone will take them aside and say, ‘Hey, you’ll have a good time here, but keep an eye out for So-and-So and don’t be alone with them.’ My friends and I were extremely lucky because while we were WisCon first-timers, we were friends with con veterans who looked out for us, showed us the ropes and checked in regularly to make sure we were ok. Not everyone is so fortunate […] It also assumes that the people who are ‘in the know’ are also in positions of power and will be able to act as a shield for those who don’t. But that isn’t going to actually work if those in charge don’t actually know that vital information. Several members of the WisCon Subcommittee knew that Frenkel had an unsavory reputation but did not know that he had a history of harassment and ended up being played by a skilled manipulator. Whether through naivete, willful ignorance or casual dismissal of evidence, they failed the community they were meant to protect.”
Frenkel was eventually banned permanently from WisCon. The trust lost as a result of the episode will not be so easy to regain.
Moderation is hard. If there is an ounce of realism in you, you know that no one is completely unbiased. No matter how well intended, no one, no one, is above reproach. Biases exist, and so do episodes of bad behavior. This is what developers of the PATs mean when they say there is no abuse binary. And this is one of the central reasons why PAT-FB is unmoderated, why developers are so adamant about decentralizing control of the server for PAT-FB, and why they lobby for people to set up the tool on their own servers so that they can determine who gets to have administrative control, and why they have released all tools as open source software.
Because at the end of the day, even developers with the best intentions in the fight against rape culture can be somebody’s abuser.
“Supporting survivors means supporting all the survivors,” writes one of the developers. “Even the ones who make us uncomfortable or who we dislike.”
Tailoring To Users
“I don’t think any of these tools, or even all of them together, will put the nail in the coffin of rape culture,” writes one of the developers. “Like other kinds of abuse, rape culture adapts to new environments quickly. Activists need to stay on our games in order to keep exposing new forms of it as they appear. We need to keep experimenting, trying new things, and being creative with whatever resources we have available. What I find most powerful about these tools is the ways each seems tailored to the specific culture from which it emerged.”
She elaborates:
Predditors [the Tumblr that outed posters of the subreddit r/Creepshots] addresses rape culture on Reddit by retaliating against its perpetrators using technological savvy, counter-rhetoric about free speech and privacy, and a “troll the trolls” sort of strategy all suited to Reddit’s particular cultural sensibility. FAADE [now PAT-Fetlife], on the other hand, capitalizes on a mentality strongly espoused by FetLife users that the BDSM community is like a “small town” in which everyone is connected to everyone else by kinship ties. BDSMers often rely on personal references and a player’s public reputation to assess their safety, thus a database allowing FetLife profiles (the site of a player’s public reputation online) to be tagged with negative references from community members has a powerful impact on the sub-cultural consciousness.
The Predator Alert Tool for OKCupid (PAT-OKC) is particularly interesting because it interacts with a micro-culture that, unlike FetLife or Tumblr or even Reddit, doesn’t experience itself as a community. Certainly, there are some very involved OKCupid users who have personal relationships with each other, are invested in the success of the site, and have more of a group mentality about the user base. But I think, for most people, OKCupid is a place to meet and interact with strangers. Issues like community reputation are less salient. Instead, OKCupid culture revolves around sharing lots of personal information about oneself with strangers and filtering efficiently through personal information about free-floating others. This creates a very different set of rape culture issues than those addressed by “name and shame” strategies. PAT-OKC, rather than de-siloing information about potentially predatory users shared by others, begins by simply reporting what those users have told us about themselves.
In addition to these three tools, developers have also created — as mentioned previously — a PAT for the kinky social network FetLife, one for the dating site ChristianMingle, one for the hookup-facilitating app DOWN (formerly known as Bang With Friends), and one for the rate-your-ex app Lulu. Word is that they’re working on tools for JDate, Match and Tumblr next.
PAT-Fetlife runs on a transparent database meant to give users access to reports of other users' bad experiences. It's somewhat like the Facebook PAT in terms of mission, though in terms of privacy, PAT-Fetlife is essentially just a big spreadsheet that holds user-submitted information without requiring user credentials. Users can submit information anonymously, and the database is publicly visible to anyone who wants to look at it. There is nothing the administrator can see that a user cannot, and there is nothing on the “spreadsheet” other than what users submit to it. The part of the tool that highlights “flagged” profiles while a user is browsing FetLife is a browser add-on that runs locally — it doesn't give the administrator access to the user's browser.
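The “big spreadsheet” model is simple enough to sketch in a few lines. The field names below are hypothetical, but the two properties the developers emphasize are visible in the structure: submissions carry no credentials, and the administrator sees exactly the same rows as everyone else.

```typescript
// The "big spreadsheet" model: anonymous rows in, identical rows out for
// everyone. Field names are hypothetical; the real database has its own schema.
interface FlagReport {
  flaggedProfileUrl: string; // the FetLife profile being flagged
  description: string;       // the reporter's account of what happened
  submittedAt: string;       // ISO timestamp
}

const reports: FlagReport[] = [];

// Anyone can submit; no credentials are collected, so there are none to leak.
function submitReport(flaggedProfileUrl: string, description: string): void {
  reports.push({ flaggedProfileUrl, description, submittedAt: new Date().toISOString() });
}

// Administrator or not, every viewer sees exactly the same data.
function publicView(): FlagReport[] {
  return reports;
}
```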
PAT-Lulu and PAT-BangWithFriends use the PAT-FB engine, which means they share PAT-FB's privacy model and privacy issues. PAT-ChristianMingle and the upcoming PAT-JDate and PAT-Match take their framework from PAT-Fetlife and share its privacy model. Basically, there are four core PATs — Twitter, Facebook, FetLife and OKCupid — and the other tools in this suite apply their frameworks.
Earlier this year, developers connected several of the PATs to CreepShield.com. Now, when using PAT-OKC, PAT-ChristianMingle, or PAT-FetLife, every profile picture a user encounters is scanned using facial-recognition software from CreepShield and compared with mugshots from the Sex Offender Registry to surface potential matches. Many of the developers involved in this project agree that the Sex Offender Registry has serious issues, but they don't think those issues mean this additional information should be withheld from the users of their tools.
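The general flow is easy to sketch. The endpoint, request format, and response shape below are invented stand-ins rather than CreepShield's actual API, so read this only as a picture of the idea: send each profile photo to a face-search service and surface any potential registry matches for the user to weigh themselves.

```typescript
// Illustrative flow only: the endpoint and response shape are invented
// stand-ins, not CreepShield's actual API.
interface FaceMatch {
  name: string;        // name on the registry record
  similarity: number;  // 0-1 score from the face-recognition service
  recordUrl: string;   // link to the underlying registry entry
}

async function checkProfilePhoto(photoUrl: string): Promise<FaceMatch[]> {
  const response = await fetch("https://example-face-search.invalid/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ imageUrl: photoUrl }),
  });
  if (!response.ok) return []; // fail quietly: show no matches rather than break the page
  return (await response.json()) as FaceMatch[];
}

// The add-on would call this for each profile photo it encounters and leave it
// to the user to decide what, if anything, a potential match means.
async function annotateProfile(photoUrl: string): Promise<void> {
  const matches = await checkProfilePhoto(photoUrl);
  if (matches.length > 0) {
    console.log(`Potential registry matches for ${photoUrl}:`, matches);
  }
}
```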
“There is no good excuse for not building sexual violence prevention tools into every social network on the Internet,” writes one of the PAT developers. “The Internet industry is in a unique position to effect arguably the most sweeping resistance to systemic sexual violence in history. Moreover, it wouldn’t even be technologically complex, or expensive. And we’ve already proved it’s possible.”
The total budget for all seven tools remains zero. The ten or so people participating in development, beta-testing, and documentation do so on a volunteer basis, and they welcome feedback and help from others. As mentioned, every tool is released into the public domain, meaning that anyone can take the code and alter it to suit their needs, or build on it to cover another corner of the internet.
Header image by the United States Navy (Flickr, CC BY-ND 2.0).