Archive for the ‘social computing’ Category

On Immoderate Speech

May 1, 2016

In my last post, I mentioned GamerGate, and tried to say some balanced things. A few people complained that I needed more evidence for one of my statements (and they’re right—I need to do more research), but most people were incredibly polite in their responses. I really appreciate that.

In the blog comments, a friend from grad school decided I had lost my mind, and let me know. That’s OK—we’ve been friends for over 25 years, and he’s a good guy to argue with over interesting things. I politely told him that I disagree, and that I have data to prove it. He is sticking to his view. I’m fine with that—we’ll agree to disagree.

After that, some folks who care about GamerGate attacked my friend in the blog comments. My friend was immoderate in his tone. Some of the replies were polite requests for facts. Others were insults with less substance behind them, and the intensity of the comments escalated. It was, uh, interesting to watch….

One of the fundamental disagreements on the Internet today is about the role of immoderate speech. Is it OK to call someone a rude name or use obscene language? Are the rules different if the person is a public figure?

There’s actually, believe it or not, a correct answer to this question: It depends on where you are on the internet. The internet is not one place. Social norms are local. What it’s OK to say on 4chan or 8chan is not OK to say on your work mailing list or on comments on a mainstream news site.

Social norms differ even on different parts of the same site. One team of students in my Design of Online Communities class this term studied Shirley Curry’s YouTube channel. Shirley is a 79-year-old grandmother who plays Skyrim, and posts her unedited gaming streams. My students found that everyone is extremely polite on Shirley’s channel. The social norms are different on her channel than on the channels of anyone else streaming the same game.

None of this is new. I wrote in the 1990s about how social norms differ from site to site. But one new challenge for social norms of online interaction is Twitter. What neighborhood is Twitter in? It’s in all of them and none of them. What social norms apply? No one knows. And sometimes people who think they are interacting in a Shirley-like world end up in a conversation with people who think they are on 4chan. Oh dear. Neither side leaves that encounter happy. And that’s why a lot of online conflict starts on Twitter, and on other sites that don’t have clear social norms.

Regarding what sort of neighborhood this blog represents: I’ll post (almost) any comment, but I’d appreciate it if folks would keep things more Shirley-like. I don’t mind a bit of immoderate speech now and then. But the problem is that when you crank up the intensity, a significant group of people stop listening. Calm, polite discourse might actually influence people—we all might learn something.

The Rheingold Test

April 29, 2016

In 1993, Howard Rheingold explained the new phenomenon of online communities to a skeptical public. To convince people that online communities are really communities, he told powerful stories of members of the California BBS The WELL supporting one another not just with words, but with their time and money. For example, WELL members sent books to a bibliophile who lost his library in a fire, and helped with medical evacuation for a member who became seriously ill while traveling.

I offer this definition:
An online group where members offer one another material support passes “the Rheingold Test.”

I’ve written before that it’s silly to argue about what “is a community.” We have different mental models for “community,” and online interaction can be like and unlike face-to-face groups in nuanced ways. But I will argue that when a group passes The Rheingold Test, something special is happening.

Each spring when I teach CS 6470 Design of Online Communities, I’m surprised by the groups my students discover that pass the Rheingold Test. Years ago, master’s student Vanessa Rood Weatherly observed members of the Mini Cooper brand community sending flowers to a member whose daughter had a miscarriage. It’s not what you’d immediately expect from a group of people brought together by a car brand. In our increasingly secular society, people are looking for a sense of belonging—and finding it in affinity groups.

This term, my students’ research projects found two more sites that pass the test when I wouldn’t have expected it. The first is Vampire Freaks, a site for Goth enthusiasts. In the press, Vampire Freaks is notorious for a few incidents where members have posted about violent acts and then gone ahead and committed them. But those incidents don’t characterize daily activity on the large and active site. Just like the Goth kids at your high school stuck together and would do anything for one another, the members of Vampire Freaks support one another in times of trouble. One member comments:

“I’ve helped quite a few of my friends [on Vampire Freaks] through a lot of hard times… family issues, losing parents, losing children, drug problems even. And just being there as someone that’s supportive, instead of putting them down. Even offering a place for people to come stay if they needed somewhere… I’ve had friends off this website that have actually stayed at my house… because they were traveling and didn’t have money for a hotel. So I’d known them for a few years and figured, it’s a weekend, I’ll be up anyways. Let them stay there and hang out.”

Grad students Drew Carrington, John Dugan, and Lauren Winston were so moved by the support they saw on the site that they called their paper “VampireFreaks: Prepare to be Assimilated into a Loving and Supportive Community.”

The second surprising example from this term is the subreddit Kotaku in Action (KIA), a gathering place for supporters of GamerGate. Although the popular press portrays GamerGate as a movement of misogynist internet trolls, the truth is that the group is a complex mix of members. KIA includes many sincere (and polite) civil libertarians, people tired of the excesses of political correctness, and people tired of the deteriorating quality of journalism and angry about the real-world impact of biased reporting. But people who identify as GamerGaters also include those who dox people they disagree with (posting their personal information online), send anonymous death and rape threats, and worse. (Those things are not allowed on the KIA subreddit, and its moderation rules prohibit them.) It’s a complicated new kind of social movement with its own internal dynamics. I’ll be writing a lot about them, but for now I just want to note that they have a strong sense of group identity and help one another when in need. Posts on KIA show members donating money to a member in financial crisis and to another who needed unexpected major dental work. They also banded together to raise money for a charity that helps male abuse survivors. They are not a viper’s nest (though there are some vipers in the nest). And they care about one another in the classic way.

When a site passes The Rheingold Test, it means there is something interesting happening there—that the whole is more than the sum of its parts. Do you know a site that passes the test? Leave me a comment.

 

Notes/Clarifications:

  • “GamerGate” is a social movement centered on, among other things, a Twitter hashtag. GamerGate and the KIA subreddit are not the same thing.
  • Doxxing and threats have definitely occurred, but they were sent by anonymous people. Whether those threats were sent “by people who affiliate with GamerGate” is disputable.
Categories: social computing

More on TOS: Maybe Documenting Intent Is Not So Smart

February 29, 2016

In my last post, I wrote that “Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law.” My friend Mako Hill (University of Washington) pointed out to me (quite sensibly) that this would get people into more trouble: it asks researchers to document their intent to break the TOS. He’s right. If we believe that under some circumstances breaking TOS is ethical, then requiring researchers to document their intent is not strategic.

This leaves us in an untenable situation. We can’t engage in formal review of whether a particular TOS violation is justified, because documenting that intent would make potential legal problems worse. Of course we can encourage individuals to be mindful and not break TOS without good reason. But is that good enough?

Sometimes TOS prohibit research for good reason. For example, YikYak is trying to abide by users’ expectations of ephemerality and privacy. People participate in online activity with a reasonable expectation that the TOS are rules that will be followed, and they rely on that in deciding what they choose to share. Is it fair to me if my content suddenly shows up in your research study when the TOS clearly prohibit it? Do we really trust individual researchers to decide, with no outside review, when breaking TOS is justified? When I have a tricky research protocol, I want review. Just letting each researcher decide for themselves makes no sense. The situation is a mess.

Legal change is the real remedy here: passing Aaron’s Law, and possibly also an exemption from TOS for researchers (in cases where user rights are scrupulously protected).

Do you have a better solution?  I hope so. Leave me a comment!

Should We Pay Less Social Media Attention to Violence? Lessons from WWII and Fu-Go

September 10, 2015

On September 11th, 2001, I turned the television off. I knew what was happening was historic. But I knew my family in New York City were fine. Initial details were sketchy, and lots of misinformation was being reported. I figured, why listen to the blow-by-blow—I’ll get the real story later, right? Plus I didn’t see any point in wallowing in tragedy. So I did the only sensible thing I could think of—I got back to work. I had a paper deadline for the CHI conference coming up.

If part of the point of terrorism is to draw the public’s attention, what if we all simply refused to listen? Or if the media refused to publish the story? Of course that’s a silly suggestion. People want to know. In fact, some evolutionary biologists argue that we are hard-wired to be fascinated by danger—our fascination keeps us safe. Is turning news about violent attacks off either possible or desirable?

I was struck again by this idea when I listened to the Radiolab podcast on the Fu-Go project. During World War II, Japan sent thousands of balloons carrying bombs to the US. The intent was to terrorize the American public. To prevent that outcome, the US government suppressed stories about the bombs. No one was told about them, and the public wasn’t terrorized at all. And that mostly worked out perfectly—with one exception. A church picnic group (five children and a young woman) found one of the bombs and gathered around to see what it was—with disastrous consequences. None of the other bombs killed anyone. (Though there’s a chance that some are still out there, in remote places in the Pacific Northwest. One was found in 2014.)

The tradeoffs here are headache inducing. By suppressing freedom of speech and freedom of the press, the US government prevented national panic. And caused the deaths of six people.

I would never condone government suppression of free speech. And if bombs were floating around my area, I’d definitely want to know. But I do wonder if sharing stories about such acts is causing part of the problem. What if we all stopped forwarding the link about that crazy shooter? Would that make the next person less likely to shoot people for attention? What if we all just turned this kind of news off?

I don’t think it’s either possible or desirable to do that on a large scale. But maybe, just as an experiment, we could all try not posting/forwarding/retweeting stories that draw attention to someone who did something heinous in order to get attention.

Kudos to Radiolab for a thought-provoking (though depressing) podcast.

Categories: social computing

Hulk Hogan and Social Media as the New Big Brother

August 4, 2015

Have you ever said anything you regret? Anything that is a little bit offensive? Anything that might get you fired? Ever get angry and vent inappropriately? Have you done that any time in the last eight years?

Last week, wrestler Hulk Hogan was released from his contract by World Wrestling Entertainment (WWE) because he had been recorded in a racist rant eight years earlier. Am I the only person disturbed by this? I don’t condone racism. And I’m not ready to invite Hogan over for tea. But was it a pattern of behavior over time, or just one rant? He apologized for the rant. Are we allowed to be wrong and grow and change anymore? No matter what he said, should a person’s life be changed by one rant?

WWE is a publicly traded company, and they are within their rights to fire anyone for not meeting the terms of their contract. I don’t know what Hogan’s contract says, but I’m sure they have a lot of leeway. Hogan has moved over the years from being a feature attraction, to nice to have around, to now a liability—so they let him go. But this is representative of a broader trend: we are moving dangerously close to a world foretold by old science fiction novels, where one angry moment, captured on video, changes your life. In Orwell’s 1984, it was the government that was responsible for surveillance. In 2015, it’s the public and the media, with recordings spread via social media. Social media is the new Big Brother.

OK, at least Hogan knew he was being recorded. That’s not true for the employees of Planned Parenthood, who were surreptitiously recorded, with the results edited and shown out of context for political purposes. It’s not true of the employees of ACORN, who were similarly targeted in 2009. And it’s not true of LA Clippers owner Donald Sterling, who was recorded making racist remarks by his girlfriend in his own home. As a result of those remarks, Sterling was forced to sell the team. His case is arguably the most disturbing, because he was in his own home when the remarks were made. Lately it seems like we all might be recorded at any time. You have the right to remain silent. Anything you say may be used against you in the court of public opinion.

Hulk Hogan might be a bit racist. A preponderance of reports confirms that Donald Sterling is definitely a racist—that tape was just the tip of the iceberg. But no matter how despicable Sterling is, I believe in his right to privacy in his own home. The source of these privacy violations is not the government, but the easy ability to record video and share it over social media. But the solution to our eroding privacy is not clear.

Categories: privacy, social computing

Foul Yet Profound Social Media: YikYak

October 17, 2014

It’s a good morning if the students are chatting about squirrels. They seem obsessed with campus squirrels. Other mornings, they brag about sexual exploits (real or imagined), about how hard they are studying, and about how much they are not studying. The site is YikYak, a social media app that lets you post anonymously to others in your local geographic area.

Amid the racist, sexist, and homophobic sewage stream of words come oddly profound moments. Students post about depression and stress, and I wonder whether they are just venting or really in need of help. A student posts about needing emergency contraception, and immediately receives replies with both practical information and encouragement. Another student asks, “Is it bad that all I’ve ever wanted to be is a mom & a wife?” Another says he/she was so nervous about a job interview that he/she was up all night.

[Image: Popular Yaks, October 2014]

You can judge a campus’ mood from its Yak. A couple of days ago there was a daytime robbery on campus, and our YikYak stream exploded with anxiety. My colleague Jessica Vitak at the University of Maryland says her campus’ Yak yesterday was filled with panicky messages about Ebola. (An Ebola patient had just been transported to their area.) With the CDC located in Atlanta, GT students were a bit more blasé (one popular post: “GT student cures ebola, receives B.”). YikYak streams at different places feel different. You can tell when you’ve moved closer to a different school. And the feed in rural Georgia, away from any school, was so different (so depressing) it was stunning. GT students are clever and smart and mostly positive and respectful. If I were college shopping now, I’d look carefully at each campus’ YikYak.

I’ve spoken with campus administrators, and they are indeed watching YikYak. You can get real feedback about campus life from it. One student posted with frustration that his/her appointment at the health center was canceled with no explanation, and 29 people upvoted the message. That suggests it’s not an isolated problem. Another posted that a professor made him/her feel stupid during office hours. I cajoled him/her into at least sharing which department the class was in, and sent the department chair a friendly alert.

The administrators I’ve spoken with are worried about the foulness on YikYak creating a bad impression of their campus. I have a more pressing concern: What do you do when you’re worried an anonymous posting might be from someone who really needs help? Yesterday a colleague at Drexel University saw a posting that looked suspiciously like a suicide note. It could have been from anyone at any of three nearby universities. What can anyone do?

For scholars of social media, YikYak is fascinating. Users can upvote or downvote each posting, and just five downvotes make a posting disappear from the feed. As a result, social norms quickly emerge that differ by location. There’s a great research project lurking in here. Maybe several.
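For readers who think in code, here is a minimal sketch of the kind of threshold moderation described above: a feed filter that hides any yak once it accumulates five net downvotes. The class and function names are hypothetical illustrations; YikYak’s actual implementation is not public.

```python
# Minimal sketch of threshold-based feed moderation, as described above.
# All names here are hypothetical; YikYak's real implementation is not public.
from dataclasses import dataclass

REMOVAL_THRESHOLD = -5  # "just five downvotes make a posting disappear"

@dataclass
class Yak:
    text: str
    score: int = 0  # net votes: upvotes minus downvotes

    def upvote(self) -> None:
        self.score += 1

    def downvote(self) -> None:
        self.score -= 1

def visible_feed(yaks: list[Yak]) -> list[Yak]:
    """Return only the yaks that have not reached the removal threshold."""
    return [y for y in yaks if y.score > REMOVAL_THRESHOLD]

# Example: after five downvotes, the second yak vanishes from the feed.
feed = [Yak("Campus squirrel just stole my bagel"), Yak("Something the local norms reject")]
for _ in range(5):
    feed[1].downvote()
print([y.text for y in visible_feed(feed)])  # -> ['Campus squirrel just stole my bagel']
```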

If you’re a social media researcher or campus administrator interested in YikYak, email me if you’d like to be added to a Facebook group for discussing it, Meta-Yak.

Categories: social computing

Anyone feel a chill in here?

June 23, 2013

I was about to follow a socialist acquaintance on Twitter this afternoon, and hesitated for a second. It wasn’t rational–it was just a feeling: Is this smart? Is someone keeping track of who is watching radical people?

My own politics are pretty moderate, bordering on boring–I try to see both sides of issues. But I love far-left folks–they make me think. And it appalls me that I hesitated with the “follow” button. This is the chilling effect of indiscriminate surveillance.

Categories: privacy, social computing