
Archive for the ‘social computing’ Category

Is there any point in talking to “them”?

August 5, 2017

In response to my last post, Mike Travers wrote:

I wish I retained more of the liberal faith in the power of conversation, but after many years of trying to engage with a variety of right-wing types on the net, I really don’t. Face to face conversation sometimes has the power to change minds, but it’s a decreasing proportion of human interaction, which may be one of the roots of our current troubles.

I believe Mike has zeroed in on the most important issue in this conversation: is talking to “them” even worth it?  If you believe it is not, then I can see why you might sink to calling the other side names or punching them. If you believe that conversation might help, then of course you wouldn’t.

It’s fascinating to me how many people on both sides say they have no interest in talking to the other side. I had a conversation a week ago with a team of brilliant people who told me that there was absolutely no point in having any conversation with people who are unsure about LGBT rights, vaccination, or climate change. I admit that I have strong views on all those issues and have trouble imagining a sincere conversation with someone who disagrees with me. But I’m willing to try.

The other side feels the same way. The term “social justice warrior” (“SJW”) has emerged to describe the folks they hate. Urban Dictionary defines SJW as “a pejorative term for an individual who repeatedly and vehemently engages in arguments on social justice on the Internet, often in a shallow or not well-thought-out way.” Members of the alt-right and others who use that term believe there’s no point in even trying to talk with an SJW: SJWs have made up their minds and are not listening to others. It’s shocking to me how many people on both sides are not willing to consider the idea that the other side might have something worthwhile to say.

I still have, as Mike says, liberal faith in the power of conversation. I agree with Mike that conversation works better in person. But could we have conversations online that bridge the political divide? What if more people said, “I don’t think we agree on much, but let’s talk, and I’ll try to keep an open mind”? I believe that even people with the most diametrically opposed views can find some common values.

Could we create an internet site to facilitate those conversations, across the political divide? My students and I talk about this all the time. If we can come up with a good idea, we are going to try it. Would it be a structured discussion forum with rules of engagement and scaffolding for finding common values and agreed-upon facts? Could it be a kind of ‘game with a purpose,’ where finding common values scores points? Would anyone bother to try such a system? What would make it worth their while?
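To make the “game with a purpose” idea a little more concrete, here is a minimal sketch of one possible scoring rule: two participants who self-identify with different sides each endorse value statements, and the pair earns points only for values they both endorse. Everything here (the class names, the “side” labels, the one-point-per-shared-value rule) is invented purely for illustration; it’s a thought experiment, not a design we’ve settled on.

# A hypothetical sketch of a "common ground" scoring rule for a game with a purpose.
# All names and rules below are invented for illustration only.

from dataclasses import dataclass, field


@dataclass
class Participant:
    name: str
    side: str                                   # self-declared political leaning
    endorsed: set[str] = field(default_factory=set)

    def endorse(self, value: str) -> None:
        """Record that this participant agrees with a value statement."""
        self.endorsed.add(value)


def common_ground_score(a: Participant, b: Participant) -> tuple[int, set[str]]:
    """One point per value statement both participants endorsed, awarded only
    when the two participants come from different sides, since the goal is to
    reward agreement across the divide."""
    if a.side == b.side:
        return 0, set()
    shared = a.endorsed & b.endorsed
    return len(shared), shared


if __name__ == "__main__":
    alice = Participant("Alice", side="left")
    bob = Participant("Bob", side="right")

    for v in ("honest journalism", "safe schools", "universal health care"):
        alice.endorse(v)
    for v in ("honest journalism", "safe schools", "lower taxes"):
        bob.endorse(v)

    score, shared = common_ground_score(alice, bob)
    print("Common-ground score:", score)        # -> 2
    print("Shared values:", sorted(shared))     # -> ['honest journalism', 'safe schools']

The scoring is trivial on purpose; the real design questions are the ones above: who would show up, and what would keep them endorsing values in good faith?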

If you’re interested in these ideas, I recommend the US & Them Podcast. If you have ideas about software design for understanding across the political divide, leave me a comment–maybe we’ll really try it!

More on Common-Sense Symmetry: Please don’t punch Nazis!

August 4, 2017

Thank you for all the great comments on my last blog post. My favorite comment so far said (paraphrasing): “That was a pretty wordy way to say ‘double standards.’” (Wow–yes, thank you!)

Another way to say the same thing: Please don’t punch Nazis, or exclude them from health clubs when all they are doing is lifting weights. Yes, I think the alt-right’s Richard Spencer is a sad excuse for a human being. If I am ever unlucky enough to meet him in person, I will tell him so, in detail. But would I punch him? Of course not. Punching the Richard Spencers of the world means we sink to their level. It means Spencer and his followers can describe their opposition as violent and irrational–and they’ll be telling the truth.

What I find incomprehensible is that nice people whom I respect have told me that, in their view, the person who punched Spencer did the right thing. How is it even possible to think that? How is it possible not to see the downside? By sinking to their level, we fuel their anger, relinquish our claim to the high ground, and lessen the (already slim) chances of achieving greater mutual understanding.

The more complicated question of course is whether striving for mutual understanding is always a desirable goal. In most cases, I believe it is. But are there groups so heinous that they don’t deserve an attempt at conversation? I personally don’t think so, but I understand that it’s complicated. I will say, regardless, please don’t punch them.

Common Sense Symmetry: Language and Political Correctness

July 31, 2017

I’m part of a Facebook group for women, and one of the members recently posted a “news story” about a man who went sunbathing nude. According to the story, a bird of prey mistook his private parts for turtle eggs, and the man ended up in the hospital. There are so many problems with this posting that I hardly know where to start. First, it’s fake news. Second, it’s not funny. Third, if someone posted a similar story with the genders reversed, wouldn’t there be an outcry that it was offensive and sexist? Can you imagine the reaction to a story about a naked sunbathing woman being attacked in delicate places by a bird of prey?

Fortunately, few members of the group “liked” the posting, and several responded positively to my comment that perhaps this wasn’t appropriate. But I was still left shaking my head: why don’t people use common sense to see the symmetry? If you can’t say that about women, why can you say it about men?

Similarly, why is it OK for some people to complain about “white people” on Twitter? I was astonished recently to see a favorite fiction author going on tirades against white people. In my view it wouldn’t be acceptable to go on a tirade against “black people,” so why is it OK to complain about “white people”? Common sense symmetry: if you can’t reverse it, don’t say it. How about instead going on a tirade against “racists”? Or even about “white people who don’t recognize their implicit privilege,” or “white people who <any adjective you like can go here>”? Yes, we can and should talk about race. Yes, racism is a pervasive problem, and addressing it is critical for the future of our society. But aren’t you reinforcing racism by complaining about “white people,” “black people,” “Asians,” or any group as a whole? Even just adding a qualifier like “some” (or even “most”) helps.

Some people argue that it is more acceptable for members of a comparatively disempowered group to be critical of other groups—i.e., it’s more acceptable for women to be critical of men, and for African Americans to be critical of Caucasian Americans, than the other way around. I don’t really understand that argument—rudeness is just rudeness. It’s particularly problematic because it adds fuel to the fire of the culture wars.

Over the last two years, I have spent time online with groups of people (particularly members of the GamerGate movement) who, among other things, advocate for men’s rights. Their online discussion sites are filled with outrage at cases where common sense symmetry is not applied. They are justifiably outraged at tasteless cases like the tale of the “turtle eggs.” Some go further and argue that men are just as oppressed as women. We could have a long and complicated discussion about how to measure the relative oppression of various groups within society, but I’ll go on the record: the claim that men are as disempowered as women is not supported by the facts, no matter how you measure. But every time we tell turtle-egg jokes or vent about “white people,” we give energy to groups that are not working towards embracing diversity and empowering everyone.

On Immoderate Speech

May 1, 2016

In my last post, I mentioned GamerGate, and tried to say some balanced things. A few people complained that I needed more evidence for one of my statements (and they’re right—I need to do more research), but most people were incredibly polite in their responses. I really appreciate that.

In the blog comments, a friend from grad school decided I had lost my mind, and let me know. That’s OK—we’ve been friends for over 25 years, and he’s a good guy to argue with over interesting things. I politely told him that I disagree, and that I have data to prove it. He is sticking to his view. I’m fine with that—we’ll agree to disagree.

My friend was immoderate in his tone, and after that, some folks who care about GamerGate went after him in the blog comments. Some of the replies were polite requests for facts. Others were insults with less substance behind them, and the intensity of the comments escalated. It was, uh, interesting to watch….

One of the fundamental disagreements on the Internet today is about the role of immoderate speech. Is it OK to call someone a rude name or use obscene language? Are the rules different if the person is a public figure?

There’s actually, believe it or not, a correct answer to this question: It depends on where you are on the internet. The internet is not one place. Social norms are local. What it’s OK to say on 4chan or 8chan is not OK to say on your work mailing list or on comments on a mainstream news site.

Social norms differ even on different parts of the same site. One team of students in my Design of Online Communities class this term studied Shirley Curry’s YouTube Channel. Shirley is a 79-year-old grandmother who plays Skyrim, and posts her unedited gaming streams. My students found that everyone is extremely polite on Shirley’s channel. The social norms are different on her channel than on the channels of anyone else streaming the same game.

None of this is new. I wrote about how social norms differ by site in the 1990s. But one new challenge for social norms of online interaction is Twitter. What neighborhood is Twitter in? It’s in all of them and none of them. What social norms apply? No one knows. And sometimes people who think they are interacting in a Shirley-like world end up in a conversation with people who think they are on 4chan. Oh dear. Neither side leaves that encounter happy. And that’s why a lot of online conflict starts on Twitter, and on other sites that don’t have clear social norms.

Regarding what sort of neighborhood this blog represents: I’ll post (almost) any comment, but I’d appreciate it if folks would keep things more Shirley-like. I don’t mind a bit of immoderate speech now and then. But the problem is that when you crank up the intensity, a significant group of people stop listening. Calm, polite discourse might actually influence people—we all might learn something.

The Rheingold Test

April 29, 2016

In 1993, Howard Rheingold explained the new phenomenon of online communities to a skeptical public. To convince people that online communities are really communities, he told powerful stories of members of the California BBS The WELL supporting one another not just with words, but with their time and money. For example, WELL members sent books to a bibliophile who lost his library in a fire, and helped with medical evacuation for a member who became seriously ill while traveling.

I offer this definition:
An online group where members offer one another material support passes “the Rheingold Test.”

I’ve written before that it’s silly to argue about what “is a community.” We have different mental models for “community,” and online interaction can be like and unlike face-to-face groups in nuanced ways. But I will argue that when a group passes The Rheingold Test, something special is happening.

Each spring when I teach CS 6470 Design of Online Communities, I’m surprised by the groups my students discover that pass the Rheingold Test. Years ago, master’s student Vanessa Rood Weatherly observed members of the Mini Cooper brand community sending flowers to a member whose daughter had a miscarriage. It’s not what you’d immediately expect from a group of people brought together by a car brand. In our increasingly secular society, people are looking for a sense of belonging—and finding it in affinity groups.

This term, my students’ research projects found two more sites that pass the test when I wouldn’t have expected it. The first is Vampire Freaks, a site for Goth enthusiasts. In the press, Vampire Freaks is notorious for a few incidents where members have posted about violent acts and then gone ahead and committed them. But those incidents don’t characterize daily activity on the large and active site. Just like the Goth kids at your high school stuck together and would do anything for one another, the members of Vampire Freaks support one another in times of trouble. One member comments:

“I’ve helped quite a few of my friends [on Vampire Freaks] through a lot of hard times… family issues, losing parents, losing children, drug problems even. And just being there as someone that’s supportive, instead of putting them down. Even offering a place for people to come stay if they needed somewhere… I’ve had friends off this website that have actually stayed at my house… because they were traveling and didn’t have money for a hotel. So I’d known them for a few years and figured, it’s a weekend, I’ll be up anyways. Let them stay there and hang out.”

Grad students Drew Carrington, John Dugan, and Lauren Winston were so moved by the support they saw on the site that they called their paper “VampireFreaks: Prepare to be Assimilated into a Loving and Supportive Community.”

The second surprising example from this term is the subreddit Kotaku in Action (KIA), a place for supporters of GamerGate. Although the popular press portrays GamerGate as a movement of misogynist internet trolls, the truth is that the group is made up of a complex combination of members. KIA includes many sincere (and polite) civil libertarians, people tired of the excesses of political correctness, and people tired of the deteriorating quality of journalism and angry about the real-world impact of biased reporting. Those who identify as GamerGaters also include people who dox those they disagree with (post their personal information online), send anonymous death and rape threats, and worse. (Those things are not allowed on the KIA subreddit, and moderation rules prevent them.)

It’s a complicated new kind of social movement with its own internal dynamics. I’ll be writing a lot about them, but for now I just want to note that they have a strong sense of group identity and help one another when in need. Posts on KIA show members donating money to a member in financial crisis, and to another who needed unexpected major dental work. They also banded together to raise money for a charity that helps male abuse survivors. They are not a viper’s nest (though there are some vipers in the nest). And they care about one another in the classic way.

When a site passes The Rheingold Test, it means there is something interesting happening there—that the whole is more than the sum of its parts. Do you know a site that passes the test? Leave me a comment.


Notes/Clarifications:

  • “GamerGate” is a social movement centered around a Twitter hashtag, among other things. GamerGate and the KIA subreddit are not the same thing.
  • Doxxing and threats have definitely occurred, but were sent by anonymous people. Whether or not those were “by people who affiliate with GamerGate” is disputable.

More on TOS: Maybe Documenting Intent Is Not So Smart

February 29, 2016

In my last post, I wrote that “Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law.” My friend Mako Hill (University of Washington) pointed out to me (quite sensibly) that this would get people in more trouble: it asks people to document their intent to break the TOS. He’s right. If we believe that under some circumstances breaking TOS is ethical, then requiring researchers to document their intent is not strategic.

This leaves us in an untenable situation.  We can’t engage in a formal review of whether something breaking TOS is justified, because that would make potential legal problems worse. Of course we can encourage individuals to be mindful and not break TOS without good reason. But is that good enough?

Sometimes TOS prohibit research for good reason. For example, YikYak is trying to abide by users’ expectations of ephemerality and privacy. People participate in online activity with a reasonable expectation that the TOS are rules that will be followed, and they rely on that in deciding what to share. Is it fair to me if my content suddenly shows up in your research study, when that’s clearly prohibited by the TOS? Do we really trust individual researchers to decide when breaking TOS is justified, with no outside review? When I have a tricky research protocol, I want review. Just letting each researcher decide for themselves makes no sense. The situation is a mess.

Legal change is the real remedy here–such as passing Aaron’s Law, and also possibly an exemption from TOS for researchers (in cases where user rights are scrupulously protected).

Do you have a better solution?  I hope so. Leave me a comment!

Should We Pay Less Social Media Attention to Violence? Lessons from WWII and Fu-Go

September 10, 2015

On September 11th, 2001, I turned the television off. I knew what was happening was historic. But I knew my family in New York City were fine. Initial details were sketchy, and lots of misinformation was being reported. I figured, why listen to the blow-by-blow—I’ll get the real story later, right? Plus I didn’t see any point in wallowing in tragedy. So I did the only sensible thing I could think of—I got back to work. I had a paper deadline for the CHI conference coming up.

If part of the point of terrorism is to draw the public’s attention, what if we all simply refused to listen? Or if the media refused to publish the story? Of course that’s a silly suggestion. People want to know. In fact, some evolutionary biologists argue that we are hard-wired to be fascinated by danger—our fascination keeps us safe. Is turning news about violent attacks off either possible or desirable?

I was struck again by this idea when I listened to the Radio Lab podcast on the Fu-Go project. During World War II, Japan sent thousands of balloons carrying bombs to the US. The intent was to terrorize the American public. To prevent that outcome, the US government suppressed stories about the bombs. No one was told about them, and the public wasn’t terrorized at all. And that worked out almost perfectly—with one exception. A group of children out on a church picnic found one, and all gathered around to see what it was—with disastrous consequences. None of the other bombs caused any casualties. (Though there’s a chance that some are still out there, in remote places in the Pacific Northwest. One was found in 2014.)

The tradeoffs here are headache inducing. By suppressing freedom of speech and freedom of the press, the US government prevented national panic. And caused the deaths of six people.

I would never condone government suppression of free speech. And if bombs were floating around my area, I’d definitely want to know. But I do wonder if sharing stories about such acts is causing part of the problem. What if we all stopped forwarding the link about that crazy shooter? Would that make the next person less likely to shoot people for attention? What if we all just turned this kind of news off?

I don’t think it’s either possible or desirable to do that on a large scale. But maybe, just as an experiment, we could all try not posting/forwarding/retweeting stories that draw attention to someone who did something heinous in order to get attention.

Kudos to Radio Lab for a thought provoking (though depressing) podcast.
