Archive for May, 2014

More on anonymizing tweets and Internet research ethics

Twitter research ethics are complicated, and deserve a more nuanced treatment than my short post from yesterday. I’ll take a stab here at saying a bit more:

Question 1: Is analyzing Twitter “human subjects research”?

I want to start by looking at US law. (Note that this applies only in the US and only to federally funded research, though some companies choose to follow these rules voluntarily, and most universities apply them to all research, federally funded or not.) The policy states that several categories of work are exempt from the rules, including:

(4) Research involving the collection or study of existing data, documents, records, pathological specimens, or diagnostic specimens, if these sources are publicly available or if the information is recorded by the investigator in such a manner that subjects cannot be identified, directly or through identifiers linked to the subjects.

It’s pretty clear that Twitter data (on open accounts) is existing data that is already publicly available. So legally speaking, I believe researchers are well within their rights to simply use it at will. It’s public, so you can use it. But should you?

Ethical is a higher standard than legal. As Jim Hudson and I found in our study of people chatting on Internet Relay Chat (IRC), people often misunderstand the public nature of online communications. This leads to my second question:

Question 2: If people have expectations of privacy that differ from expert opinion on what is “reasonable,” does that need to be taken into account?

I don’t think there’s a simple answer to that question. It probably has to be addressed on a case-by-case basis.  And if people’s expectations are persistent and continue to differ from the written rules, maybe the rules need to evolve.

If you do consider research on Twitter to be human subjects research, then you need to apply for IRB clearance, and you probably have good grounds to request a waiver of consent.  A waiver of consent is possible in these circumstances:

(d) An IRB may approve a consent procedure which does not include, or which alters, some or all of the elements of informed consent set forth in this section, or waive the requirements to obtain informed consent provided the IRB finds and documents that:

(1) The research involves no more than minimal risk to the subjects;

(2) The waiver or alteration will not adversely affect the rights and welfare of the subjects;

(3) The research could not practicably be carried out without the waiver or alteration; and

(4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

In such a case, an IRB might request that the tweets be anonymized, and this would help make the case that the work presents minimal risk. This sounds like a great approach for research on sensitive topics, such as epidemiology.
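To make the anonymization step concrete, here is a minimal Python sketch of what such a pipeline might do, assuming tweets arrive as (screen name, text) pairs. The pseudonym scheme and field handling are my own illustration, not from any particular study or API. Note also that verbatim tweet text can often be re-identified simply by searching for it, so real de-identification may require paraphrasing quoted tweets as well.

```python
import itertools
import re

def make_anonymizer():
    """Return a function that maps screen names to stable pseudonyms
    (P1, P2, ...) and scrubs @-mentions from tweet text."""
    pseudonyms = {}
    counter = itertools.count(1)

    def anonymize(screen_name, text):
        # Assign a stable pseudonym the first time we see an author,
        # so one person's tweets remain linkable to each other.
        if screen_name not in pseudonyms:
            pseudonyms[screen_name] = f"P{next(counter)}"
        # Replace any @mention with a generic token so quoted text
        # cannot be traced back through the mentioned accounts.
        scrubbed = re.sub(r"@\w+", "@user", text)
        return pseudonyms[screen_name], scrubbed

    return anonymize

anon = make_anonymizer()
print(anon("alice", "Hey @bob, check this out"))
# -> ('P1', 'Hey @user, check this out')
```

The stable pseudonyms preserve the analytic value of per-author data while removing the direct identifier, which is the spirit of the exemption language quoted above.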

Because part of my research is about people's creative accomplishments online, I am more likely to encounter situations where anonymizing people is unethical because it denies them credit for their work. We name people in published accounts only at their written request, recorded on a consent form. And our projects generally use mixed methods, combining analysis of people's online postings with interviews. I believe this mixed-methods approach often gives better research results, and it necessarily makes the work human subjects research rather than merely analysis of public information.

I personally prefer to view Twitter research as human subjects research and apply for a waiver of consent. Thinking through a formal IRB application and soliciting help from IRB members can help you work out the details of how to treat your subjects in accordance with the principles of beneficence, justice, and respect for persons. Ethical is, after all, a higher standard than merely legal.

That said, the public nature of Twitter data is hard to deny.  Maybe the rule about pre-existing, public information needs to be rethought. Something more nuanced would serve us better.


Do we need to anonymize tweets in published accounts?

In this article about tweets being made available to researchers, the authors quote two epidemiologists saying ethical use of Twitter should anonymize tweets:

Caitlin Rivers and Bryan Lewis, computational epidemiologists at Virginia Tech, published guidelines for the ethical use of Twitter data in February. Among other things, they suggest that scientists never reveal screen names and make research objectives publicly available. For example, although it is considered ethical to collect information from public spaces—and Twitter is a public space—it would be unethical to share identifying details about a single user without his or her consent. Rivers and Lewis argue that it is crucial for scientists to consider and protect users' privacy.

I disagree. Anonymizing may more often be the right call in epidemiology, but it really depends on what kind of study you're doing. As Kurt Luther, Casey Fiesler, and I have written, sometimes anonymizing users may be morally wrong because you are denying them credit for their work. ("That tweet was really funny–I want my name on it!") Twitter is public, published material. The contents of private Twitter feeds are for followers only, but the contents of public feeds are arguably as public as a newspaper article. If you want to take extra precautions to anonymize people, that's fine. But to say it's always necessary is ridiculous.

Jim Hudson and I empirically studied how people often misunderstand how public their communications are. The complicated question that follows is: if user expectations are out of line with what experts would call “reasonable,” how should the scholarly community proceed? Dealing with things on a case-by-case basis is the best we can do for now.


Why teach about the USA Patriot Act if the government doesn’t even follow it?

Most fall semesters I teach CS 4001, "Computers, Society, and Professionalism." I love the class: we cover ethics, argumentation, professionalism, and the social implications of technology. As part of the class, I always teach a lecture about the USA Patriot Act. It's a labor of love; it takes me three or four times as long to prepare for that class as for any other class in the semester, because the material is so complicated and there's always new news to sort through. Were the "gag order" provisions found unconstitutional or not? What's the difference between the Protect America Act (which expired) and its new incarnation in the FISA Amendments Act? The details go on and on. I teach the class in a studiously neutral way: there are tradeoffs between security and privacy, and where to draw the line is complicated.

PBS Frontline has come out with a new three-hour documentary, "United States of Secrets," which takes on a lot of these issues. I highly recommend watching it. What the US government has actually been recording goes well beyond what is authorized by the Patriot Act. But what I found most depressing about it was not that we are being spied on, but that some government officials apparently have been ignoring the rule of law. For example, the NSA constructed a tenuous theory to give itself permission to record basically everything, and the US Attorney General signed off on it. OK, I don't like the theory, but at least there was an attempt at legality. But later, when the Attorney General changed his mind and decided the program was illegal, the NSA just asked the White House Counsel to sign off on it instead. Really? Mom said no so you ran and asked Dad? (More like Mom said no so you ran and asked your uncle.) And then there are the videos of the President and other officials flat-out lying to the public and to Congress. They didn't say "I can't discuss that"; they lied and said the surveillance wasn't happening.

It was heartening to see the whistleblowers profiled in the film. There are plenty of good people who tried to speak up, going through every possible proper internal channel before finally going to the press. Our class covers ethical procedures for when and how to become a whistleblower, and the whistleblowers profiled followed those procedures impeccably. And these aren't civil libertarian liberals; they are pro-defense conservatives who are appalled by what is going on. But in another depressing turn, the government then went after the whistleblowers, turning their lives upside down.

What’s the point of teaching students about a law if what the law says doesn’t change how the government actually operates?


Bronies: Online Community and Why Some Grown Men Love My Little Pony

May 20, 2014

The really fun part of teaching my graduate class Design of Online Communities is that I learn incredible things from the students' empirical studies of online sites. In Spring 2013, one team of students (Patrick Mize, Michael Roberts, and Aditya Tirodkar) chose to study Equestria Daily, a site for bronies. Bronies are adult, male fans of the children's television show My Little Pony: Friendship is Magic. This raises two questions: First, why would adult men become fans of a television show aimed at young girls? Second, what interesting issues does the design of Equestria Daily raise? I'll tackle the second issue in another post. Here I want to talk about bronies.

When I first heard about bronies, I was fascinated. All the more so because I have a friend (a fellow CS faculty member) who is a brony. The question of why someone would be a brony has two parts: First, why does anyone join any group? Second, why are some people attracted to brony culture in particular?

After reading my students’ paper (I hope they’ll polish and publish it) and all their interview transcripts, I also watched the documentary Bronies, The Extremely Unexpected Adult Fans of My Little Pony (available on Netflix streaming). And the more I learn, the less surprising it all becomes.

Why do people join any group? Sociologist Ray Oldenburg writes about how people need a third place, neither work nor home. The full title of his book is "The Great Good Place: Cafés, Coffee Shops, Community Centers, Beauty Parlors, General Stores, Bars, Hangouts and How They Get You Through the Day." Oldenburg bemoans the fact that the invention of suburbia has made it harder to find places to casually socialize. In a more quantitative vein, Robert Putnam notes that we are increasingly bowling alone—fewer people are joining voluntary associations where they can meet others.

My neighbors recently joined a fancy golf club. At the club, they meet others who share their interests, values, and worldview. Adult club members have an opportunity to talk with friends while playing golf, and their kids meet one another while splashing in the pool. Then they all go to the clubhouse for lunch, and there are more opportunities to build and maintain social ties. Three factors help bring together people with things in common. First, the price of membership means members have a common socio-economic status. Second, a membership application is required, and current members choose to admit those whom they feel will fit in. Finally, self-selection is probably the dominant filter—most people have a sense of whether this is the sort of place for them.

I’m not a golf club sort of person. I wish there were a place like that for me.  The golf club is a classic example of Oldenburg’s third place. Because it’s a physical place, members can drop by on a casual basis and meet others informally.  But what do you do if there is no such place nearby that suits you?

Most people, like me, have few third places readily available. Could something like a third place be mediated electronically? Putnam dismisses that possibility, but he was writing a long time ago and didn't have empirical data on online communities and the value they bring to their members. It's important to note that most online communities are not solely online. As Barry Wellman and Milena Gulia point out, people who initially meet online often go to extraordinary lengths to meet in person, and face-to-face encounters help to strengthen ties.

All of this brings me back to the world of bronies. At brony conventions, pony fans get to meet other like-minded individuals. They meet electronically, enhance their ties in person, and then maintain them electronically until the next opportunity to meet in person arises. It's not as nice as a golf club where you see your friends every weekend, but it serves a similar function in their lives. I'm sure if bronies had the financial means and geographic density to create a pony club, they'd love it. Like all fandoms I've observed, brony culture is creative. Bronies work in every possible creative medium, and especially make their own art and music inspired by the show. A pony club would be a sort of maker space with a sound studio, 3D printers, digital and analog art tools, and a space for parties and dancing. It's a shame it's not a more practical idea.

I hope I have given you some insights into why people might want to join a community of like-minded individuals. One mystery remains: Why My Little Pony?  In my students’ interviews and in the documentary, bronies talk about embracing values of kindness and friendship.  It is an explicit rejection of the cultural norm of competitive and aggressive masculinity. If NASCAR and American football repel you, what are you to embrace?  Ponies are the opposite.

There are indeed female bronies (often called “pegasisters”), but they’re less common because the values of the show are more in line with traditional femininity.  If joining a subculture is an act of identity construction that says “I am different,” being a fan of My Little Pony is a more defining statement for men.

Now that brony culture has emerged, it's easy to see why it would appeal to a certain class of incredibly nice guys. An open question is: why this particular show? I would love to see a cultural history of the origins of bronydom, and how the subculture initially took off.

My next post will be about what my students learned from studying the design of Equestria Daily.


Small Printer Speaks to Large Issues: Online Reviews and Research Epistemology

May 14, 2014

Are online reviews fair? Consider these reviews of a small printer, the Canon Pixma MG6320, on the Consumer Reports website. At the time I am writing, there are three reviews, and all three writers gave it one star out of a possible five—the worst possible rating. The review titles are:

  • “Piece of junk”
  • “Unreliable and unbelievably expensive”
  • “The worst printer ever.”

On the other hand, on Amazon the same printer currently has 464 reviews, and it gets an average of four out of five stars. Sample review titles include:

  • “Amazing printer”
  • “Made a great gift”
  • “A very good buy”

There are also negative reviews of course (“I wish I could give it minus stars”), but the consensus is four-star positive.

What is going on here? You could speculate that it's just a matter of randomness and small numbers: three reviews are too small a sample to matter, and maybe the printer's rating would drift toward something more consistent if more CR readers reviewed it. Sinan Aral has also shown that the initial review of an item biases subsequent reviews in ways that affect the final outcome. But I will argue that there's more than small sample size involved. It's quite often the case that CR reviews are dramatically more negative than those on Amazon. I selected this particular item randomly; this printer was the first item I checked. You can find other items that don't fit this pattern, and it would be worthwhile to do a systematic comparison to see to what extent it holds. But I believe the printer is not an outlier.

My suspicion is that it has to do with who goes to each site and why. Perhaps people log on to CR to review a product mainly when they are annoyed. Gilbert and Karahalios studied why people write reviews on Amazon—particularly when an object has already been heavily reviewed and their review says the same thing as previous reviews. They found that some reviewers ("pros") review for Amazon as a hobby, and take pride in the quality of their reviews. Others ("amateurs") describe their reviews as "spontaneous" and "heartfelt"—they want to express how they feel about the product. Gilbert writes that Amazon reviews by amateur-style writers have a bimodal distribution—people write because they love a product or hate it. CR gets only one peak—the folks who hate it. The interesting question then becomes, why does CR get only one side of the story? What is it about the site design and its positioning that creates this effect? Further, is there any systematic way we can understand the bias valence of different review sites? Which sites tend to be biased in what ways, and why? Can this help site designers to create review infrastructures that are more useful to their customers? Can we help customers to be better readers, knowing which reviews to believe?
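To illustrate the unimodal-vs-bimodal contrast described above, here is a small Python sketch with invented star counts (not real data for this printer or either site):

```python
from collections import Counter

def histogram(ratings):
    """Count how many reviews gave each star value, 1 through 5."""
    counts = Counter(ratings)
    return [counts.get(star, 0) for star in range(1, 6)]

def is_bimodal(hist):
    """Crude check: both extremes (1 and 5 stars) are local peaks."""
    return hist[0] > hist[1] and hist[4] > hist[3]

# Invented numbers for illustration only: a love-it-or-hate-it
# Amazon-style distribution vs. a complaints-only CR-style one.
amazon_like = [1]*80 + [2]*20 + [3]*30 + [4]*60 + [5]*270
cr_like = [1]*3

print(histogram(amazon_like))              # [80, 20, 30, 60, 270]
print(is_bimodal(histogram(amazon_like)))  # True
print(is_bimodal(histogram(cr_like)))      # False
```

A real analysis would of course use a principled statistical test for bimodality rather than this crude peak check, but the shape of the two distributions is the point: the same underlying product can produce very different aggregate stars depending on who shows up to review it.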

My initial question was, “Are online reviews fair?” I want to argue that that’s not a well-formed question. Better versions might be, “Under what conditions are online reviews of products and services more or less useful to consumers?” and “In what ways do design features of online sites affect reviews that users write?”

In his essay "Thick Description: Toward an Interpretive Theory of Culture," Clifford Geertz wrote that "The locus of study is not the object of study. Anthropologists don't study villages (tribes, towns, neighborhoods …); they study in villages." Some of our loci of study are interesting in themselves—Amazon is Amazonian in size, and worth understanding. But I wonder how much we can develop more broadly relevant insights without comparing villages. It may be easier to understand Amazon when you have Consumer Reports for contrast, though the two sites differ in so many ways that systematic comparison is a daunting task.

There is a need for good research at all different levels of specificity, from the absurdly general ("Are online reviews fair?") to the absurdly specific ("In what ways do user reviews of inexpensive consumer printers help people to make good purchasing decisions?"). Researchers trying to build personal reputations tend to err toward making overly general claims—there's more glory/credit in answering the big questions. But there's more substance in making an appropriate level of claim for the significance of your findings.

In the same essay, Geertz writes, “Small facts speak to large issues, winks to epistemology, or sheep raids to revolution, because they are made to.” Geertz is a poet, and that line resonates in my mind with my stores of T.S. Eliot and Billy Collins. But I still wonder what it actually means.

I started writing this post for a reason that will seem unrelated. A friend asked how I reconcile the fact that Sherry Turkle and danah boyd are studying similar phenomena—changes in teenage life and family relationships in the presence of mobile and social computing—and coming to quite different conclusions. To unfairly paraphrase, Sherry believes that we are "alone together," and the technology is changing human relationships for the worse. Danah believes that "the kids are alright"—that the kinds of things teens use these technologies for are quite similar to those same age-appropriate behaviors enacted with previous technologies, and teens are negotiating their stage of life just fine. My answer is that they are both right, and claims are being made for their work (by others more than by them) that over-generalize their results. Metaphorically, one is studying Consumer Reports and the other is studying Amazon.com, and people are taking their results as being about online reviews in general. (See my post on smart phones and parenting for some examples of both good and bad changes catalyzed by this technology.)

The hard work still to be done is to integrate these two perspectives, and understand their relationship. The important work is to identify what key questions we haven’t yet asked—questions whose answers have actionable consequences. Whether we’re talking about kids and parents on cellphones at dinner or online reviews of which cellphones they should buy, researchers need to ask useful questions and draw conclusions at the right level of specificity.
