Annoying Internet Users in the Name of Science

July 8, 2014

I deliberately annoyed Internet users without their consent in the name of Science, and I would do it again.

How do people on social media feel about being studied by researchers?  To shed some light on this topic, Jim Hudson and I conducted a study in which we entered chatrooms and told people we were studying them. Actually, what we were really studying was whether they kicked us out of the chatroom.

This is not something we did lightly—we got IRB approval for our study, and it went to full board review. For those of you keeping score, this was a full waiver of consent for a deceptive study (we said we were studying language online).  In the study, if we weren’t kicked out of the chatroom after five minutes, we posted a debriefing message about the real purpose of the study with a URL for more information. If we were already kicked out, we did not disturb participants further. The study was approved because our IRB felt there was minimal risk to participants and the study could not practicably be carried out without the waiver.

We used the user name “Chat_Study.” We eliminated chatrooms with names that suggested sensitive topics (like health support groups), and we chose only groups with a channel operator present who had the power to kick us out. In the remaining groups, we entered, waited one minute, and posted one of four messages:

  1. No message: None
  2. Recording message: We are researchers recording this chatroom for a study on language use in online environments. For questions or comments, email study@mail.chatstudy.cc.gatech.edu. Thank you!
  3. Opt out message: We are researchers recording this chatroom for a study on language use in online environments. If you don’t want to be recorded, please whisper “Chatstudy opt out” to us. For questions or comments, email study@mail.chatstudy.cc.gatech.edu. Thank you!
  4. Opt in message: We are researchers and would like to record this conversation for a study on language use in online environments. If we may record you, please whisper “Chatstudy volunteer” to us. For questions or comments, email study@mail.chatstudy.cc.gatech.edu. Thank you!

So what happened?  In short, almost no one opted in or out, and we were kicked out of the chatroom in less than five minutes 63.3% of the time in the conditions where we posted a message. In the control condition, in which we entered but posted no message, we were kicked out 29% of the time.  The paper includes many of the “boot messages” we received.  In addition to lots of messages like “go away” or “no spamming,” we received some more colorful ones. My favorite is “Yo momma so ugly she turned Medusa to stone!”

Intriguingly, for every additional 13 people in a chatroom, our chance of getting kicked out went down by half.  Our hunch is that the smaller the group, the more intimate it feels, and the more our presence felt like an intrusion. Three friends discussing their favorite sports team are a very different social context from 40 strangers playing a chatroom-based trivia game.
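To make that relationship concrete, here is a minimal Python sketch of the halving pattern. I'm reading "chance" as odds here, the way a logistic model would express it; only the halve-every-13-members slope comes from our data, and the baseline room size and baseline odds below are made up purely for illustration.

```python
# Toy model of the effect reported above: the odds of being kicked out
# halve for every 13 additional chatroom members. Only the
# halving-per-13 slope comes from the study; the baseline size and
# baseline odds are hypothetical, chosen just to make the trend visible.
HALVING_SIZE = 13      # members needed to halve the odds (from the study)
BASELINE_SIZE = 10     # hypothetical reference room size
BASELINE_ODDS = 2.0    # hypothetical odds of ejection at the baseline size

def ejection_probability(group_size: int) -> float:
    """Probability of being kicked out within five minutes, under this model."""
    # Each additional member multiplies the odds by 2 ** (-1/13).
    odds = BASELINE_ODDS * 2 ** (-(group_size - BASELINE_SIZE) / HALVING_SIZE)
    return odds / (1 + odds)

for size in (3, 10, 23, 40):
    print(f"{size:3d} members -> {ejection_probability(size):.0%} chance of ejection")
```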

I believe this study was ethical.  What we learned from it—how people really feel about being studied, in that context—outweighs the annoyance we caused.  This is a judgment call, and something we considered carefully. The irony of annoying people in order to show how annoying it is to be studied without consent is not lost on me.  But would I do similar studies? Yes, if and only if the benefit outweighed the harm.

This data was collected in 2003 and the results were published in 2004. I bring it up now because of the recent uproar over a study published by Facebook researchers in PNAS in 2014. Our chatroom study makes a few points:

  • It is possible to do ethical research on Internet users without their consent.
  • It is possible to do ethical research that annoys Internet users.
  • The potential benefit of the study needs to outweigh the harm.
  • Annoying people is doing some harm, and most users are annoyed by being studied without their consent.

In my opinion it was questionable judgment for Kramer et al. to test whether filtering people’s newsfeeds to contain fewer positive emotional words led them, in turn, to post fewer positive words themselves.  But here’s a harder question: if they had only done the happy case (filtering people’s newsfeeds to have fewer negative words), would you still have concerns about the study?  This separates the issue of potential harm from the issue of lack of control.  I believe we can have a more productive conversation about the issues if we separate these questions. I personally would have no objections to the study if it had been done only in the happy case.

I suspect that in part it’s the control issue that has provoked such strong reactions in people. But it’s not that users lost control in this one instance—it’s that they never had it in the first place. The Facebook newsfeed is curated by an algorithm that is not transparent. One interesting side effect of the controversy is that people are beginning to have serious conversations about the ways that we are giving up control to social media sites.  I hope that conversation continues.

If you’re annoyed by my claiming the right to (sometimes, in a carefully considered fashion) annoy you, leave me a comment!

 

(Edited to more precisely describe the Facebook study–thanks to Mor Naaman for prodding me to fix it.)
