
Annoying Internet Users in the Name of Science

I deliberately annoyed Internet users without their consent in the name of Science, and I would do it again.

How do people on social media feel about being studied by researchers?  To shed some light on this topic, Jim Hudson and I conducted a study in which we entered chatrooms and told people we were studying them. Actually, what we were really studying was whether they kicked us out of the chatroom.

This is not something we did lightly—we got IRB approval for our study, and it went to full board review. For those of you keeping score, this was a full waiver of consent for a deceptive study (we said we were studying language online).  In the study, if we weren’t kicked out of the chatroom after five minutes, we posted a debriefing message about the real purpose of the study with a URL for more information. If we were already kicked out, we did not disturb participants further. The study was approved because our IRB felt there was minimal risk to participants and the study could not practicably be carried out without the waiver.

We used the user name “Chat_Study.” We eliminated chatrooms with names that suggested sensitive topics (like health support groups), and we chose only groups with a channel operator present who had the power to kick us out. In the remaining groups, we entered, waited one minute, and posted one of four messages:

  1. No message: None
  2. Recording message: We are researchers recording this chatroom for a study on language use in online environments. For questions or comments, email study@mail.chatstudy.cc.gatech.edu. Thank you!
  3. Opt out message: We are researchers recording this chatroom for a study on language use in online environments. If you don’t want to be recorded, please whisper “Chatstudy opt out” to us. For questions or comments, email study@mail.chatstudy.cc.gatech.edu. Thank you!
  4. Opt in message: We are researchers and would like to record this conversation for a study on language use in online environments. If we may record you, please whisper “Chatstudy volunteer” to us. For questions or comments, email study@mail.chatstudy.cc.gatech.edu. Thank you!

So what happened?  In short, almost no one opted in or out, and in the conditions in which we posted a message, we were kicked out of the chatroom in less than five minutes 63.3% of the time. In the control condition, in which we entered but posted no message, we were kicked out 29% of the time.  The paper includes many of the “boot messages” we received.  In addition to lots of messages like “go away” or “no spamming,” we received some more colorful ones. My favorite is “Yo momma so ugly she turned Medusa to stone!”

Intriguingly, for every additional 13 people in a chatroom, our chance of getting kicked out went down by half.  Our hunch is that the smaller the group, the more intimate it feels, and the more our presence felt like an intrusion. For example, three friends discussing their favorite sports team is a quite different social context than 40 strangers playing a chatroom-based trivia game.
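To make that halving relationship concrete, here is a tiny sketch of it. This is purely illustrative arithmetic on the reported finding—it is not the fitted statistical model from our paper:

```python
def relative_kick_chance(extra_members, half_every=13):
    """Relative chance of being kicked out of the room, under the
    reported finding that the chance halves for every additional
    13 people present. Illustrative only; not the paper's fitted model."""
    return 0.5 ** (extra_members / half_every)

# A room with 26 more members than another has a quarter the chance:
relative_kick_chance(26)  # 0.25
```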

I believe this study was ethical.  What we learned from it—how people really feel about being studied, in that context—outweighs the annoyance we caused.  This is a judgment call, and something we considered carefully. The irony of annoying people in order to show how annoying they find being studied is not lost on me.  But would I do similar studies again? Yes, if and only if the benefit outweighed the harm.

This data was collected in 2003 and the results were published in 2004. I bring this up now because of the recent uproar about a study published by Facebook researchers in PNAS in 2014. Our chatroom study makes the points that:

  • It is possible to do ethical research on Internet users without their consent.
  • It is possible to do ethical research that annoys Internet users.
  • The potential benefit of the study needs to outweigh the harm.
  • Annoying people does some harm, and most users are annoyed by being studied without their consent.

In my opinion it was questionable judgment for Kramer et al. to see if filtering people’s newsfeeds to contain fewer positive emotional words led them to post more content with fewer positive words themselves.  But here’s a harder question: If they had only done the happy case (filtering people’s newsfeeds to have fewer negative words), would you still have concerns about the study?  This separates the issue of potential harm from the issue of lack of control.  I believe we can have a more productive conversation about the issues if we separate these questions. I personally would have no objections to the study if it had been done only in the happy case.

I suspect that in part it’s the control issue that has provoked such strong reactions in people. But it’s not that users lost control in this one instance—it’s that they never had it in the first place. The Facebook newsfeed is curated by an algorithm that is not transparent. One interesting side effect of the controversy is that people are beginning to have serious conversations about the ways that we are giving up control to social media sites.  I hope that conversation continues.

If you’re annoyed by my claiming the right to (sometimes, in a carefully considered fashion) annoy you, leave me a comment!


(Edited to more precisely describe the Facebook study–thanks to Mor Naaman for prodding me to fix it.)

  1. July 8, 2014 at 3:13 pm

    Thanks for writing this, Amy!

    A thought experiment I’m trying to figure out: what if we expected the effect to go one direction (positive posts –> write more positive posts) and designed the experiment that way, but the result was that the effect was reversed (positive posts –> write more negative posts)? That might occur if the positive posts prompted social comparison.

    • July 8, 2014 at 3:17 pm

      It’s a good point, Michael, and in fact the PNAS paper discusses the social comparison hypothesis as a key motivation. I can’t really wrap my mind around the idea of spreading sunshine as unethical, though. 🙂

      What I didn’t discuss in the post (didn’t want to squeeze in too much stuff pulling in different directions) is the whole issue of whether the study’s method is sound. A few folks have written about the limitations of LIWC for social media/short texts. And if the method is flawed then no degree of risk to users is justified, because there is no benefit!

      • July 9, 2014 at 11:53 am

        I agree, there has been surprisingly little discussion of the validity of the results. My personal view is that the results are probably valid (in terms of sociolinguistics), but that the effects on the construct of “emotions” have been wildly exaggerated by the media, and somewhat over-blown by the researchers. A longer discussion is how the pressures of PNAS and journals of its kind (Science) produce these over-blown takeaways from essentially modest studies.

    • July 9, 2014 at 10:31 am

      Michael, a further thought: Another potential harm of filtering, even filtering out negative stuff, is missing something *important*. I.e., how come everyone but me knows that person x passed away? You could balance that by adding a threshold—if something gets a certain number of likes or comments, then it won’t get filtered even if the experiment wants to. But that argument conceptualizes Facebook as a news source with a responsibility to inform. Which maybe it’s becoming, but that’s certainly not how we position it generally…. Hmmn.

      • July 9, 2014 at 10:41 am

        Yes, I agree with the difficulty. One bright side is that I’m sure IRBs have norms for this kind of issue. e.g., if all animal trials suggest that my medical study will have a positive effect, but it has a negative side effect, what do we do? Cut off the study early?

        A side thought: if you had asked me a year ago, I would have been 100% certain that the social computing study that would become infamous would be because of third-party consent issues, not “first-party” consent. I’m still astounded by this. Or maybe that one is still to come.

  2. July 9, 2014 at 8:39 am

    Thanks for your article, Amy! I think your experiment was interesting, though awfully Ouroboros. In the case of the FB experiment, don’t you think a repeated-measures study, where they followed up a period of more negative posts with more positive posts or vice versa, would have helped?

    • July 9, 2014 at 10:26 am

      Wendy, interesting question. I guess I’m nervous about the potential harm to someone who was already having a rough time at that moment. For example, my father-in-law just passed away…. If you had filtered the newsfeed to be more negative at that moment for me or my husband, that would have been unfortunate. There will always be lots of people in that kind of situation. So I personally would hesitate to filter out happy content, even if you counterbalance it. But it’s a judgment call.

      I had to look up Ouroboros. Heh. I love that! Yes, that does indeed describe it. I’ll have to mail that to Jim….
