My husband said at the breakfast table this morning:
“I saw an article yesterday that Peyton Hillis is number two on the Giants’ depth chart. And you know what? I don’t care!”
He grinned and we exchanged fist bumps. Hooray for not needing to know the Giants’ depth chart!
I have played fantasy football since 2001 and been commissioner of a league since 2002, and this year I quit. So did my husband. We’re relieved.
There’s a lot to like about fantasy football. I feel a genuine sense of camaraderie with the friends I play with. I love statistics, and poring over charts to find the overlooked gem of a player is great fun. I’m not bad at it–I almost always make the playoffs (though I rarely actually win the league). But about three years ago, I stopped looking forward to my annual summer pre-draft research, and started dreading it.
Success at fantasy football is built on three things: knowledge, strategy, and luck. I am a bit deficient in the knowledge department (I like reading sports news, but I’m not obsessed with it), but I like to think that I make up for that in the strategy department. Which adds up to making me a pretty good player. But why did it stop being fun?
Fantasy football isn’t just something you do in addition to watching football–it transforms the entire viewing process. And that’s both good and bad. The good part is that I can be watching a game between two teams I don’t particularly care about and rejoice when a player on my fantasy team scores. The bad part is that I can be watching what is truly a great football game, but fail to see it. Instead of seeing the Broncos’ offense as a thing of beauty, I’m thinking “Oh no–don’t throw it to Wes, throw it to Demaryius!” In fact I’m not watching the real game at all–I’m watching the fantasy game, and whether Peyton gets the ball to my man Demaryius Thomas is the only thing I’m actually seeing. Which is particularly bizarre if the Broncos happen to be playing my home team, the Atlanta Falcons. So we intercept a ball targeted at Demaryius and I’m sad? Wait, what am I cheering for–for my fantasy players to score, or for my real team to win? Which game am I even watching–the real one or the fantasy one? You’ll often find me in our seats at the Georgia Dome hitting reload on my phone–forget what’s on the field in front of me: how are my fantasy players doing?
Fantasy football also has a crazy frustration factor–injuries. Sometimes these are foreseeable–if you draft someone with a history of injuries, you know you’re taking a risk. But some of them are just random–and even more so when the injury occurs off the football field.
I’ve gotten better over time about not being overinvested in my fantasy team. Sometimes on a fall Sunday if we’re out for a hike, I actually can wait until we’re home to check my fantasy stats, instead of reloading them on the trail. But it’s still hard not to feel like you’re under a black cloud on Sunday if everything is going badly. Or to grin like a Cheshire cat if things are going well. But if my husband and I are both playing in the same league, how often is it that we’re both rejoicing at our fantasy football luck? Someone is usually fumbling their way through a weekend disappointment. Not that we care that much–we don’t–but it still can be dispiriting. So in the end the game does not improve our net household happiness.
For this year, I say goodbye to fantasy football and hello to real football. And maybe my former fantasy football buddies will watch a real game with me some time.
At this year’s CRA Snowbird conference (the every-other-year gathering of chairs of CS departments), I organized two panels on MOOCs and online education. While I’m told that Snowbird 2012 was dominated by hyperbole about MOOCs, our discussion this year was eminently sensible. In our panel “MOOCs and Online Education: The Evolving Big Picture,” Nelson Baker (Georgia Tech), John Mitchell (Stanford), and Marian Petre (The Open University) talked about the realities. There’s a lot we can do with online education. It’s wonderful that GT’s new online master’s program in computer science is reaching working professionals who otherwise couldn’t pursue higher education. But how to do it well and how to make the bottom line add up are challenges. It’s not cheaper if you do it right. The panel was standing room only, and both the positive hype and negative hype were absent. People were talking sense.
In our second panel, we discussed MOOCs and online education as active areas of computer science research. Marti Hearst (UC Berkeley), Scott Klemmer (UCSD), and Rob Miller (MIT) showed some current research in progress on how to design new software for online education inspired by good pedagogy. Right now we’re still in the horseless carriage stage of online ed–trying to understand the new medium in terms of the old one. How to do this well is an open area for research. And we need research done in iSchools, ed schools, and computer science departments. There is a complicated interaction between what the technology can do and what good pedagogy says we should do. Making those work together is a challenge. And department chairs and deans need to think hard about whether they are able to fully support faculty members doing such interdisciplinary work.
One thorny area that needs further community discussion is research ethics. Whenever you do research on students, you need to recognize that there is an unavoidable power relationship between faculty and students, particularly if investigators are doing research on their own classes. Petre emphasized that as faculty we have a duty of care. The rule book on the ethics of researching real students in online classes is still being written, and it has more nuance and complication than recent controversies about social network sites conducting research on their members.
What was noticeably absent from our online ed mini-track was hype–both positive and negative. The truth is somewhere in the middle, and is much more complicated than you might think. And we’re just at the beginning.
I had a funny conversation years ago with a faculty member at MIT who has taught artificial intelligence (AI) for many years. At the time, AI was unfashionable. And he said he liked that better, because when AI was trendy they got lots of shallow people going into the field for the wrong reasons–just because it was “hot.” During the “AI Winter” when it was unfashionable, he had a smaller influx of potential students–but nicer ones, who were more sincerely interested in the discipline. Thank heavens we have gotten past the latest hype bubble about online ed, and are left with sincere people working on some interesting and worthy problems!
I deliberately annoyed Internet users without their consent in the name of Science, and I would do it again.
How do people on social media feel about being studied by researchers? To shed some light on this topic, Jim Hudson and I conducted a study in which we entered chatrooms and told people we were studying them. Actually, what we were really studying was whether they kicked us out of the chatroom.
This is not something we did lightly—we got IRB approval for our study, and it went to full board review. For those of you keeping score, this was a full waiver of consent for a deceptive study (we said we were studying language online). In the study, if we weren’t kicked out of the chatroom after five minutes, we posted a debriefing message about the real purpose of the study with a URL for more information. If we were already kicked out, we did not disturb participants further. The study was approved because our IRB felt there was minimal risk to participants and the study could not practicably be carried out without the waiver.
We used the user name “Chat_Study.” We eliminated chatrooms with names that suggested sensitive topics (like health support groups), and we chose only groups with a channel operator present who had the power to kick us out. In the remaining groups, we entered, waited one minute, and posted one of four messages:
- No message: None
- Recording message: We are researchers recording this chatroom for a study on language use in online environments. For questions or comments, email firstname.lastname@example.org. Thank you!
- Opt out message: We are researchers recording this chatroom for a study on language use in online environments. If you don’t want to be recorded, please whisper “Chatstudy opt out” to us. For questions or comments, email email@example.com. Thank you!
- Opt in message: We are researchers and would like to record this conversation for a study on language use in online environments. If we may record you, please whisper “Chatstudy volunteer” to us. For questions or comments, email firstname.lastname@example.org. Thank you!
So what happened? In short, almost no one opted in or out, and we were kicked out of the chatroom in less than five minutes 63.3% of the time (in the conditions in which we posted a message). In the control condition in which we entered but posted no message, we were kicked out 29% of the time. The paper includes many of the “boot messages” we received. In addition to lots of messages like “go away” or “no spamming,” we received some more colorful ones. My favorite is “Yo momma so ugly she turned Medusa to stone!”
Intriguingly, for every additional 13 people in a chatroom, our chance of getting kicked out went down by half. Our hunch is that the smaller the group, the more intimate it feels, and the more our presence felt like an intrusion. That is, three friends discussing their favorite sports team is a quite different social context from 40 strangers playing a chatroom-based trivia game.
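To make that halving relationship concrete, here is a toy sketch of what it implies. This is illustrative only: the exponential form and the use of the 63.3% overall rate as a baseline are my assumptions for demonstration, not the actual statistical model fit in the paper.

```python
# Toy model of the reported trend: the chance of being kicked out
# halves for every additional 13 people in the chatroom.
# Baseline rate and functional form are illustrative assumptions.

def kick_out_probability(room_size, base_rate=0.633, halving_interval=13):
    """Estimated probability of being kicked out of a room of `room_size` people."""
    return base_rate * 0.5 ** (room_size / halving_interval)

# Under this toy model, a 3-person room is far riskier for the
# researcher than a 40-person room:
for n in (3, 16, 29, 42):
    print(f"{n:2d} people: {kick_out_probability(n):.1%}")
```

Under these assumptions, the estimated probability for a room of 16 people is exactly half that for a room of 3, matching the "every additional 13 people" phrasing.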
I believe this study was ethical. What we learned from it—how people really feel about being studied, for that context—outweighs the annoyance we caused. This is a judgment call, and something we considered carefully. The irony of annoying people in order to demonstrate how annoying our presence was is not lost on me. But would I do similar studies? Yes, if and only if there was benefit that outweighed the harm.
This data was collected in 2003 and the results were published in 2004. I bring this up now because of the recent uproar about a study published by Facebook researchers in PNAS in 2014. Our chatroom study makes the points that:
- It is possible to do ethical research on Internet users without their consent.
- It is possible to do ethical research that annoys Internet users.
- The potential benefit of the study needs to outweigh the harm.
- Annoying people is doing some harm, and most users are annoyed by being studied without their consent.
In my opinion it was questionable judgment for Kramer et al. to see if filtering people’s newsfeeds to contain fewer positive emotional words led them to post more content with fewer positive words themselves. But here’s a harder question: If they had only done the happy case (filtering people’s newsfeeds to have fewer negative words), would you still have concerns about the study? This separates the issue of potential harm from the issue of lack of control. I believe we can have a more productive conversation about the issues if we separate these questions. I personally would have no objections to the study if it had been done only in the happy case.
I suspect that in part it’s the control issue that has provoked such strong reactions in people. But it’s not that users lost control in this one instance—it’s that they never had it in the first place. The Facebook newsfeed is curated by an algorithm that is not transparent. One interesting side effect of the controversy is that people are beginning to have serious conversations about the ways that we are giving up control to social media sites. I hope that conversation continues.
If you’re annoyed by my claiming the right to (sometimes, in a carefully considered fashion) annoy you, leave me a comment!
(Edited to more precisely describe the Facebook study–thanks to Mor Naaman for prodding me to fix it.)
My class CS 6470 Design of Online Communities is structured around having students do a qualitative study of an online community using participant observation and interviewing. Over the years since I first taught the class in 1998, the core assignment has evolved in a number of ways. One recent change that surprised me is the need to rethink the assignment’s focus on a single site.
In spring 2013, students Patrick Mize, Michael Roberts and Aditya Tirodkar chose to study the site Equestria Daily, a blog for bronies. (I am writing about their work with their permission.) As I’ve written, bronies are adult, male fans of the television show My Little Pony: Friendship is Magic. As the students began to study the site, they quickly learned that it was impossible to understand Equestria Daily without understanding a constellation of other sites including Equestria After Dark, PonyChan, Brony forums on Deviant Art and Meetup, and others. They made this diagram:
Taken together, brony online activity forms a kind of ecosystem. Too many people talking about ponies on 4chan led to the creation of Ponychan. A policy change in what sort of content is allowed on Equestria Daily changed what is posted on Equestria After Dark. Pony fans use all these different sites in a complementary fashion, and user behavior is not confined to one site. In fact it’s impossible to tell the story of Equestria Daily without explaining its relationship to this ecosystem of pony activity.
As you’ve probably noticed, much of this activity is oriented towards “adult” content. Any cultural content that inspires dedicated fans can be repurposed towards erotica. It’s not surprising that a cultural meme that tends to appeal to individuals in a more typically libidinous stage of life would be used in this fashion. And as these things go, pony erotica tends to be relatively tame. I certainly see less potential harm in original art about ponies compared to adult content made from photography of real people who may or may not have been exploited in the taking or eventual use of their images.
The need to think about ecosystems of online sites is not specific to bronies. I was meeting recently with my colleague Alex Orso to discuss his online software engineering class in our online master of computer science degree program (OMS), and he lamented that supporting his course involves using half a dozen different tools. There’s our institutional grading software, our vendor’s class delivery platform, our normal class support software, a third-party class discussion tool, a software repository…. Saying you would study “the software” or “the website” for OMS is an anachronism. There are many platforms and tools, and the challenges are all at the seams between them. Similarly, my student Joseph Gonzales is studying The Greatest International Scavenger Hunt the World Has Ever Seen (GISHWHES), and finding that teams of hunters all use a host of different tools and change their tool use through different phases of their project.
It doesn’t surprise me that people use multiple tools and online sites. It does surprise me that this happens to a degree that it can be hard to even discuss one single site in isolation. Time to rewrite that class assignment!
Thanks to Patrick, Michael, and Aditya for a great project!
Twitter research ethics are complicated, and deserve a more nuanced treatment than my short post from yesterday. I’ll take a stab here at saying a bit more:
Question 1: Is analyzing Twitter “human subjects research”?
I want to start by looking at US law. (Note that this is only applicable in the US and only applies to federally funded research, though some companies choose to voluntarily follow these rules and most universities apply the rules to all research whether it is federally funded or not.) The policy states that several categories of work are exempt from the rules, including:
(4) Research involving the collection or study of existing data, documents, records, pathological specimens, or diagnostic specimens, if these sources are publicly available or if the information is recorded by the investigator in such a manner that subjects cannot be identified, directly or through identifiers linked to the subjects.
It’s pretty clear that Twitter data (on open accounts) is existing data that is already publicly available. So legally speaking, I believe researchers are well within their rights to simply use it at will. It’s public, so you can use it. But should you?
Ethical is a higher standard than legal. As Jim Hudson and I found in our study of people chatting on Internet Relay Chat (IRC), people often misunderstand the public nature of online communications. This leads to my second question:
Question 2: If people have expectations of privacy that differ from expert opinion on what is “reasonable,” does that need to be taken into account?
I don’t think there’s a simple answer to that question. It probably has to be addressed on a case-by-case basis. And if people’s expectations are persistent and continue to differ from the written rules, maybe the rules need to evolve.
If you do consider research on Twitter to be human subjects research, then you need to apply for IRB clearance, and you probably have good grounds to request a waiver of consent. A waiver of consent is possible in these circumstances:
(d) An IRB may approve a consent procedure which does not include, or which alters, some or all of the elements of informed consent set forth in this section, or waive the requirements to obtain informed consent provided the IRB finds and documents that:
(1) The research involves no more than minimal risk to the subjects;
(2) The waiver or alteration will not adversely affect the rights and welfare of the subjects;
(3) The research could not practicably be carried out without the waiver or alteration; and
(4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.
In such a case, an IRB might request that the tweets be anonymized, and this would contribute to making the case that the work presents minimal risk. This sounds like a great approach for research on sensitive topics, like epidemiology for example.
Because part of my research is about people’s creative accomplishments online, I am more likely to encounter situations where anonymizing people is unethical because it denies them credit for their work. We only name people in accounts at their written request, by marking that on a consent form. And our projects generally use mixed methods—with a combination of analyzing people’s online postings and interviewing them. I believe this mixed methods approach often gives better research results, and necessarily makes the work human subjects research rather than merely analysis of public information.
I personally prefer to view Twitter research as human subjects research and apply for a waiver of consent. Thinking through a formal IRB application and soliciting help from IRB members can help you to think through the details of how to treat your subjects in accordance with principles of beneficence, justice, and respect for persons. Ethical is after all a higher standard than merely legal.
That said, the public nature of Twitter data is hard to deny. Maybe the rule about pre-existing, public information needs to be rethought. Something more nuanced would serve us better.
In this article about tweets being made available to researchers, the authors quote two epidemiologists saying ethical use of Twitter should anonymize tweets:
Caitlin Rivers and Bryan Lewis, computational epidemiologists at Virginia Tech, published guidelines for the ethical use of Twitter data in February. Among other things, they suggest that scientists never reveal screen names and make research objectives publicly available. For example, although it is considered ethical to collect information from public spaces—and Twitter is a public space—it would be unethical to share identifying details about a single user without his or her consent. Rivers and Lewis argue that it is crucial for scientists to consider and protect users’ …
I disagree. Of course it may be more often true for epidemiology, but it really depends on what kind of study you’re doing. As Kurt Luther, Casey Fiesler, and I have written, sometimes anonymizing users may be morally wrong because you are denying them credit for their work. (“That tweet was really funny–I want my name on it!”) Twitter is public, published material. The contents of private Twitter feeds are for followers only, but the contents of public feeds arguably are as public as a newspaper article. If you want to take extra precautions to anonymize people, that’s fine. But to say it’s always necessary is ridiculous. It depends on the type of study you’re doing.
Jim Hudson and I empirically studied how people often misunderstand how public their communications are. The complicated question that follows is: if user expectations are out of line with what experts would call “reasonable,” how should the scholarly community proceed? Dealing with things on a case-by-case basis is the best we can do for now.