Do Researchers Need to Abide by Terms of Service (TOS)? An Answer.

The TL;DR: Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law. No one should deliberately break the law without being aware of potential consequences.

Do social computing researchers need to follow Terms of Service (TOS)? Confusion on this issue has created an ethical mess. Some researchers choose not to do a particular piece of work because they believe they can’t violate TOS, and then another researcher goes and does that same study and gets it published with no objections from reviewers. Does this make sense?

It happened to me recently. The social app YikYak is based in Atlanta, so I’m pretty sure my students and I started exploring it before anyone else in the field. But we quickly realized that its Terms of Service prohibit scraping the site, so we didn’t do it. We talked with YikYak and started negotiating for permission to scrape, and we might’ve gotten it if we had been more persistent. But I felt like I was bothering extremely busy startup employees with something not on their critical path, so we quietly dropped our inquiries. Two years later, someone from another university published a paper about YikYak much like the one we wanted to write, using scraped data. This kind of thing happens all the time, and it isn’t fair.

There are sometimes good reasons why companies prohibit scraping. For example, YikYak users have a reasonable expectation that their postings are ephemeral; having those postings show up in a research paper is not what they expect. Sometimes a company puts up research prohibitions precisely because it’s trying to protect its users. Can a university IRB ever allow research that a site’s TOS prohibits?

Asking a company for permission to collect data is sometimes successful. Some companies have shared huge datasets with researchers and learned great things from the results. It can be a win-win situation. If it’s possible to request and obtain permission, that’s a great option—provided the corporation doesn’t demand control over what is said about its site in return. The tricky question is whether researchers in these situations will be less than honest in publishing findings because they fear losing future data access.

Members of the research community right now are confused about whether they need to abide by TOS. Reviewers are confused about whether this issue is in their purview—should they consider whether a paper abides by TOS in the review process? Beyond these confusions lurks an arguably more important issue: What happens when a risk-averse company prohibits reasonable research? This is of critical importance because scholars cannot cede control of how we understand our world to corporate interests.

Corporations are highly motivated to control what is said about them in public. When I was co-chair of CSCW in 2013, USB sticks of the proceedings were in production when I received an email from BigCompany asking that a publication by one of their summer interns be removed. I replied, “It’s been accepted for publication, and we’re already producing them.” They said, “We’ll pay to have them remade.” I said, “I need to get this request from the author of the paper, not company staff.” They agreed, and a few hours later a very sheepish former summer intern at BigCompany emailed me requesting that his paper be withdrawn. He had submitted it without internal approval. ACM was eager to make BigCompany happy, so it was removed. BigCompany says it was withdrawn because it didn’t meet their standards for quality research—and that’s probably true. But it’s also true that the paper was critical of BigCompany, and they want to control messaging about them. Companies do have a right to control publication by their employees, and the intern was out of line. Of course companies want to control what their employees say about them in public—that makes sense. But what are the broader implications if they also prohibit outside analysis?

If online sites both control what their employees say and prohibit independent analysis, how do we study their impact on the world? Can anyone criticize them? Can anyone help them to see how their design choices reshape our culture, and reshape people’s lives?

From this perspective, one might argue that it can be ethical to violate Terms of Service. But it’s still not legal. TOS can be legally binding, and simply clicking through them is typically interpreted as assent. They are rarely enforced, but when they are, the consequences can be devastating, as the sad death of Internet pioneer Aaron Swartz shows. Swartz was accused of little more than violating TOS, and charged with multiple felonies under the Computer Fraud and Abuse Act (CFAA). The stress of his prosecution led to his suicide.

For another example, my colleague Brian Larson pointed me to the case Palmer v. KlearGear. John Palmer and Jennifer Kulas left a negative review of KlearGear on the site Ripoff Report after the item they ordered was never sent. KlearGear tried to enforce an anti-disparagement clause in its TOS and demanded that the review be taken down. Since you can’t delete reviews on Ripoff Report without paying a $2,000 fee, they declined, which set in motion a multi-year legal battle. Pending legislation in California may make such clauses illegal. However, the consequences for individuals of violating terms—even absurd terms—remain potentially high.

TOS are contracts. Larry Lessig points out that “Contracts are important. Their breach must be remedied. But American law does not typically make the breach of a contract a felony.” A proposed legal change, “Aaron’s Law,” would limit the scope of the CFAA so that a breach of contract is treated like a breach of contract rather than a felony. Breaches of contract often carry limited penalties if no real harm is done. Researchers should keep an eye on this issue—unless Aaron’s Law passes, ridiculous penalties are still the law.

We’re in a quandary. We have compelling reasons why violating TOS is sometimes ethical, but it’s still not legal. So what are we as a field supposed to do? Here’s my answer:

If an individual chooses to take on the legal risk of violating TOS, they need to justify why. This is not something to do lightly. Work published using data obtained by violating TOS must clearly acknowledge and justify the violation. Work that breaks TOS with no justification should be rejected by reviewers. Reviewers should proactively review a site’s TOS if it’s not explicitly discussed in the paper.

However, you should think carefully about doing work that violates TOS collaboratively with subordinates. Can you as faculty take this risk yourself? Sure. But faculty are in a position of power over students, who may have difficulty saying no. Senior faculty also have power over junior faculty. If a group of researchers of differing seniority wants to do this kind of work, the more senior members need to make sure that the unavoidable power relations within the team do not create implicit pressure to participate.

I believe we need legal change to remedy this state of affairs. Until that happens, I would encourage friends to be cautious. Someone is going to get in hot water for this—don’t let it be you.

Do I like the fact that corporations have such control over what is said about them? I do not—not at all. But legal consequences are real, and people should take on that risk only when they have good reason and know what they are doing, without any implicit coercion.

In summary, I am proposing:

1. Reviewers should reject research done by violating Terms of Service (TOS) unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law. Reviewers should proactively check a site’s TOS if it is not discussed in the paper.

2. If a team of researchers who choose to violate TOS spans different academic ranks (e.g., tenured, pre-tenure, student), then the more senior authors should seriously consider whether the more junior participants are truly consenting and not implicitly pressured.

3. Professional societies like ACM and IEEE should advocate for legal reform of the Computer Fraud and Abuse Act (CFAA), such as the proposed Aaron’s Law.

Do you agree? Leave me a comment!

Thanks to Casey Fiesler and Brian Larson for helpful comments on this post.

  1. February 26, 2016 at 6:01 pm

    Really, really good stuff. Excellent argument.

    Josh

  2. February 26, 2016 at 6:32 pm

    I’m a bit confused by this line: “This kind of thing happens all the time, and isn’t fair.”

    If I understand your argument, someone who breaks ToS and gets a paper accepted is gambling that they won’t get sued/into trouble for the ToS violation. So, is the unfairness because this other group didn’t get into trouble for the ToS violation?

    More broadly, I’m worried that your proposal places too much burden on the reviewers, who are, for the most part, not prepared to read, understand, and interpret long, confusing, and probably overreaching ToS. Furthermore, for international academic communities, even the question of whether or not someone acted illegally might be hard to determine due to jurisdiction, overarching legislation (e.g., EU directives on privacy), international law, etc.

    Shouldn’t those questions be addressed at the IRB level? Perhaps we need to make sure that IRBs are asking the right questions when reviewing these sorts of proposals – e.g. Does your research, to the best of your knowledge, violate any ToS? If the research is problematic to begin with, it’s best to “nip it in the bud” before it’s carried out, no?

    I’m also not sure that openly announcing that someone broke the law (and the reasons why) is a good idea. It sounds like the sort of thing that could get you into much MORE hot water. As a related question, how do social scientists deal with disclosure in law-breaking situations? (e.g., an ethnographic study of criminal gangs where the researcher might witness lots of law-breaking, or where the researcher consumes illegal drugs as part of the process of gaining trust, etc.)

    • February 28, 2016 at 2:04 pm

      Part of the problem with relying on IRBs to handle this is that IRBs generally treat scraped data as non-human-subjects data, thus not needing IRB review. IRBs need to change the way they evaluate this kind of data, but many struggle with a lack of technical expertise on their staff, as many examples over the last decade (e.g., the Tastes, Ties, and Time study) attest. Therefore, we (researchers/online data experts) need to work more closely with IRBs to address this gray area in online data research.

  3. February 26, 2016 at 7:00 pm

    I see your point. I guess it’s reasonable to say, “I decided not to take the risk; someone else decided to take it. Those were our choices.”

    I do think there’s a problem when people do it without thinking about it.

    I see your point about announcing it in the paper, but “la la la, hope no one notices” doesn’t seem like particularly good protection….

    Lots to talk about!

  4. February 27, 2016 at 9:46 am

    On reflection, I’ve been wondering what the role of the IRB is here. It seems much of this research involves making observations about human participants (subjects), and it is consequently subject to (at least cursory) IRB review. At least some IRBs seem concerned about obtaining consent for access to physical research sites–how do they react when it comes to consent for access to a virtual research site? In short, is the ethical principle of consent for access to a research site in tension with the ethical problems raised by suppressing research because consent is denied?

    I thought for a moment that what the peer reviewers should be looking for is a statement from the researcher(s) that the IRB was apprised that the researcher(s) would be violating the TOU, and that the IRB permitted the research in spite of that. Of course, with risk-averse IRBs, that might prevent any of this research from being done, so it’s not much of a solution.

    • February 27, 2016 at 10:40 am

      Really interesting question. Someone should be checking, at some stage….

      I reviewed a controversial paper years ago where I said, “this is not ethical,” but other reviewers and program committee members responded saying, “an IRB approved it, so it’s not our place to inquire further.” That seems ridiculous to me. IRBs make mistakes, researchers make mistakes, reviewers make mistakes… we get there by double checking one another, right?

      That said, the inconsistency across IRBs is a huge issue. And the question of how much time we expect everyone to spend on all this is an issue as well….

  5. Jeremy B
    February 29, 2016 at 2:39 pm

    Thanks for writing this and thinking about it. Really interesting and raises some tricky issues. A few things come to mind for me.

    First, I’m inclined to agree with Jose’s comment above that this places a lot of burden on reviewers, and maybe IRBs too, to make decisions they aren’t equipped to make. If Terms are legal documents written by lawyers to be interpreted by other lawyers, my feeling is that it serves little good to have a room full of mostly-non-lawyer academics sit around and speculate about how lawyers might interpret them. That’s not to say that we should let lawyers make the decisions, but that we should ask them to help us understand how the Terms would actually be interpreted in a legal context and use that informed understanding to make our own educated decisions.

    Second, in making those decisions, I wonder how we should assess the risk/reward/ethics tradeoffs in these cases. My guess is that this could also be aided by some legal advice, but what — more broadly — are the conditions under which we’d deem this ok? Would it be related to privacy (e.g., publicly visible vs. private content), user expectations, identifiers (e.g., what if we can’t ask for permission or consent because people are anonymous), etc.? Lots of gray space here.

    Third, are we treating this as a special case simply because it’s a type of potentially criminal activity that we’re familiar with? Would we (or could we) reasonably expect others who commit potential crimes in a particular context to openly admit that they’ve done so? I’m thinking, for example, about repressive rules in certain government regimes, rules about trespassing or data privacy that may (deliberately or inadvertently) be violated in data gathering, etc. It seems in some ways odd to treat this as a special case when there are lots of parallel situations that we don’t always treat that way.

    • February 29, 2016 at 2:48 pm

      Lots of good issues there–thanks for the thoughts!
