Archive

Posts Tagged ‘ethics’

More on TOS: Maybe Documenting Intent Is Not So Smart

February 29, 2016 1 comment

In my last post, I wrote that “Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law.” My friend Mako Hill (University of Washington) pointed out to me (quite sensibly) that this would get people in more trouble: it asks researchers to document their intent to break the TOS. He’s right. If we believe that under some circumstances breaking TOS is ethical, then requiring researchers to document their intent is not strategic.

This leaves us in an untenable situation. We can’t engage in formal review of whether a particular TOS violation is justified, because that would make potential legal problems worse. Of course we can encourage individuals to be mindful and not break TOS without good reason. But is that good enough?

Sometimes TOS prohibit research for good reason. For example, YikYak is trying to abide by users’ expectations of ephemerality and privacy. People participate in online activity with a reasonable expectation that the TOS are rules that will be followed, and they rely on that in deciding what they choose to share. Is it fair to me if my content suddenly shows up in your research study, when that is clearly prohibited by the TOS? Do we really trust individual researchers to decide, with no outside review, when breaking TOS is justified? When I have a tricky research protocol, I want review. Just letting each researcher decide for themselves makes no sense. The situation is a mess.

Legal change is the real remedy here, such as passing Aaron’s Law, and possibly an exemption from TOS for researchers (in cases where user rights are scrupulously protected).

Do you have a better solution?  I hope so. Leave me a comment!

Do Researchers Need to Abide by Terms of Service (TOS)? An Answer.

February 26, 2016 8 comments

The TL;DR: Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law. No one should deliberately break the law without being aware of potential consequences.

Do social computing researchers need to follow Terms of Service (TOS)? Confusion on this issue has created an ethical mess. Some researchers choose not to do a particular piece of work because they believe they can’t violate TOS, and then another researcher goes and does that same study and gets it published with no objections from reviewers. Does this make sense?

It happened to me recently. The social app YikYak is based in Atlanta, so I’m pretty sure my students and I started exploring it before anyone else in the field. But we quickly realized that the Terms of Service prohibit scraping the site, so we didn’t do it. We talked with YikYak and started negotiations about getting permission to scrape, and we might have gotten permission if we had been more persistent. But I felt like I was bothering extremely busy startup employees with something not on their critical path. So we quietly dropped our inquiries. Two years later, someone from another university published a paper about YikYak much like the one we wanted to write, using scraped data. This kind of thing happens all the time, and it isn’t fair.
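As an aside for researchers weighing scraping decisions: separate from the TOS, a site’s robots.txt file is one machine-readable signal of its crawling policy (following it does not imply TOS compliance, and ignoring it does not by itself break a contract). Here is a minimal sketch using Python’s standard-library urllib.robotparser; the example.com paths are hypothetical.

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# In a real check you would call rp.set_url("https://example.com/robots.txt")
# followed by rp.read(); here we parse sample policy lines directly.
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# can_fetch(user_agent, url) reports whether the policy permits crawling a URL.
print(rp.can_fetch("*", "https://example.com/private/feed"))  # False
print(rp.can_fetch("*", "https://example.com/public/page"))   # True
```

Note that this answers only the narrow mechanical question; whether a study should proceed is still the ethical and legal question this post is about.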

There are sometimes good reasons why companies prohibit scraping. For example, YikYak users have a reasonable expectation that their postings are ephemeral. Having them show up in a research paper is not what they expect. Sometimes a company puts up research prohibitions because they’re trying to protect their users. Can a University IRB ever allow research that is prohibited in a site’s TOS?

Asking a company for permission to collect data is sometimes successful. Some companies have shared huge datasets with researchers and learned great things from the results. It can be a win-win situation. If it’s possible to request and obtain permission, that’s a great option—provided the corporation doesn’t demand control over what is said about its site in return. The tricky question is whether researchers in these situations will be less than honest in publishing findings, for fear of losing future data access.

Members of the research community right now are confused about whether they need to abide by TOS. Reviewers are confused about whether this issue is in their purview—should they consider whether a paper abides by TOS in the review process? Beyond these confusions lurks an arguably more important issue: What happens when a risk-averse company prohibits reasonable research? This is of critical importance because scholars cannot cede control of how we understand our world to corporate interests.

Corporations are highly motivated to control what is said about them in public. When I was co-chair of CSCW in 2013, USB sticks of the proceedings were in production when I received an email from BigCompany asking that a publication by one of their summer interns be removed. I replied, “It’s been accepted for publication, and we’re already producing them.” They said, “We’ll pay to have them remade.” I said, “I need to get this request from the author of the paper, not company staff.” They agreed, and a few hours later a very sheepish former summer intern at BigCompany emailed me requesting that his paper be withdrawn. He had submitted it without internal approval. ACM was eager to make BigCompany happy, so it was removed. BigCompany says it was withdrawn because it didn’t meet their standards for quality research—and that’s probably true. But it’s also true that the paper was critical of BigCompany, and they want to control messaging about them. Companies do have a right to control publication by their employees, and the intern was out of line. Of course companies want to control what their employees say about them in public—that makes sense. But what are the broader implications if they also prohibit outside analysis?

If online sites both control what their employees say and prohibit independent analysis, how do we study their impact on the world? Can anyone criticize them? Can anyone help them to see how their design choices reshape our culture, and reshape people’s lives?

From this perspective, one might argue that it can be ethical to violate Terms of Service. But it’s still not legal. TOS can be legally binding, and simply clicking through them is typically interpreted as assent. They are rarely enforced, but when they are, the consequences can be devastating, as the sad death of Internet pioneer Aaron Swartz shows. Swartz was accused of little more than violating TOS, and charged with multiple felonies under the Computer Fraud and Abuse Act (CFAA). The stress of his prosecution led to his suicide.

For another example, my colleague Brian Larson pointed me to the case Palmer v. KlearGear. John Palmer and Jennifer Kulas left a negative review of KlearGear on the site Ripoff Report after the item they ordered was never sent. KlearGear tried to enforce an anti-disparagement clause in its TOS and demanded the review be taken down. Since you can’t delete reviews on Ripoff Report without paying a $2,000 fee, they declined, which set in motion a multi-year legal battle. Pending legislation in California may make such clauses illegal. However, the consequences for individuals of violating terms—even absurd terms—remain potentially high.

TOS are contracts. Larry Lessig points out that “Contracts are important. Their breach must be remedied. But American law does not typically make the breach of a contract a felony.” A proposed legal change, “Aaron’s Law,” would limit the scope of the CFAA so that a TOS violation is treated as what it is, a breach of contract, rather than a felony. Breaches of contract often carry limited penalties if there is no real harm done. Researchers should keep an eye on this issue—unless Aaron’s Law passes, ridiculous penalties are still the law.

We’re in a quandary. We have compelling reasons why violating TOS is sometimes ethical, but it’s still not legal. So what are we as a field supposed to do? Here’s my answer:

If an individual chooses to take on the legal risk of violating TOS, they need to justify why. This is not something to do lightly. Work published using data obtained by violating TOS must clearly acknowledge and justify the violation. Work that breaks TOS with no justification should be rejected by reviewers. Reviewers should proactively review a site’s TOS if it’s not explicitly discussed in the paper.

However, you should think carefully about doing work that violates TOS collaboratively with subordinates. Can you as faculty take this risk yourself? Sure. But faculty are in a position of power over students, who may have difficulty saying no. Senior faculty also have power over junior faculty. If a group of researchers of differing seniority wants to do this kind of work, the more senior members need to be careful that there is no implicit coercion to participate caused by the unavoidable power relations among members of the research team.

I believe we need legal change to remedy this state of affairs. Until that happens, I would encourage friends to be cautious. Someone is going to get in hot water for this—don’t let it be you.

Do I like the fact that corporations have such control over what is said about them? I do not—not at all. But legal consequences are real, and people should take on that risk only when they have good reason and know what they are doing, without any implicit coercion.

In summary, I am proposing:

Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law. Reviewers should proactively check a site’s TOS if it is not discussed in the paper.

If a team of researchers who choose to violate TOS includes members of different academic ranks (e.g., tenured, pre-tenure, students), then the more senior authors should seriously consider whether the more junior participants are truly consenting and not implicitly pressured.

Professional societies like ACM and IEEE should advocate for legal reform of the Computer Fraud and Abuse Act (CFAA), such as the proposed Aaron’s Law.

Do you agree? Leave me a comment!

Thanks to Casey Fiesler and Brian Larson for helpful comments on this post.

Dear Nonprofits: Software Needs Upkeep (Why we need better education about software development and professional ethics)

March 30, 2015 5 comments

A friend who is president of a nonprofit came to see me last week with a problem: he doesn’t know how to maintain his organization’s mobile app. They worked hard to get a grant and used the money to hire a web design firm to build them a mobile app. Seems like a nice idea, right? Except for one problem: they have no ongoing funding for software updates and design changes. They had a one-time grant, and they spent it all on their first version. The first version is not bad–it works. But that’s kind of like saying “we made a version of Facebook that works years ago, so we’re done, right?” That doesn’t explain what all those employees in Menlo Park are doing, working sixty-hour weeks.

Anyone who works in the software industry knows that software needs ongoing love and care. You’ll never get the functionality quite right–design has to evolve over time. And even if you do get it mostly right, there will be new releases of operating systems and new devices that break the old code. It will need to be updated.

Giving someone a first version of software and walking away is rather like selling them a horse knowing that they have no barn and no money for grooming or hay or vet bills. The upkeep is the issue, not the cost of the horse. The well-known web design firm that sold my friend a horse with no barn should be ashamed. Because they knew.

Nonprofits are particularly vulnerable when they have limited in-house technical capability. They are completely dependent on the vendor in every phase of the project. Dependent on the vendor’s honesty and forthrightness as well as the quality of the product they deliver.

This particular vendor just informed the nonprofit that they would not be able to support future software changes because “their business is going in a new direction.” Now there’s a line for you. They knew that supporting the nonprofit was a losing proposition, from a financial perspective. It’s the business equivalent of a one-night-stand: that was nice, but I don’t want to see you again.

For those of you running small organizations, please think hard about how you are going to maintain any software you buy. For those of you running web design firms, think hard about whether you are serving the best interests of your clients in the long run. I imagine the staff who sold my friend the app are thinking “we delivered what we agreed to,” and don’t see any issue. But you know better and need to hold yourselves to a higher standard.

This is not a new phenomenon. Cliff Lampe found the same thing in a study of three nonprofits. At the root of the problem are two shortcomings in education. First, so that more small businesses and nonprofits don’t keep making this mistake, we need education about the software development process as part of the standard high-school curriculum. There is no part of the working world that is not touched by software, and people need to know how it is created and maintained. Even if they have no intention of becoming developers, they need to know how to be informed software customers. Second, for the people at web design firms who keep taking advantage of customers, there seems to be a lack of adequate professional-ethics education. I teach students in my Computers, Society, and Professionalism class that software engineers have a special ethical responsibility, because the client may not understand the problem domain and is relying on the knowledge and honesty of the developer. More people need to get that message.

Responding to an earlier version of this post, Jill Dimond makes the excellent point that part of the problem lies with funders of nonprofits. It’s more appealing to fund something new than to sustain something already funded. Funders should take a lesson from Habitat for Humanity, which makes sure to give people a house they are financially able to maintain. Most funders act more like reality television shows that give a family a mansion it can’t afford. And then we all act surprised when the family loses the home in bankruptcy. Funders need to plan for the long term, or else why bother at all?


Is Online Cheating Accelerating?

March 29, 2012 1 comment

Grad student teams in my Design of Online Communities class handed in super qualitative studies of seven online sites this term. Grading the papers, I couldn’t help noticing that three of the seven sites were fundamentally wrestling with issues of student cheating online. On OpenStudy and StackOverflow, students regularly post their homework questions and wait for others to answer. Neither site is quite sure what to do about the problem. Answering questions is essential to their mission. How do you distinguish between getting legitimate help and outsourcing your work?

A team of students from Korea studied a site called GoHackers, which Korean students use for test preparation for study-abroad tests like the GRE and the TOEFL. The electronic versions of the tests generally reuse questions from a pool, so if each test taker remembers one question, together students can quickly build a comprehensive database. One interview subject had posted a particularly thorough test guide online, and another student asked him for his autograph. Our student researchers explicitly asked site members whether they had any ethical or legal qualms about the test-prep site, and no one they interviewed was concerned at all.

Of course it’s a coincidence that three of seven papers touched on this theme this term. And cheating is not a new phenomenon–far from it. But what is in fact new is the ease with which it can be accomplished. It’s not simply a little easier–it’s a lot easier, and that is leading to a different magnitude and type of problem.

If there’s a silver lining, it’s that this trend may challenge us teachers to rethink our practices–to rethink what makes a good assignment or test. To rethink what the purposes of “homework” and “test” are anyway, and how those goals can better be met, perhaps with more authentic and contextual activities. And to pay more attention to ethics education and meta-cognitive awareness in our students: making sure we make it clear to students why they are doing what they are doing for school.
