Archive for the ‘ethics’ Category

More on TOS: Maybe Documenting Intent Is Not So Smart

February 29, 2016 1 comment

In my last post, I wrote that “Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law.”  My friend Mako Hill (University of Washington) pointed out to me (quite sensibly) that this would get people into more trouble–it asks people to document their intent to break the TOS.  He’s right.  If we believe that under some circumstances breaking TOS is ethical, then requiring researchers to document their intent is not strategic.

This leaves us in an untenable situation.  We can’t engage in a formal review of whether a given TOS violation is justified, because that review would make potential legal problems worse. Of course we can encourage individuals to be mindful and not break TOS without good reason. But is that good enough?

Sometimes TOS prohibit research for good reason. For example, YikYak is trying to abide by users’ expectations of ephemerality and privacy. People participate in online activity with a reasonable expectation that the TOS are rules that will be followed, and they can rely on that in deciding what they choose to share. Is it fair to me if my content suddenly shows up in your research study, when it’s clearly prohibited by the TOS?  Do we really trust individual researchers to decide when breaking TOS is justified with no outside review?  When I have a tricky research protocol, I want review. Just letting each researcher decide for themselves makes no sense. The situation is a mess.

Legal change is the real remedy here–such as passing Aaron’s Law, and also possibly an exemption from TOS for researchers (in cases where user rights are scrupulously protected).

Do you have a better solution?  I hope so. Leave me a comment!

Do Researchers Need to Abide by Terms of Service (TOS)? An Answer.

February 26, 2016 8 comments

The TL;DR: Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law. No one should deliberately break the law without being aware of potential consequences.

Do social computing researchers need to follow Terms of Service (TOS)? Confusion on this issue has created an ethical mess. Some researchers choose not to do a particular piece of work because they believe they can’t violate TOS, and then another researcher goes and does that same study and gets it published with no objections from reviewers. Does this make sense?

It happened to me recently. The social app YikYak is based in Atlanta, so I’m pretty sure my students and I started exploring it before anyone else in the field. But we quickly realized that the Terms of Service prohibit scraping the site, so we didn’t do it. We talked with YikYak and started negotiations about getting permission to scrape, and we might’ve gotten permission if we had been more persistent. But I felt like I was bothering extremely busy startup employees with something not on their critical path. So we quietly dropped our inquiries. Two years later, someone from another university published a paper about YikYak much like the one we wanted to write, using scraped data. This kind of thing happens all the time, and it isn’t fair.

There are sometimes good reasons why companies prohibit scraping. For example, YikYak users have a reasonable expectation that their postings are ephemeral. Having them show up in a research paper is not what they expect. Sometimes a company puts up research prohibitions because they’re trying to protect their users. Can a University IRB ever allow research that is prohibited in a site’s TOS?

Asking permission of a company to collect data is sometimes successful.  Some companies have shared huge datasets with researchers, and learned great things from the results. It can be a win-win situation. If it’s possible to request and obtain permission, that’s a great option—if the corporation doesn’t request control over what is said about their site in return. The tricky question is whether researchers will be less than honest in publishing findings in these situations, because they fear losing future data access.

Members of the research community right now are confused about whether they need to abide by TOS. Reviewers are confused about whether this issue is in their purview—should they consider whether a paper abides by TOS in the review process? Beyond these confusions lurks an arguably more important issue: What happens when a risk-averse company prohibits reasonable research? This is of critical importance because scholars cannot cede control of how we understand our world to corporate interests.

Corporations are highly motivated to control what is said about them in public. When I was co-chair of CSCW in 2013, USB sticks of the proceedings were in production when I received an email from BigCompany asking that a publication by one of their summer interns be removed. I replied, “It’s been accepted for publication, and we’re already producing them.” They said, “We’ll pay to have them remade.” I said, “I need to get this request from the author of the paper, not company staff.” They agreed, and a few hours later a very sheepish former summer intern at BigCompany emailed me requesting that his paper be withdrawn. He had submitted it without internal approval. ACM was eager to make BigCompany happy, so it was removed. BigCompany says it was withdrawn because it didn’t meet their standards for quality research—and that’s probably true. But it’s also true that the paper was critical of BigCompany, and they want to control messaging about them. Companies do have a right to control publication by their employees, and the intern was out of line. Of course companies want to control what their employees say about them in public—that makes sense. But what are the broader implications if they also prohibit outside analysis?

If online sites both control what their employees say and prohibit independent analysis, how do we study their impact on the world? Can anyone criticize them? Can anyone help them to see how their design choices reshape our culture, and reshape people’s lives?

From this perspective, one might argue that it can be ethical to violate Terms of Service. But it’s still not legal. TOS can be legally binding, and simply clicking through them is typically interpreted as assent. They are rarely enforced, but when they are, the consequences can be devastating, as the sad death of Internet pioneer Aaron Swartz shows. Swartz was accused of little more than violating TOS, and charged with multiple felonies under the Computer Fraud and Abuse Act (CFAA). The stress of his prosecution led to his suicide.

For another example, my colleague Brian Larson pointed me to the case Palmer v. Kleargear. John Palmer and Jennifer Kulas left a negative review of Kleargear on the site Ripoff Report after the item they ordered was never sent. Kleargear tried to enforce an anti-disparagement clause in their TOS, and demanded the review be taken down. Since you can’t delete reviews on Ripoff Report without paying a $2000 fee, they declined, which set in motion a multi-year legal battle. Pending legislation in California may make such clauses illegal. However, the consequences for individuals of violating terms—even absurd terms—remain potentially high.

TOS are contracts. Larry Lessig points out that “Contracts are important. Their breach must be remedied. But American law does not typically make the breach of a contract a felony.” A proposed legal change, “Aaron’s Law,” would limit the scope of the CFAA so that TOS violations are treated as breaches of contract rather than felonies. Breaches of contract often carry limited penalties if there is no real harm done. Researchers should keep an eye on this issue—unless Aaron’s Law passes, ridiculous penalties are still the law.

We’re in a quandary. We have compelling reasons why violating TOS is sometimes ethical, but it’s still not legal. So what are we as a field supposed to do? Here’s my answer:

If an individual chooses to take on the legal risk of violating TOS, they need to justify why. This is not something you can do lightly. Work published using data obtained by violating TOS must clearly acknowledge and justify the violation. Work that breaks TOS with no justification should be rejected by reviewers. Reviewers should proactively review a site’s TOS if it’s not explicitly discussed in the paper.

However, you should think carefully about doing work that violates TOS collaboratively with subordinates. Can you as faculty take this risk yourself? Sure. But faculty are in a position of power over students, who may have difficulty saying no. Senior faculty also have power over junior faculty. If a group of researchers of differing seniority wants to do this kind of work, the more senior members need to be careful that there is no implicit coercion to participate caused by the unavoidable power relations among members of the research team.

I believe we need legal change to remedy this state of affairs. Until that happens, I would encourage friends to be cautious. Someone is going to get in hot water for this—don’t let it be you.

Do I like the fact that corporations have such control over what is said about them? I do not—not at all. But legal consequences are real, and people should take on that risk only when they have good reason and know what they are doing, without any implicit coercion.

In summary, I am proposing:

Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law. Reviewers should proactively check a site’s TOS if it is not discussed in the paper.

If a team of researchers who choose to violate TOS includes members of different academic ranks (i.e., tenured, pre-tenure, students), then the more senior authors should seriously consider whether the more junior participants are truly consenting and not implicitly pressured.

Professional societies like ACM and IEEE should advocate for legal reform to the Computer Fraud and Abuse Act (CFAA), such as the proposed Aaron’s Law.

Do you agree? Leave me a comment!

Thanks to Casey Fiesler and Brian Larson for helpful comments on this post.

Dear Nonprofits: Software Needs Upkeep (Why we need better education about software development and professional ethics)

March 30, 2015 5 comments

A friend who is president of a nonprofit came to see me last week with a problem: he doesn’t know how to maintain their mobile app. They worked hard to get a grant, and used the money to hire a web design firm to make them a mobile app. Seems like a nice idea, right? Except for one problem: they don’t have ongoing funding for software updates and design changes. They had a one-time grant, and they spent it all on their first version. The first version is not bad–it works. But that’s kind of like saying “we made a working version of Facebook years ago, so we’re done, right?” That doesn’t explain what all those employees in Mountain View are doing, working sixty-hour weeks.

Anyone who works in the software industry knows that software needs ongoing love and care. You’ll never get the functionality quite right–design has to evolve over time. And even if you do get it mostly right, there will be new releases of operating systems and new devices that break the old code. It will need to be updated.

Giving someone a first version of software and walking away is rather like selling them a horse knowing that they have no barn and no money for grooming or hay or vet bills. The upkeep is the issue, not the cost of the horse. The well-known web design firm that sold my friend a horse with no barn should be ashamed. Because they knew.

Nonprofits are particularly vulnerable when they have limited in-house technical capability. They are completely dependent on the vendor in every phase of the project. Dependent on the vendor’s honesty and forthrightness as well as the quality of the product they deliver.

This particular vendor just informed the nonprofit that they would not be able to support future software changes because “their business is going in a new direction.” Now there’s a line for you. They knew that supporting the nonprofit was a losing proposition, from a financial perspective. It’s the business equivalent of a one-night-stand: that was nice, but I don’t want to see you again.

For those of you running small organizations, please think hard about how you are going to maintain any software you buy. For those of you running web design firms, think hard about whether you are serving the best interests of your clients in the long run. I imagine the staff who sold my friend the app are thinking “we delivered what we agreed to,” and don’t see any issue. But you know better and need to hold yourselves to a higher standard.

This is not a new phenomenon. Cliff Lampe found the same thing in a study of three nonprofits. At the root of the problem are two shortcomings in education. First, so that more small businesses and nonprofits don’t keep making this mistake, we need education about the software development process as part of the standard high-school curriculum. There is no part of the working world that is not touched by software, and people need to know how it is created and maintained. Even if they have no intention of becoming a developer, they need to know how to be an informed software customer. Second, for the people at web design firms who keep taking advantage of customers, there seems to be a lack of adequate professional ethics education. I teach students in my Computers, Society, and Professionalism class that software engineers have a special ethical responsibility because the client may not understand the problem domain and is relying on the knowledge and honesty of the developer. More people need to get that message.

Responding to an earlier version of this post, Jill Dimond makes the excellent point that part of the problem is with funders of nonprofits. It’s more appealing to fund something new than to sustain something already funded.  Funders should take a lesson from Habitat for Humanity, who make sure to give people a house that they are financially able to maintain.  Most funders are acting more like reality television shows that give people a mansion they can’t afford. And then we all act surprised when the family loses the home in bankruptcy. Funders need to plan for the long term, or else why bother at all?

Categories: ethics

Baths, Robots, and Agency

March 31, 2011 3 comments

Sherry Turkle gave a brilliant talk at the GVU Brown Bag at Georgia Tech today about her book Alone Together. Towards the end of the question session, she had a fascinating exchange with Carl DiSalvo about robots to bathe elders.  Sherry argued that people who no longer can bathe themselves should be bathed by caring humans.  (I can imagine the dialog: “Hello Mrs. Johnson! It’s time for your bath. I saw your son was here yesterday. Did you have a good visit?”)

Carl responded: We all agree that would be ideal. But in reality, the attendants are often workers paid minimum wage who are unkind to their charges. When you interview real nursing home patients on the subject, they all say “I’d rather be bathed by a robot who isn’t expected to care than by a human who fails to care.”

Here’s my thought in reply: What’s the difference between a robot that bathes you and one that you use to bathe yourself?  It’s a subtle point–a question of where the sense of agency resides.  (Of course when I’m done bathing myself, I’d also like a real human to ask how my visit with my son yesterday went.)

A hygiene-assist robot is an easier problem to solve than a sociable robot–one whose primary purpose is social or emotional.  Could we still keep the sense of agency with the person in those cases? It’s harder to understand what that might mean.  The problems Turkle raises in her book are serious.

This theme of agency and of designing to keep the sense of agency with the individual keeps cropping up in different areas of HCI. It feels to me like a core principle–something we should highlight in HCI classes and emphasize wherever possible in design. The more this technology pervades different aspects of life, the more human agency seems important.

 

Categories: Agency, computing, ethics

“Zones of Domination”

December 4, 2010 1 comment

In 2000, science fiction author Neal Stephenson gave an inspiring talk at the Computers, Freedom, and Privacy (CFP) conference. He entitled it “Zones of Domination.”  In the talk, he told the story of a whistleblower at the Hanford Nuclear Reactor. In the “big brother” model of authority, there is one entity and it is irredeemably evil. In Stephenson’s story, he followed our heroic whistleblower as forces from one federal government agency tried to frighten and falsely entrap him, but then the police and courts (local and federal) helped him resist and prevail. Stephenson’s point is that there is not one authority, but many. None are irredeemably evil. And the interesting activity is in the areas of overlap.

Roger Clarke posted some notes on the talk. He summarizes:

Big Brother Threat Model	The Domination Systems Threat Model

one threat			many threats
all-encompassing		has edges
personalised			impersonal
abstract			concrete
rare				ubiquitous
fictional			empirical
centralised			networked
20th century			21st century
irredeemable			redeemable
apocalyptic			realistic

(Roger Clarke, http://www.rogerclarke.com/DV/NotesCFP2K.html#Steph, 2000)

In much of the rhetoric about the Wikileaks incident, it seems to me that people are using a naive “Big Brother” model of government: the Government is one thing, and it is irredeemably evil. We can come to a more nuanced understanding of the situation by adopting a Zones of Domination model. There is not one univocal government–there are many interacting entities. None are irredeemable. The enemy is bureaucracy and opacity. The key to achieving just ends is increasing accountability and transparency within and between branches of government.

In the end, what we have is the hardest research problem in Computer-Supported Cooperative Work (CSCW) one could imagine. And the most important. How do we increase transparency within and between branches of government? How do we do that and at the same time keep sensitive information secure? The presence of the Bradley Mannings of the world makes this critical problem orders of magnitude harder.

Categories: CSCW, ethics, privacy

Why Wikileaks is Wrong

December 2, 2010 19 comments

I’m surprised to see entirely reasonable people I know pondering whether the Wikileaks release of diplomatic cables was ethical. Is this like The Pentagon Papers, they ask? I’m puzzled because to me it’s obviously not. And anyone in my undergraduate “Computers, Society, and Professionalism” class can tell you why.

As part of our class discussion of professionalism for software engineers, we review criteria for whistleblowing.  Our textbook, Ethics for the Information Age by Michael Quinn, offers these suggestions and insights. My teaching notes say:

  1. Work within the system first–there’s usually another way.
  2. Misguided protests can be damaging too–make sure you’re sure.
  3. Help people save face.
  4. Think clearly about what really matters and look for compromise.

Quinn quotes Richard De George’s five questions to ask before whistleblowing:

  1. “Do you believe the problem may result in ‘serious and considerable harm to the public’?
  2. Have you told your manager your concerns about the potential harm?
  3. Have you tried every possible channel within the organization to resolve the problem?
  4. Have you documented evidence that would persuade a neutral outsider that your view is correct?
  5. Are you reasonably sure that if you do bring this matter to public attention, something can be done to prevent the anticipated harm?”

(De George, quoted in Quinn fourth edition, p. 429).

De George says that if you answer yes to the first three questions, you may consider whistleblowing. If you answer yes to all five, you may have an ethical obligation to whistleblow. Of course these are written from the point of view of an employee considering reporting wrongdoing in their company to outsiders, but the criteria still hold.

What serious harm to the public is Wikileaks trying to prevent? In what ways have they tried to work within the system first? It all doesn’t add up. On the other hand, the release of The Pentagon Papers quite clearly meets these criteria.

There are all kinds of negative consequences of the release of this information. Ignoring political implications of the specific content, the most serious consequence is a likely decrease in openness and sharing within the US government. People will spend more time being paranoid, waste effort on more elaborate security procedures, and be less able to collaboratively make sense of what is going on in the world and develop a coherent strategy. I believe in the good faith of the US government and the sincere intentions of our civil servants to make the world a better place for all nations. But even assuming you are the deepest cynic who doubts the US’s basic intentions, you can’t seriously believe that an increase in our cluelessness will help, can you? Regardless of your political leanings or nationality, this is a negative outcome for everyone.

p.s. And I really wish they wouldn’t call it “wiki.” This has nothing to do with wikis.

 

Addendum: Wikileaks vs. Bradley Manning

As folks point out in the comments, I think my problem is more with Bradley Manning (the person who released the information) than with Wikileaks.

Categories: ethics