Social Media Insults and Donald Trump’s Hair

March 22, 2016 Leave a comment

Mocking someone’s appearance on social media is admitting rhetorical defeat, at least outside a fashion context. If we’re at a fashion show in Milan, that’s another story. But if we’re talking about any other topic, I propose that if someone says you look like a donkey, then you should reply: “I win!” Because you have. Because making such an irrelevant remark means, “I have hostile feelings towards you, but I don’t have any substantive arguments.”

I was thinking about this last week when an acquaintance posted on Facebook a picture of Hillary Clinton next to one of Captain Kangaroo (in a similar outfit), with the caption “Who wore it better?” This from a Democrat. One of the most accomplished and experienced women in the world is a candidate for president, and this is what you post?

Which brings me to Donald Trump’s hair. Dear fellow Democrats: Please stop mocking Donald Trump’s hair. It’s not funny, and it’s not enlightening. And every time you do it, you are screaming to the world: “I give up. I have nothing of substance to say. But hey, here’s a hair insult.”

Someone did indeed call me a donkey on Twitter last week. I am honored—I actually earned being trolled! Actually, my great uncle wrote Francis the Talking Mule, so I think the troll is confused—I have more mule in my background than donkey.

I have been researching GamerGate lately, and finding that both sides have valid points to make. And both sides have a mix of nice, principled people and angry people who are spewing bitter nonsense in public. I’ll have a lot more to say about this in the future. But for now, my advice for both sides is to calm TF down, and keep your sense of humor. And remember that if someone throws an irrelevant insult at you (like your outfit looks like Captain Kangaroo), then just laugh and say, “I win!”

Categories: Uncategorized

Everyday Racism and Social Networks

March 9, 2016 2 comments

Everyone is a little bit racist, a little bit sexist. Mahzarin Banaji can prove it. When she asks people, “Do these two words go together?”, most people will click “yes” slightly quicker if shown “man” and “scientist” than “woman” and “scientist.” Even women scientists. You can run the same experiment for racism. It’s not that a few evil people are sexist or racist; we all are, to some degree.
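The arithmetic behind such a reaction-time test is simple enough to sketch in a few lines. The numbers below are invented purely for illustration; this is not Banaji’s actual protocol or scoring method:

```python
# Toy illustration of an implicit-association measure: bias is inferred
# from how much faster people confirm stereotype-congruent pairings
# ("man" + "scientist") than incongruent ones ("woman" + "scientist").
# Reaction times are in milliseconds, and the data here are made up.

from statistics import mean

congruent_rts = [612, 580, 645, 598, 630]    # "man" + "scientist" trials
incongruent_rts = [688, 702, 655, 671, 699]  # "woman" + "scientist" trials

bias_ms = mean(incongruent_rts) - mean(congruent_rts)
print(f"Average slowdown on incongruent pairings: {bias_ms:.0f} ms")
# → Average slowdown on incongruent pairings: 70 ms
```

A consistent slowdown on the incongruent pairings, across many trials and many participants, is what gets read as evidence of implicit bias.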

Despite my awareness that everyone is a little bit racist, I am still astonished by the regular demonstration of that racism on the website Nextdoor.com. Nextdoor is a discussion site for people in a local neighborhood. Members share recommendations for plumbers, discuss traffic problems, and offer items for sale. It’s a nice site. Security is always a key topic of discussion. There have been a series of burglaries in my neighborhood recently, so residents are on the alert for “suspicious” people. And evidently any African American in our neighborhood may be “suspicious,” according to my neighbors. Here’s yesterday’s example:

This morning was dog was ill. so I took her outside around 5:15 AM. I saw a car driving slowly … and stopping. The car stopped twice, a tall African American man wearing a dark sweatshirt dark pants and got out, kept his head lights on and walked up towards a house with his cell phone out. Then, walked back down to his car, got in, and continued driving slowly down the street. He kept his headlights on the entire time, even when the car was parked in the street. The car looked to be a beige/gold Mitsubishi. I was half asleep when I saw all of this and realized later I should have called 9-1-1. Just wanted folks to be aware.

Does that sound suspicious to you? Fortunately, another Nextdoor member pointed out:

Pretty sure he delivers the paper- i see him out several times a week- better safe than sorry though

I would laugh if I didn’t feel like crying. Because this happens all the time. Would people have worried that a man delivering papers was suspicious if he were white? I can’t prove that race was a factor here. But most of these incidents are about people of color. And it keeps happening.

In an incident last year, a mother posted an urgent alert that there was an attempted abduction of her seven-year-old daughter, who had been out walking the dog. There was a white van, following a white pickup truck. Right as her daughter was walking by, an African American man opened the door of the van and came towards her! Her daughter ran all the way home! The urgent alert received dozens of concerned replies. The police were called. And later that morning, I saw construction workers at a site three blocks away, with their white van and white pickup truck parked on the street.

It might help if people were simply more aware of this as a problem, but alerting people is hard. A couple of weeks ago, a Nextdoor member in our neighborhood tried to draw attention to the problem of racism on the site, and got attacked by other participants. I waded in merely to say that I thought she might have a point, and I got attacked. The moderator shut down the discussion thread, citing “policy violations on both sides.” So much for civil discourse.

The problem is not unique to Nextdoor; it’s just particularly easy to observe there. The site Hollaback takes an unusual approach to this problem: it discourages mentions of race. The purpose of Hollaback is to support discussion of street harassment. If someone catcalls you on the street or gropes you on the subway, you can go to Hollaback to share your experience: to express your feelings, get support, and alert the community. But they discourage posters from mentioning the race of their harasser:

Due in part to prevalent stereotypes of men of color as sexual predators or predisposed to violence, HollaBackNYC asks that contributors do not discuss the race of harassers or include other racialized commentary

The more I see the everyday racism of my neighbors on Nextdoor, the more I see the reasons for this policy. But it still feels like an extreme solution. (Someone groped me, and I can’t say what they looked like? I can hear the cries of political correctness gone mad.)

There really are (occasional) burglars in our neighborhood, and Nextdoor serves an important function by helping people alert one another. But is it possible to be a black man in our neighborhood and not be reported as suspicious?

The long-term solution is for all of us to work to be less racist, confronting the tacit stereotypes we all hold. In the short term, how do we stop social networks from making the problem worse? Leave me a comment.

Categories: Uncategorized

More on TOS: Maybe Documenting Intent Is Not So Smart

February 29, 2016 1 comment

In my last post, I wrote that “Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law.” My friend Mako Hill (University of Washington) pointed out to me (quite sensibly) that this would get people in more trouble: it asks people to document their intent to break the TOS. He’s right. If we believe that under some circumstances breaking TOS is ethical, then requiring researchers to document their intent is not strategic.

This leaves us in an untenable situation. We can’t engage in a formal review of whether a particular TOS violation is justified, because that would make potential legal problems worse. Of course we can encourage individuals to be mindful and not break TOS without good reason. But is that good enough?

Sometimes TOS prohibit research for good reason. For example, YikYak is trying to abide by users’ expectations of ephemerality and privacy. People participate in online activity with a reasonable expectation that the TOS are rules that will be followed, and they rely on that in deciding what they choose to share. Is it fair to me if my content suddenly shows up in your research study, when that’s clearly prohibited by the TOS? Do we really trust individual researchers to decide when breaking TOS is justified, with no outside review? When I have a tricky research protocol, I want review. Just letting each researcher decide for themselves makes no sense. The situation is a mess.

Legal change is the real remedy here: passing Aaron’s Law, and possibly adding an exemption from TOS for researchers (in cases where user rights are scrupulously protected).

Do you have a better solution?  I hope so. Leave me a comment!

Do Researchers Need to Abide by Terms of Service (TOS)? An Answer.

February 26, 2016 8 comments

The TL;DR: Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law. No one should deliberately break the law without being aware of potential consequences.

Do social computing researchers need to follow Terms of Service (TOS)? Confusion on this issue has created an ethical mess. Some researchers choose not to do a particular piece of work because they believe they can’t violate TOS, and then another researcher goes and does that same study and gets it published with no objections from reviewers. Does this make sense?

It happened to me recently. The social app YikYak is based in Atlanta, so I’m pretty sure my students and I started exploring it before anyone else in the field. But we quickly realized that the Terms of Service prohibit scraping the site, so we didn’t do it. We talked with YikYak and started negotiations about getting permission to scrape, and we might’ve gotten permission if we had been more persistent. But I felt like I was bothering extremely busy startup employees with something not on their critical path. So we quietly dropped our inquiries. Two years later, someone from another university published a paper about YikYak much like the one we wanted to write, using scraped data. This kind of thing happens all the time, and it isn’t fair.

There are sometimes good reasons why companies prohibit scraping. For example, YikYak users have a reasonable expectation that their postings are ephemeral. Having them show up in a research paper is not what they expect. Sometimes a company puts up research prohibitions because they’re trying to protect their users. Can a University IRB ever allow research that is prohibited in a site’s TOS?

Asking a company for permission to collect data is sometimes successful. Some companies have shared huge datasets with researchers, and learned great things from the results. It can be a win-win situation. If it’s possible to request and obtain permission, that’s a great option, provided the corporation doesn’t request control over what is said about its site in return. The tricky question is whether researchers will be less than honest in publishing findings in these situations, because they fear losing future data access.

Members of the research community right now are confused about whether they need to abide by TOS. Reviewers are confused about whether this issue is in their purview—should they consider whether a paper abides by TOS in the review process? Beyond these confusions lurks an arguably more important issue: What happens when a risk-averse company prohibits reasonable research? This is of critical importance because scholars cannot cede control of how we understand our world to corporate interests.

Corporations are highly motivated to control what is said about them in public. When I was co-chair of CSCW in 2013, USB sticks of the proceedings were in production when I received an email from BigCompany asking that a publication by one of their summer interns be removed. I replied, “It’s been accepted for publication, and we’re already producing them.” They said, “We’ll pay to have them remade.” I said, “I need to get this request from the author of the paper, not company staff.” They agreed, and a few hours later a very sheepish former summer intern at BigCompany emailed me requesting that his paper be withdrawn. He had submitted it without internal approval. ACM was eager to make BigCompany happy, so it was removed. BigCompany says it was withdrawn because it didn’t meet their standards for quality research—and that’s probably true. But it’s also true that the paper was critical of BigCompany, and they want to control messaging about them. Companies do have a right to control publication by their employees, and the intern was out of line. Of course companies want to control what their employees say about them in public—that makes sense. But what are the broader implications if they also prohibit outside analysis?

If online sites both control what their employees say and prohibit independent analysis, how do we study their impact on the world? Can anyone criticize them? Can anyone help them to see how their design choices reshape our culture, and reshape people’s lives?

From this perspective, one might argue that it can be ethical to violate Terms of Service. But it’s still not legal. TOS can be legally binding, and simply clicking through them is typically interpreted as assent. They are rarely enforced, but when they are, the consequences can be devastating, as the sad death of Internet pioneer Aaron Swartz shows. Swartz was accused of little more than violating TOS, and charged with multiple felonies under the Computer Fraud and Abuse Act (CFAA). The stress of his prosecution led to his suicide.

For another example, my colleague Brian Larson pointed me to the case Palmer v. KlearGear. John Palmer and Jennifer Kulas left a negative review of KlearGear on the site Ripoff Report after the item they ordered was never sent. KlearGear tried to enforce an anti-disparagement clause in its TOS and demanded the review be taken down. Since you can’t delete reviews on Ripoff Report without paying a $2,000 fee, they declined, which set in motion a multi-year legal battle. Pending legislation in California may make such clauses illegal. However, the consequences for individuals of violating terms, even absurd terms, remain potentially high.

TOS are contracts. Larry Lessig points out that “Contracts are important. Their breach must be remedied. But American law does not typically make the breach of a contract a felony.” The proposed legal change, “Aaron’s Law,” would limit the scope of the CFAA so that breaches of contract are treated as breaches of contract rather than felonies. Breaches of contract often carry limited penalties if there is no real harm done. Researchers should keep an eye on this issue; unless Aaron’s Law passes, ridiculous penalties are still the law.

We’re in a quandary. We have compelling reasons why violating TOS is sometimes ethical, but it’s still not legal. So what are we as a field supposed to do? Here’s my answer:

If an individual chooses to take on the legal risk of violating TOS, they need to justify why. This is not something you can do lightly. In publishing work that comes from data obtained by violating TOS, the violation must be clearly acknowledged and justified. Work that breaks TOS with no justification should be rejected by reviewers. Reviewers should proactively review a site’s TOS if it’s not explicitly discussed in the paper.

However, you should think carefully about doing work that violates TOS collaboratively with subordinates. Can you as faculty  take this risk yourself? Sure. But faculty are in a position of power over students, who may have difficulty saying no. Senior faculty also have power over junior faculty. If a group of researchers of differing seniority wants to do this kind of work, the more senior members need to be careful that there is no implicit coercion to participate caused by the unavoidable power relations among members of the research team.

I believe we need legal change to remedy this state of affairs. Until that happens, I would encourage friends to be cautious. Someone is going to get in hot water for this—don’t let it be you.

Do I like the fact that corporations have such control over what is said about them? I do not—not at all. But legal consequences are real, and people should take on that risk only when they have good reason and know what they are doing, without any implicit coercion.

In summary, I am proposing:

Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law. Reviewers should proactively check a site’s TOS if it is not discussed in the paper.

If a team of researchers who choose to violate TOS spans different academic ranks (i.e., tenured, pre-tenure, students), then the more senior authors should seriously consider whether the more junior participants are truly consenting and not implicitly pressured.

Professional societies like ACM and IEEE should advocate for legal reform of the Computer Fraud and Abuse Act (CFAA), such as the proposed Aaron’s Law.

Do you agree? Leave me a comment!

Thanks to Casey Fiesler and Brian Larson for helpful comments on this post.

It’s Time to Cancel College Football

November 9, 2015 10 comments

I love football. My husband Pete played right guard and defensive end in high school, and a picture of him in his uniform (#67) is on my dresser. Pete and I were season ticket holders for the Atlanta Falcons for more than a decade. Before that, we were season ticket holders for the Georgia Tech Yellow Jackets. I was commissioner of a fantasy football league for more than a decade. I love football, but I have mostly given it up, because it is hurting our athletes.

The evidence is unmistakable: football players can develop degenerative brain disease decades after they stop playing, even if they never had a concussion. The NFL agrees that a third of former players will develop serious brain injuries. Of 79 brains of former players studied by the nation’s largest brain bank, 76 showed signs of disease. Suicides among former players suffering from brain injuries are rising.

When our oldest son was born, Pete and I had several conversations about whether we would let him play football. Pete showed me his slightly crooked left pinky finger, noted that a jammed finger was his only real injury, and argued that the sport is perfectly safe at the high-school level. That was ten years ago. These days, with every new news report about chronic traumatic encephalopathy (CTE), he cowers. He is worried that all those hits as a lineman will catch up with him, and he only played through high school. Not the big hits, but the accumulation of small hits. When our now 11-year-old begged to join a rugby team this fall, it was Pete who said no.

Every time this topic comes up, the phrase “an inconvenient truth” comes to mind. The facts are extremely inconvenient. But the evidence is so clear at this point that it seems irresponsible to continue with the status quo. At minimum, it’s time for major rule changes. But I’m skeptical that rule changes can fix the problem. I’m wondering if it’s time for us to cancel football. Especially college football. As a university, our mission is to nurture our students—to help prepare them for productive and healthy lives as members of society. All of our students—including members of our football team. Can we really say right now that we are putting their best interests first?

Pro football players are adults, and they make their own choices. But in college football, the students have been put in our care. Our responsibilities as a university are different.

Yes, I know what I’m suggesting would cause a firestorm of unprecedented proportions. Yes, I know the alumni will riot. But should we refuse to do the right thing because it’s inconvenient or unpopular?

I admire Chris Borland, who left the NFL after one year out of concern for his health. I admire my colleague Janet Murray, who turned down an invitation to be guest coach for our football team because she feels the damage the sport is doing to our students’ long-term health is unjustifiable. She explained this in a letter to our football coach. More people need to stand up. It’s time for things to change.

It seems likely that over the next several years a series of high-profile lawsuits will lead to multimillion-dollar judgments for former players, both college and pro. I wouldn’t buy stock in a company that insures NFL teams. As a state school, Georgia Tech typically doesn’t buy insurance; we self-insure, or rely on our sovereign immunity from lawsuits. I don’t understand the legal nuances here, but I wonder what’s going to happen. If state sovereign immunity holds up in court, will our former players get no compensation? If it doesn’t hold up, will we have fewer science labs and student lounges because all our money is going to cover football liability? Neither option is appealing.

After the first round of lawsuits, no doubt the rules of football will be changed to make it safer. I’ll speculate that a few years later, there will be more lawsuits saying, “Sorry, we’re still getting hurt.” And then the rules will change more. And onwards, until eventually the game will be unrecognizable from what it is today. But do we really need to let this whole process take decades? Given that the end seems inevitable, can we speed things up a bit by doing the right thing now?

Georgia Tech’s mission statement says, “We will be leaders in improving the human condition in Georgia, the United States, and around the globe.” I hope we have the courage to lead on this issue. It would certainly make a statement if we said, “We are cancelling football, because it’s not safe.” We can have our homecoming celebration at a basketball or baseball game. They are also fine traditions.

We gave up our NFL season tickets this year, and I don’t play fantasy football any more. I do sometimes still watch Falcons and Yellow Jackets games on television, but feel guilty even about that.

To everyone reading this, especially my fellow faculty members at Georgia Tech and other schools: If you agree, I hope you’ll say something. Publicly. We need all of us to speak up if change is to happen sooner rather than later. Before another generation of players suffers the consequences.

Categories: academia, sports

Teaching about Privacy and Surveillance: Real Life is not an Episode of ‘24’

November 5, 2015 Leave a comment

Privacy is an increasingly important social implication of technology, and we spend quite a bit of time on it in our required undergraduate ethics and social implications of technology class, CS 4001. Since we’re talking about privacy, it makes sense to talk about surveillance. Since 2004, I’ve taught a class about the USA PATRIOT Act, and more recently I’ve added a class on information revealed by Edward Snowden. I spend more time preparing for those classes than for any other two or three put together: it’s confusing and complicated. There are provisions of the Patriot Act that are absolutely essential, like broadening the jurisdiction of warrants to tap phones to the entire country (rather than making you get a warrant in each state). And others that are egregious violations of our liberty, like the Section 215 provision that lets the government get the records of any organization without a warrant or probable cause and bars the organization from acknowledging the search. The FBI can simply demand the membership list of a mosque, and they have done so.

For the last two years, I’ve assigned my students to watch the PBS Frontline documentary United States of Secrets, about US warrantless surveillance (“The Program”) and information leaked by Edward Snowden. In our class discussion, we don’t focus on Snowden, but on other people, like NSA analyst Thomas Drake, and the tough decisions they had to make. After class on Tuesday, where I carefully spell out what’s allowed under the Patriot Act and the Foreign Intelligence Surveillance Act (FISA), I feel like a bit of a fool on Thursday when we discuss The Program and the fact that all those rules aren’t really followed anyway.

I do my best not to express any opinions to my class—I present the facts, and ask them what they think. And as much as possible, I emphasize tradeoffs and try to show the issues as complicated. And then I walk back from class and scratch my head—what do I actually think?

After class last week, two things became clearer in my mind. The first is about checks and balances. My children are learning about checks and balances in elementary school social studies class. Checks and balances are fundamental to how our government works. And it suddenly became evident to me that most cases of the system going too far are situations where checks and balances are absent. You don’t need a court order to get records with a National Security Letter (NSL). Why not? A secret court like the FISA court could do the job. And if it’s urgent, the review could take place within a reasonable time after the fact (as FISA mandates for surveillance). It’s too much to ask any one branch of government to police itself. The FBI needs to pursue things as aggressively as they dare, and the judiciary needs to say, “You may go this far and no farther.” Parts of the Patriot Act removed checks and balances, and procedures without checks and balances are where we get into trouble. We all learned everything we need to know in elementary school; somehow, we’ve forgotten it.

The second thought is about means and ends. It is possible for me to describe a fictional situation in which reasonable people would agree that the ends justify evil means, like recording everyone all the time, or torturing someone for information. If you don’t agree with that statement, make the situation more extreme until you do. But in real life, the evidence for the need is almost never that compelling. If you demand an iron-clad case, you’ll (almost) never say the ends justify evil means in real situations. Real life is not an episode of ’24’.

Should We Pay Less Social Media Attention to Violence? Lessons from WWII and Fu-Go

September 10, 2015 Leave a comment

On September 11th, 2001, I turned the television off. I knew what was happening was historic. But I knew my family in New York City was fine. Initial details were sketchy, and lots of misinformation was being reported. I figured, why listen to the blow-by-blow? I’ll get the real story later, right? Plus I didn’t see any point in wallowing in tragedy. So I did the only sensible thing I could think of: I got back to work. I had a paper deadline for the CHI conference coming up.

If part of the point of terrorism is to draw the public’s attention, what if we all simply refused to listen? Or if the media refused to publish the story? Of course that’s a silly suggestion. People want to know. In fact, some evolutionary biologists argue that we are hard-wired to be fascinated by danger—our fascination keeps us safe. Is turning news about violent attacks off either possible or desirable?

I was struck again by this idea when I listened to the Radiolab podcast on the Fu-Go project. During World War II, Japan sent thousands of balloons carrying bombs to the US. The intent was to terrorize the American public. To prevent that outcome, the US government suppressed stories about the bombs. No one was told about them, and the public wasn’t terrorized at all. And that mostly worked out perfectly, with one exception. A group of children out on a church picnic found one, and all gathered around to see what it was, with disastrous consequences. All the other bombs exploded harmlessly. (Though there’s a chance that some are still out there, in remote places in the Pacific Northwest. One was found in 2014.)

The tradeoffs here are headache-inducing. By suppressing freedom of speech and freedom of the press, the US government prevented national panic. And caused the deaths of six people.

I would never condone government suppression of free speech. And if bombs were floating around my area, I’d definitely want to know. But I do wonder if sharing stories about such acts is causing part of the problem. What if we all stopped forwarding the link about that crazy shooter? Would that make the next person less likely to shoot people for attention? What if we all just turned this kind of news off?

I don’t think it’s either possible or desirable to do that on a large scale. But maybe, just as an experiment, we could all try not posting/forwarding/retweeting stories that draw attention to someone who did something heinous in order to get attention.

Kudos to Radiolab for a thought-provoking (though depressing) podcast.

Categories: social computing