Activity Balance: An Alternative Approach to Manage Kids’ Screen Time

May 11, 2016 3 comments

Our boys (ages 10 and 12) love video games. And following the truism that every generation has media choices that baffle their parents, they also love watching videos of other people playing video games. They would play and watch all day, if we let them. On weekdays, by the time they get home from school and finish their homework, we don’t mind if they spend the free time that remains playing games. On weekends, we have always limited their screen time.

This policy has always chafed. A few months ago, our twelve-year-old protested, exasperated, “Do you have any evidence that too much video games is bad for you?” I patiently explained, “It’s not that video games are bad for you. It’s that we want you to have a balanced life—read a little, get some exercise, play some video games, practice your saxophone. If you did any one of those activities to the exclusion of others, we’d ask you to balance more: ‘Put down that book and go play a video game! You can’t read all day!’”

Five months ago, it occurred to me: Why not make the policy better match the rationale? Instead of limiting our kids’ screen time, we started requiring them to do a variety of activities each weekend day: read, exercise, and practice their musical instrument. As long as those things are done at some point during the day, they can have as much screen time as they like.

So far, the policy is a huge improvement. There is much less grumbling, and better balance in their weekend days. When asked how the policy is going so far, our twelve-year-old explained that he agrees that reading and exercise are important. (He’s less sure about music practice!) He also finds the new policy makes for a more relaxing weekend day. Our ten-year-old comments, “I like it better. The point is so that I do other things with my day, and I think it’s fair.”

The day-to-day implementation is not without challenges. We still need to remind them, “Did you exercise yet today?” And if the reminder comes too late in the day, it’s just not going to happen. If we forget to remind them and monitor, the new system deteriorates into a full day of screen time. But then again, the old system did too (“Did you forget to turn the timer on? How long have you been playing?”).

It’s encouraging to me that our kids have embraced the values that underlie this system: that you must make choices about how you spend your time, that certain activities are important, and that balance matters.

What approach does your family use? Leave me a comment!

Categories: balance, games, kids

On Immoderate Speech

May 1, 2016 6 comments

In my last post, I mentioned GamerGate, and tried to say some balanced things. A few people complained that I needed more evidence for one of my statements (and they’re right—I need to do more research), but most people were incredibly polite in their responses. I really appreciate that.

In the blog comments, a friend from grad school decided I had lost my mind, and let me know. That’s OK—we’ve been friends for over 25 years, and he’s a good guy to argue with over interesting things. I politely told him that I disagree, and that I have data to prove it. He is sticking to his view. I’m fine with that—we’ll agree to disagree.

After that, some folks who care about GamerGate attacked my friend in the blog comments. My friend had been immoderate in his tone. Some of the replies were polite requests for facts. Others were insults with little substance behind them, and the intensity of the comments escalated. It was, uh, interesting to watch….

One of the fundamental disagreements on the Internet today is about the role of immoderate speech. Is it OK to call someone a rude name or use obscene language? Are the rules different if the person is a public figure?

There’s actually, believe it or not, a correct answer to this question: It depends on where you are on the Internet. The Internet is not one place. Social norms are local. What it’s OK to say on 4chan or 8chan is not OK to say on your work mailing list or in comments on a mainstream news site.

Social norms differ even on different parts of the same site. One team of students in my Design of Online Communities class this term studied Shirley Curry’s YouTube Channel. Shirley is a 79-year-old grandmother who plays Skyrim, and posts her unedited gaming streams. My students found that everyone is extremely polite on Shirley’s channel. The social norms are different on her channel than on the channels of anyone else streaming the same game.

None of this is new. I wrote about how social norms differ by site in the 1990s. But one new challenge for social norms of online interaction is Twitter. What neighborhood is Twitter in? It’s in all of them and none of them. What social norms apply? No one knows. And sometimes people who think they are interacting in a Shirley-like world end up in a conversation with people who think they are on 4chan. Oh dear. Neither side leaves that encounter happy. And that’s why a lot of online conflict starts on Twitter, and on other sites that don’t have clear social norms.

Regarding what sort of neighborhood this blog represents: I’ll post (almost) any comment, but I’d appreciate it if folks would keep things more Shirley-like. I don’t mind a bit of immoderate speech now and then. But the problem is that when you crank up the intensity, a significant group of people stop listening. Calm, polite discourse might actually influence people—we all might learn something.

The Rheingold Test

April 29, 2016 48 comments

In 1993, Howard Rheingold explained the new phenomenon of online communities to a skeptical public. To convince people that online communities are really communities, he told powerful stories of members of the California BBS The WELL supporting one another not just with words, but with their time and money. For example, WELL members sent books to a bibliophile who lost his library in a fire, and helped with medical evacuation for a member who became seriously ill while traveling.

I offer this definition:
An online group where members offer one another material support passes “the Rheingold Test.”

I’ve written before that it’s silly to argue about what “is a community.” We have different mental models for “community,” and online interaction can be like and unlike face-to-face groups in nuanced ways. But I will argue that when a group passes The Rheingold Test, something special is happening.

Each spring when I teach CS 6470 Design of Online Communities, I’m surprised by the groups my students discover that pass the Rheingold Test. Years ago, master’s student Vanessa Rood Weatherly observed members of the Mini Cooper brand community sending flowers to a member whose daughter had a miscarriage. It’s not what you’d immediately expect from a group of people brought together by a car brand. In our increasingly secular society, people are looking for a sense of belonging—and finding it in affinity groups.

This term, my students’ research projects found two more sites that pass the test when I wouldn’t have expected it. The first is Vampire Freaks, a site for Goth enthusiasts. In the press, Vampire Freaks is notorious for a few incidents where members have posted about violent acts and then gone ahead and committed them. But those incidents don’t characterize daily activity on the large and active site. Just like the Goth kids at your high school stuck together and would do anything for one another, the members of Vampire Freaks support one another in times of trouble. One member comments:

“I’ve helped quite a few of my friends [on Vampire Freaks] through a lot of hard times… family issues, losing parents, losing children, drug problems even. And just being there as someone that’s supportive, instead of putting them down. Even offering a place for people to come stay if they needed somewhere… I’ve had friends off this website that have actually stayed at my house… because they were traveling and didn’t have money for a hotel. So I’d known them for a few years and figured, it’s a weekend, I’ll be up anyways. Let them stay there and hang out.”

Grad students Drew Carrington, John Dugan, and Lauren Winston were so moved by the support they saw on the site that they called their paper “VampireFreaks: Prepare to be Assimilated into a Loving and Supportive Community.”

The second surprising example from this term is the subreddit Kotaku in Action (KIA), a gathering place for supporters of GamerGate. Although the popular press portrays GamerGate as a movement of misogynist internet trolls, the group is actually a complex combination of members. KIA includes many sincere (and polite) civil libertarians, people tired of excesses of political correctness, and people tired of the deteriorating quality of journalism and angry about the real-world impact of biased reporting. People who identify as GamerGaters also include people who dox those they disagree with (posting personal information online), send anonymous death and rape threats, and worse. (Those things are not allowed on the KIA subreddit, and moderation rules prevent them.) It’s a complicated new kind of social movement with its own internal dynamics. I’ll be writing a lot about them, but for now I just want to note that they have a strong sense of group identity, and help one another in times of need. Posts on KIA show members donating money to one member in financial crisis and to another who needed unexpected major dental work. They also banded together to raise money for a charity that helps male abuse survivors. They are not a viper’s nest (though there are some vipers in the nest). And they care about one another in the classic way.

When a site passes The Rheingold Test, it means there is something interesting happening there—that the whole is more than the sum of its parts. Do you know a site that passes the test? Leave me a comment.

 

Notes/Clarifications:

  • “GamerGate” is a social movement centered around a Twitter hashtag, among other things. GamerGate and the KIA subreddit are not the same thing.
  • Doxxing and threats have definitely occurred, but were sent by anonymous people. Whether or not those were “by people who affiliate with GamerGate” is disputable.
Categories: social computing

Social Media Insults and Donald Trump’s Hair

March 22, 2016 Leave a comment

Mocking someone’s appearance on social media is admitting rhetorical defeat, at least in a non-fashion context. If we’re at a fashion show in Milan, that’s another story. But if we’re talking about any other topic, I propose that if someone says you look like a donkey, you should reply: “I win!” Because you have. Making such an irrelevant remark means, “I have hostile feelings towards you, but I don’t have any substantive arguments.”

I was thinking about this last week when an acquaintance posted on Facebook a picture of Hillary Clinton next to one of Captain Kangaroo (in a similar outfit), with the caption “Who wore it better?” This from a Democrat. One of the most accomplished and experienced women in the world is a candidate for president, and this is what you post?

Which brings me to Donald Trump’s hair. Dear fellow Democrats: Please stop mocking Donald Trump’s hair. It’s not funny, and it’s not enlightening. And every time you do it, you are screaming to the world: “I give up. I have nothing of substance to say. But hey, here’s a hair insult.”

Someone did indeed call me a donkey on Twitter last week. I am honored—I actually earned being trolled! Actually, my great uncle wrote Francis the Talking Mule, so I think the troll is confused—I have more mule in my background than donkey.

I have been researching GamerGate lately, and finding that both sides have valid points to make. And both sides have a mix of nice, principled people and angry people who are spewing bitter nonsense in public. I’ll have a lot more to say about this in the future. But for now, my advice for both sides is to calm TF down, and keep your sense of humor. And remember that if someone throws an irrelevant insult at you (like your outfit looks like Captain Kangaroo), then just laugh and say, “I win!”

Categories: Uncategorized

Everyday Racism and Social Networks

March 9, 2016 2 comments

Everyone is a little bit racist, a little bit sexist. Mahzarin Banaji can prove it. When she asks people, “Do these two words go together?”, most people will click “yes” slightly quicker when shown “man” and “scientist” than “woman” and “scientist.” Even women scientists. You can do the same experiment for racism. It’s not that a few evil people are sexist or racist—we all are, to some degree.
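The experiment behind this finding compares response times across word pairings: a consistent slowdown on counter-stereotypical pairs is taken as evidence of implicit association. Here is a minimal sketch of that scoring logic in Python. The function name and the reaction times are invented for illustration, not taken from Banaji’s actual test materials:

```python
# Toy sketch of implicit-association scoring: compare mean response times
# (in milliseconds) for stereotype-congruent pairings (e.g., man+scientist)
# against incongruent ones (e.g., woman+scientist). All numbers are made up.

from statistics import mean

def pairing_bias_ms(congruent_rts, incongruent_rts):
    """Return the mean slowdown on incongruent trials.

    A positive value means responses to incongruent pairings were slower,
    which the test interprets as an implicit association in the
    stereotype-congruent direction.
    """
    return mean(incongruent_rts) - mean(congruent_rts)

# Hypothetical per-trial reaction times in milliseconds:
congruent = [612, 598, 640, 605, 621]      # e.g., "man" + "scientist"
incongruent = [655, 668, 649, 671, 660]    # e.g., "woman" + "scientist"

bias = pairing_bias_ms(congruent, incongruent)
print(f"Mean slowdown on incongruent trials: {bias:.1f} ms")
```

The real test averages over many trials per participant and applies statistical corrections, but the core comparison is this simple difference of means.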

Despite my awareness that everyone is a little bit racist, I am still astonished by the regular demonstration of that racism on the website Nextdoor.com. Nextdoor is a discussion site for people in a local neighborhood. Members share recommendations for plumbers, discuss traffic problems, and offer items for sale. It’s a nice site. A perennial topic of discussion is security. There have been a series of burglaries in my neighborhood recently, so residents are on the alert for “suspicious” people. And evidently any African American in our neighborhood may be “suspicious,” according to my neighbors. Here’s yesterday’s example:

This morning was dog was ill. so I took her outside around 5:15 AM. I saw a car driving slowly … and stopping. The car stopped twice, a tall African American man wearing a dark sweatshirt dark pants and got out, kept his head lights on and walked up towards a house with his cell phone out. Then, walked back down to his car, got in, and continued driving slowly down the street. He kept his headlights on the entire time, even when the car was parked in the street. The car looked to be a beige/gold Mitsubishi. I was half asleep when I saw all of this and realized later I should have called 9-1-1. Just wanted folks to be aware.

Does that sound suspicious to you? Fortunately, another Nextdoor member pointed out:

Pretty sure he delivers the paper- i see him out several times a week- better safe than sorry though

I would laugh if I didn’t feel like crying. Because this happens all the time. Would people have worried that a man delivering papers was suspicious if he were white? I can’t prove that race was a factor here. But most of these incidents are about people of color. And it keeps happening.

In an incident last year, a mother posted an urgent alert that there was an attempted abduction of her seven-year-old daughter, who had been out walking the dog. There was a white van, following a white pickup truck. Right as her daughter was walking by, an African American man opened the door of the van and came towards her! Her daughter ran all the way home! The urgent alert received dozens of concerned replies. The police were called. And later that morning, I saw construction workers at a site three blocks away, with their white van and white pickup truck parked on the street.

It might help if people were simply more aware of this as a problem, but alerting people is hard. A couple weeks ago, a Nextdoor member in our neighborhood tried to draw attention to the problem of racism on the site, and got attacked by other participants. I waded in merely to say I thought she might have a point, and I got attacked. The moderator shut down the discussion thread citing “policy violations on both sides.” So much for civil discourse.

The problem is not unique to Nextdoor—it’s just particularly easy to observe there. The site Hollaback takes an unusual approach to this problem: it discourages mentions of race. The purpose of Hollaback is to support discussion of street harassment. If someone catcalls you on the street or gropes you on the subway, you can go to Hollaback to share your experience—to express your feelings, get support, and alert the community. But the site discourages posters from mentioning the race of their harasser:

Due in part to prevalent stereotypes of men of color as sexual predators or predisposed to violence, HollaBackNYC asks that contributors do not discuss the race of harassers or include other racialized commentary

The more I see the everyday racism of my neighbors on Nextdoor, the more I see the reasons for this policy. But it still feels like an extreme solution. (Someone groped me, and I can’t say what they looked like? I can hear the cries of political correctness gone mad.)

There really are (occasional) burglars in our neighborhood, and Nextdoor serves an important function by helping people alert one another. But is it possible to be a black man in our neighborhood and not be reported as suspicious?

The long-term solution is for all of us to work to be less racist, and to confront the tacit stereotypes we all hold. In the short term, how do we stop social networks from making the problem worse? Leave me a comment.

Categories: Uncategorized

More on TOS: Maybe Documenting Intent Is Not So Smart

February 29, 2016 Leave a comment

In my last post, I wrote that “Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law.” My friend Mako Hill (University of Washington) pointed out to me (quite sensibly) that this would get people in more trouble: it asks people to document their intent to break the TOS. He’s right. If we believe that under some circumstances breaking TOS is ethical, then requiring researchers to document their intent is not strategic.

This leaves us in an untenable situation. We can’t engage in a formal review of whether breaking TOS is justified, because that would make potential legal problems worse. Of course we can encourage individuals to be mindful and not break TOS without good reason. But is that good enough?

Sometimes TOS prohibit research for good reason. For example, YikYak is trying to abide by users’ expectations of ephemerality and privacy. People participate in online activity with a reasonable expectation that the TOS are rules that will be followed, and they rely on that in deciding what to share. Is it fair to me if my content suddenly shows up in your research study, when that’s clearly prohibited by the TOS? Do we really trust individual researchers to decide when breaking TOS is justified, with no outside review? When I have a tricky research protocol, I want review. Just letting each researcher decide for themselves makes no sense. The situation is a mess.

Legal change is the real remedy here: passing Aaron’s Law, and possibly an exemption from TOS for researchers (in cases where user rights are scrupulously protected).

Do you have a better solution?  I hope so. Leave me a comment!

Do Researchers Need to Abide by Terms of Service (TOS)? An Answer.

February 26, 2016 8 comments

The TL;DR: Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law. No one should deliberately break the law without being aware of potential consequences.

Do social computing researchers need to follow Terms of Service (TOS)? Confusion on this issue has created an ethical mess. Some researchers choose not to do a particular piece of work because they believe they can’t violate TOS, and then another researcher goes and does that same study and gets it published with no objections from reviewers. Does this make sense?

It happened to me recently. The social app YikYak is based in Atlanta, so I’m pretty sure my students and I started exploring it before anyone else in the field. But we quickly realized that its Terms of Service prohibit scraping the site, so we didn’t do it. We talked with YikYak and began negotiating for permission to scrape, and we might’ve gotten it if we had been more persistent. But I felt like I was bothering extremely busy startup employees with something not on their critical path, so we quietly dropped our inquiries. Two years later, someone from another university published a paper about YikYak much like the one we wanted to write, using scraped data. This kind of thing happens all the time, and it isn’t fair.

There are sometimes good reasons why companies prohibit scraping. For example, YikYak users have a reasonable expectation that their postings are ephemeral. Having them show up in a research paper is not what they expect. Sometimes a company puts up research prohibitions because they’re trying to protect their users. Can a University IRB ever allow research that is prohibited in a site’s TOS?

Asking permission of a company to collect data is sometimes successful.  Some companies have shared huge datasets with researchers, and learned great things from the results. It can be a win-win situation. If it’s possible to request and obtain permission, that’s a great option—if the corporation doesn’t request control over what is said about their site in return. The tricky question is whether researchers will be less than honest in publishing findings in these situations, because they fear not getting more data access.

Members of the research community right now are confused about whether they need to abide by TOS. Reviewers are confused about whether this issue is in their purview—should they consider whether a paper abides by TOS in the review process? Beyond these confusions lurks an arguably more important issue: What happens when a risk-averse company prohibits reasonable research? This is of critical importance because scholars cannot cede control of how we understand our world to corporate interests.

Corporations are highly motivated to control what is said about them in public. When I was co-chair of CSCW in 2013, USB sticks of the proceedings were in production when I received an email from BigCompany asking that a publication by one of their summer interns be removed. I replied, “It’s been accepted for publication, and we’re already producing them.” They said, “We’ll pay to have them remade.” I said, “I need to get this request from the author of the paper, not company staff.” They agreed, and a few hours later a very sheepish former summer intern at BigCompany emailed me requesting that his paper be withdrawn. He had submitted it without internal approval. ACM was eager to make BigCompany happy, so the paper was removed. BigCompany says it was withdrawn because it didn’t meet their standards for quality research—and that’s probably true. But it’s also true that the paper was critical of BigCompany, and they wanted to control the messaging about them. Companies do have a right to control publication by their employees, and the intern was out of line. But what are the broader implications if companies also prohibit outside analysis?

If online sites both control what their employees say and prohibit independent analysis, how do we study their impact on the world? Can anyone criticize them? Can anyone help them to see how their design choices reshape our culture, and reshape people’s lives?

From this perspective, one might argue that it can be ethical to violate Terms of Service. But it’s still not legal. TOS can be legally binding, and simply clicking through them is typically interpreted as assent. They are rarely enforced, but when they are, the consequences can be devastating, as the sad death of Internet pioneer Aaron Swartz shows. Swartz was accused of little more than violating TOS, and charged with multiple felonies under the Computer Fraud and Abuse Act (CFAA). The stress of his prosecution led to his suicide.

For another example, my colleague Brian Larson pointed me to the case Palmer v. KlearGear. John Palmer and Jennifer Kulas left a negative review of KlearGear on the site Ripoff Report after the item they ordered was never sent. KlearGear tried to enforce an anti-disparagement clause in its TOS and demanded the review be taken down. Since you can’t delete reviews on Ripoff Report without paying a $2,000 fee, they declined, which set in motion a multi-year legal battle. Pending legislation in California may make such clauses illegal. However, the consequences for individuals of violating terms—even absurd terms—remain potentially high.

TOS are contracts. Larry Lessig points out that “Contracts are important. Their breach must be remedied. But American law does not typically make the breach of a contract a felony.” A proposed legal change, “Aaron’s Law,” would limit the scope of the CFAA so that breaches of contract are treated as breaches of contract rather than as felonies. Breaches of contract often carry limited penalties if no real harm is done. Researchers should keep an eye on this issue—unless Aaron’s Law passes, ridiculous penalties are still the law.

We’re in a quandary. We have compelling reasons why violating TOS is sometimes ethical, but it’s still not legal. So what are we as a field supposed to do? Here’s my answer:

If an individual chooses to take on the legal risk of violating TOS, they need to justify why. This is not something you can do lightly. Work published using data obtained by violating TOS must clearly acknowledge and justify that choice. Work that breaks TOS with no justification should be rejected by reviewers. Reviewers should proactively review a site’s TOS if it’s not explicitly discussed in the paper.

However, you should think carefully about doing work that violates TOS collaboratively with subordinates. Can you as faculty take this risk yourself? Sure. But faculty are in a position of power over students, who may have difficulty saying no. Senior faculty also have power over junior faculty. If a group of researchers of differing seniority wants to do this kind of work, the more senior members need to make sure there is no implicit coercion to participate caused by the unavoidable power relations within the research team.

I believe we need legal change to remedy this state of affairs. Until that happens, I would encourage friends to be cautious. Someone is going to get in hot water for this—don’t let it be you.

Do I like the fact that corporations have such control over what is said about them? I do not—not at all. But legal consequences are real, and people should take on that risk only when they have good reason and know what they are doing, without any implicit coercion.

In summary, I am proposing:

Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law. Reviewers should proactively check a site’s TOS if it is not discussed in the paper.

If a team of researchers who choose to violate TOS are of different academic ranks (i.e. tenured, pre-tenure, students), then the more senior authors should seriously consider whether more junior participants are truly consenting and not implicitly pressured.

Professional societies like ACM and IEEE should advocate for legal reform of the Computer Fraud and Abuse Act (CFAA), such as the proposed Aaron’s Law.

Do you agree? Leave me a comment!

Thanks to Casey Fiesler and Brian Larson for helpful comments on this post.
