It’s about Underpayment in (Game) Journalism

June 4, 2016 12 comments

 

A Twitter acquaintance shared this video with me last night: Buzzfeed’s Color Cabal Conspiracy – Harmful News. In it, the narrator critiques a Buzzfeed article in which a naïve writer takes the words of trolls as truth, and Buzzfeed publishes it (with a footnote added later saying it might not be true).

I’ve been studying members of the #GamerGate movement, and I’ve seen some awful stuff posted online: misogyny, rape threats, racism, and more. But at the same time, I also see that a subset of GamerGate supporters are reasonable people, and the movement has some valid points. One point is that journalism is in crisis.

The tagline for GamerGate is “It’s about ethics in game journalism.” I object to the use of the word “ethics.” Using that word implies that people are deliberately writing incorrect things. I think that’s giving the writers too much credit, assuming they know the truth and are deliberately subverting it. I’m sure there are cases where that is true, but I will argue that in the overwhelming majority of cases, Hanlon’s Razor comes into play: never attribute to malice what can be explained by simple incompetence.

Changing business models have created problems for the current state of journalism—all the incentives are out of whack. If a freelance writer is paid a couple hundred dollars for a story, how much time can they afford to spend on it? I used to write short articles for Wired when I was a graduate student, and made enough for a bit of extra spending money—like going out to dinner or on a weekend trip. But for an adult with rent to pay, that kind of money doesn’t come close to covering the bills. And pay rates have dropped dramatically over the last several years.

If you pay people pocket money, you get amateurs. Even worse, if you pay per click, you get writers pandering to prurient interests. Jack Murtha writes in the Columbia Journalism Review:

[Pay-per-click] was once the crown jewel of content-heavy startups like Gawker, where young writers typed dozens of articles each week, aggregating and snarking their way to a digital-media empire. Now it’s something of a financial loophole used by content mills that prey on desperate young journalists, who scrape together clickbait in exchange for pennies.

Contrast the situation of a pay-per-click writer with that of a salaried journalist. The person on salary is rewarded for careful work, and is assigned to cover topics based on their importance, rather than self-selecting what they think will earn clicks. In his foundational work on the nature of peer production, Harvard law professor Yochai Benkler notes that a strength of peer production is that individuals self-identify for tasks they are qualified for. That works pretty well for things like open-source software and Wikipedia. And it even works pretty well for unpaid writing—expert bloggers often self-identify to write pieces on topics they care about and are knowledgeable about. But it breaks down in journalism when the peer-production economy overlaps with the micropayment economy, and we get, as Murtha notes, clickbait in exchange for pennies.

Instead of saying “It’s about ethics in game journalism,” I suggest that GamerGate folks say, “It’s about underpayment in game journalism.” And we might as well remove the word “game”: It’s about underpayment in journalism. I will argue that the gaming press is a bellwether for the rest of the industry. Because game journalism is arguably less important than political or business journalism, it is leading the way in de-professionalization.

Fortunately, the solution to all this is pretty easy: Be willing to pay for quality news. If you care about game journalism or journalism more generally, find a venue that pays a living wage to talented professionals, and be willing to pay for it.

 

Addendum: Since a few people were confused, I am not a journalist. I teach and do research at Georgia Tech.

Activity Balance: An Alternative Approach to Manage Kids’ Screen Time

May 11, 2016 3 comments

Our boys (ages 10 and 12) love video games. And following the truism that every generation has media choices that baffle their parents, they also love watching videos of other people playing video games. They would play and watch all day, if we let them. On weekdays, by the time they get home from school and finish their homework, we don’t mind if they spend the free time that remains playing games. On weekends, we have always limited their screen time.

This policy has always chafed. A few months ago, our twelve-year-old protested, exasperated, “Do you have any evidence that too much video games is bad for you?” I patiently explained, “It’s not that video games are bad for you. It’s that we want you to have a balanced life—read a little, get some exercise, play some video games, practice your saxophone. If you did any one of those activities to the exclusion of others, we’d ask you to balance more: ‘Put down that book and go play a video game! You can’t read all day!’”

Five months ago, it occurred to me: Why not make the policy better match the rationale? Instead of limiting our kids’ screen time, we started requiring them to do a variety of activities each weekend day: read, exercise, and practice their musical instrument. As long as those things are done at some point during the day, they can have as much screen time as they like.

So far, the policy is a huge improvement. There is much less grumbling, and better balance in their weekend days. When asked how the policy is going so far, our twelve-year-old explained that he agrees that reading and exercise are important. (He’s less sure about music practice!) He also finds the new policy makes for a more relaxing weekend day. Our ten-year-old comments, “I like it better. The point is so that I do other things with my day, and I think it’s fair.”

The day-to-day implementation is not without challenges. We still need to remind them, “Did you exercise yet today?” And if the reminder comes too late in the day, it’s just not going to happen. If we forget to remind them and monitor, the new system deteriorates into a full day of screen time. But then again, the old system did too (“Did you forget to turn the timer on? How long have you been playing?”).

It’s encouraging to me that our kids have embraced the values that underlie this system—that you must make choices about how you spend your time, that certain activities matter, and that balance is important.

What approach does your family use? Leave me a comment!

Categories: balance, games, kids

On Immoderate Speech

May 1, 2016 6 comments

In my last post, I mentioned GamerGate, and tried to say some balanced things. A few people complained that I needed more evidence for one of my statements (and they’re right—I need to do more research), but most people were incredibly polite in their responses. I really appreciate that.

In the blog comments, a friend from grad school decided I had lost my mind, and let me know. That’s OK—we’ve been friends for over 25 years, and he’s a good guy to argue with over interesting things. I politely told him that I disagree, and that I have data to prove it. He is sticking to his view. I’m fine with that—we’ll agree to disagree.

After that, some folks who care about GamerGate attacked my friend in the blog comments. My friend had been immoderate in his tone. Some of the replies were polite requests for facts. Others were insults with less substance behind them, and the intensity of the comments escalated. It was, uh, interesting to watch….

One of the fundamental disagreements on the Internet today is about the role of immoderate speech. Is it OK to call someone a rude name or use obscene language? Are the rules different if the person is a public figure?

There’s actually, believe it or not, a correct answer to this question: It depends on where you are on the Internet. The Internet is not one place. Social norms are local. What it’s OK to say on 4chan or 8chan is not OK to say on your work mailing list or in the comments on a mainstream news site.

Social norms differ even on different parts of the same site. One team of students in my Design of Online Communities class this term studied Shirley Curry’s YouTube Channel. Shirley is a 79-year-old grandmother who plays Skyrim, and posts her unedited gaming streams. My students found that everyone is extremely polite on Shirley’s channel. The social norms are different on her channel than on the channels of anyone else streaming the same game.

None of this is new. I wrote about how social norms differ by site in the 1990s. But one new challenge for social norms of online interaction is Twitter. What neighborhood is Twitter in? It’s in all of them and none of them. What social norms apply? No one knows. And sometimes people who think they are interacting in a Shirley-like world end up in a conversation with people who think they are on 4chan. Oh dear. Neither side leaves that encounter happy. And that’s why a lot of online conflict starts on Twitter, and on other sites that don’t have clear social norms.

Regarding what sort of neighborhood this blog represents: I’ll post (almost) any comment, but I’d appreciate it if folks would keep things more Shirley-like. I don’t mind a bit of immoderate speech now and then. But the problem is that when you crank up the intensity, a significant group of people stop listening. Calm, polite discourse might actually influence people—we all might learn something.

The Rheingold Test

April 29, 2016 48 comments

In 1993, Howard Rheingold explained the new phenomenon of online communities to a skeptical public. To convince people that online communities are really communities, he told powerful stories of members of the California BBS The WELL supporting one another not just with words, but with their time and money. For example, WELL members sent books to a bibliophile who lost his library in a fire, and helped with medical evacuation for a member who became seriously ill while traveling.

I offer this definition:
An online group where members offer one another material support passes “the Rheingold Test.”

I’ve written before that it’s silly to argue about what “is a community.” We have different mental models for “community,” and online interaction can be like and unlike face-to-face groups in nuanced ways. But I will argue that when a group passes The Rheingold Test, something special is happening.

Each spring when I teach CS 6470 Design of Online Communities, I’m surprised by the groups my students discover that pass the Rheingold Test. Years ago, master’s student Vanessa Rood Weatherly observed members of the Mini Cooper brand community sending flowers to a member whose daughter had a miscarriage. It’s not what you’d immediately expect from a group of people brought together by a car brand. In our increasingly secular society, people are looking for a sense of belonging—and finding it in affinity groups.

This term, my students’ research projects found two more sites that pass the test when I wouldn’t have expected it. The first is Vampire Freaks, a site for Goth enthusiasts. In the press, Vampire Freaks is notorious for a few incidents where members have posted about violent acts and then gone ahead and committed them. But those incidents don’t characterize daily activity on the large and active site. Just like the Goth kids at your high school stuck together and would do anything for one another, the members of Vampire Freaks support one another in times of trouble. One member comments:

“I’ve helped quite a few of my friends [on Vampire Freaks] through a lot of hard times… family issues, losing parents, losing children, drug problems even. And just being there as someone that’s supportive, instead of putting them down. Even offering a place for people to come stay if they needed somewhere… I’ve had friends off this website that have actually stayed at my house… because they were traveling and didn’t have money for a hotel. So I’d known them for a few years and figured, it’s a weekend, I’ll be up anyways. Let them stay there and hang out.”

Grad students Drew Carrington, John Dugan, and Lauren Winston were so moved by the support they saw on the site that they called their paper “VampireFreaks: Prepare to be Assimilated into a Loving and Supportive Community.”

The second surprising example from this term is the subreddit Kotaku in Action (KIA), a place for supporters of GamerGate. Although the popular press portrays GamerGate as a movement of misogynist internet trolls, the truth is that the group is made up of a complex combination of members. KIA includes many sincere (and polite) civil libertarians, people tired of the excesses of political correctness, and people tired of the deteriorating quality of journalism and angry about the real-world impact of biased reporting. People who identify as GamerGaters also include people who dox those they disagree with (posting personal information online), send anonymous death and rape threats, and worse. (Those things are not allowed on the KIA subreddit, and moderation rules prevent them.) It’s a complicated new kind of social movement with its own internal dynamics. I’ll be writing a lot about them, but for now I just want to note that they have a strong sense of group identity, and help one another when in need. Posts on KIA show members donating money to a member in financial crisis, and to another who needed unexpected major dental work. They also banded together to raise money for a charity that helps male abuse survivors. They are not a viper’s nest (though there are some vipers in the nest). And they care about one another in the classic way.

When a site passes The Rheingold Test, it means there is something interesting happening there—that the whole is more than the sum of its parts. Do you know a site that passes the test? Leave me a comment.

 

Notes/Clarifications:

  • “GamerGate” is a social movement centered around a Twitter hashtag, among other things. GamerGate and the KIA subreddit are not the same thing.
  • Doxxing and threats have definitely occurred, but were sent by anonymous people. Whether or not those were “by people who affiliate with GamerGate” is disputable.
Categories: social computing

Social Media Insults and Donald Trump’s Hair

March 22, 2016 Leave a comment

Mocking someone’s appearance on social media is admitting rhetorical defeat (in a non-fashion context, anyway). If we’re at a fashion show in Milan, that’s another story. But if we’re talking about any other topic, I propose that if someone says you look like a donkey, then you should reply: “I win!” Because you have. Because making such an irrelevant remark means, “I have hostile feelings towards you, but I don’t have any substantive arguments.”

I was thinking about this last week when an acquaintance posted on Facebook a picture of Hillary Clinton next to one of Captain Kangaroo (in a similar outfit), with the caption “Who wore it better?” This from a Democrat. One of the most accomplished and experienced women in the world is a candidate for president, and this is what you post?

Which brings me to Donald Trump’s hair. Dear fellow Democrats: Please stop mocking Donald Trump’s hair. It’s not funny, and it’s not enlightening. And every time you do it, you are screaming to the world: “I give up. I have nothing of substance to say. But hey, here’s a hair insult.”

Someone did indeed call me a donkey on Twitter last week. I am honored—I actually earned being trolled! Actually, my great-uncle wrote Francis the Talking Mule, so I think the troll is confused—I have more mule in my background than donkey.

I have been researching GamerGate lately, and finding that both sides have valid points to make. And both sides have a mix of nice, principled people and angry people who are spewing bitter nonsense in public. I’ll have a lot more to say about this in the future. But for now, my advice for both sides is to calm TF down, and keep your sense of humor. And remember that if someone throws an irrelevant insult at you (like your outfit looks like Captain Kangaroo), then just laugh and say, “I win!”

Categories: Uncategorized

Everyday Racism and Social Networks

March 9, 2016 2 comments

Everyone is a little bit racist, a little bit sexist. Mahzarin Banaji can prove it. When she asks people, “Do these two words go together?”, most people will click “yes” slightly quicker if shown “man” and “scientist” than “woman” and “scientist.” Even women scientists. You can do the same experiment for racism. It’s not that a few evil people are sexist or racist—we all are, to some degree.

Despite my awareness that everyone is a little bit racist, I am still astonished by the regular demonstration of that racism on the website Nextdoor.com. Nextdoor is a discussion site for people in a local neighborhood. Members share recommendations for plumbers, discuss traffic problems, and offer items for sale. It’s a nice site. Security is a perennial topic of discussion. There has been a series of burglaries in my neighborhood recently, so residents are on the alert for “suspicious” people. And evidently, according to my neighbors, any African American in our neighborhood may be “suspicious.” Here’s yesterday’s example:

This morning was dog was ill. so I took her outside around 5:15 AM. I saw a car driving slowly … and stopping. The car stopped twice, a tall African American man wearing a dark sweatshirt dark pants and got out, kept his head lights on and walked up towards a house with his cell phone out. Then, walked back down to his car, got in, and continued driving slowly down the street. He kept his headlights on the entire time, even when the car was parked in the street. The car looked to be a beige/gold Mitsubishi. I was half asleep when I saw all of this and realized later I should have called 9-1-1. Just wanted folks to be aware.

Does that sound suspicious to you? Fortunately, another Nextdoor member pointed out:

Pretty sure he delivers the paper- i see him out several times a week- better safe than sorry though

I would laugh if I didn’t feel like crying. Because this happens all the time. Would people have worried that a man delivering papers was suspicious if he were white? I can’t prove that race was a factor here. But most of these incidents are about people of color. And it keeps happening.

In an incident last year, a mother posted an urgent alert that there was an attempted abduction of her seven-year-old daughter, who had been out walking the dog. There was a white van, following a white pickup truck. Right as her daughter was walking by, an African American man opened the door of the van and came towards her! Her daughter ran all the way home! The urgent alert received dozens of concerned replies. The police were called. And later that morning, I saw construction workers at a site three blocks away, with their white van and white pickup truck parked on the street.

It might help if people were simply more aware of this as a problem, but alerting people is hard. A couple of weeks ago, a Nextdoor member in our neighborhood tried to draw attention to the problem of racism on the site, and was attacked by other participants. I waded in merely to say that I thought she might have a point, and I got attacked too. The moderator shut down the discussion thread, citing “policy violations on both sides.” So much for civil discourse.

The problem is not unique to Nextdoor—it’s just particularly easy to observe there. The site Hollaback takes an unusual approach to this problem: it discourages mentions of race. The purpose of Hollaback is to support discussion of street harassment. If someone catcalls you on the street or gropes you on the subway, you can go to Hollaback to share your experience—to express your feelings, get support, and alert the community. But the site discourages posters from mentioning the race of their harasser:

Due in part to prevalent stereotypes of men of color as sexual predators or predisposed to violence, HollaBackNYC asks that contributors do not discuss the race of harassers or include other racialized commentary

The more I see the everyday racism of my neighbors on Nextdoor, the more I see the reasons for this policy. But it still feels like an extreme solution. (Someone groped me, and I can’t say what they looked like? I can hear the cries of political correctness gone mad.)

There really are (occasional) burglars in our neighborhood, and Nextdoor serves an important function by helping people alert one another. But is it possible to be a black man in our neighborhood and not be reported as suspicious?

The long-term solution is for all of us to work to be less racist, and to confront the tacit stereotypes we all hold. In the short term, how do we stop social networks from making the problem worse? Leave me a comment.

Categories: Uncategorized

More on TOS: Maybe Documenting Intent Is Not So Smart

February 29, 2016 1 comment

In my last post, I wrote that “Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law.” My friend Mako Hill (University of Washington) pointed out to me (quite sensibly) that this would get people into more trouble: it asks people to document their intent to break the TOS. He’s right. If we believe that under some circumstances breaking TOS is ethical, then requiring researchers to document their intent is not strategic.

This leaves us in an untenable situation. We can’t engage in a formal review of whether a particular TOS violation is justified, because that would make potential legal problems worse. Of course we can encourage individuals to be mindful and not break TOS without good reason. But is that good enough?

Sometimes TOS prohibit research for good reason. For example, Yik Yak is trying to abide by users’ expectations of ephemerality and privacy. People participate in online activity with a reasonable expectation that the TOS are rules that will be followed, and they rely on that in deciding what they choose to share. Is it fair to me if my content suddenly shows up in your research study, when that’s clearly prohibited by the TOS? Do we really trust individual researchers to decide, with no outside review, when breaking TOS is justified? When I have a tricky research protocol, I want review. Just letting each researcher decide for themselves makes no sense. The situation is a mess.

Legal change is the real remedy here: passing Aaron’s Law, for example, and possibly an exemption from TOS for researchers in cases where user rights are scrupulously protected.

Do you have a better solution?  I hope so. Leave me a comment!
