Archive for the ‘peer-production of content’ Category

It’s about Underpayment in (Game) Journalism

June 4, 2016 13 comments


A Twitter acquaintance shared this video with me last night: Buzzfeed’s Color Cabal Conspiracy – Harmful News. In it, the narrator critiques a Buzzfeed article where a naïve writer takes the words of trolls as truth, and Buzzfeed publishes it (with an added footnote later that it might not be true).

I’ve been studying members of the #GamerGate movement, and I’ve seen some awful stuff posted online: misogyny, rape threats, racism, and more. But at the same time, I also see that a subset of GamerGate supporters are reasonable people, and the movement has some valid points. One point is that journalism is in crisis.

The tagline for GamerGate is “It’s about ethics in game journalism.” I object to the use of the word “ethics.” Using that word implies that people are deliberately writing incorrect things. I think that’s giving the writers too much credit, assuming they know the truth and are deliberately subverting it. I’m sure there are cases where that is true, but I will argue that in the overwhelming majority of cases, Hanlon’s Razor comes into play: never attribute to malice what can be explained by simple incompetence.

Changing business models have created problems for the current state of journalism—all the incentives are out of whack. If a freelance writer is paid a couple hundred dollars for a story, how much time can they afford to spend on it? I used to write short articles for Wired when I was a graduate student, and made enough for a bit of extra spending money—like going out to dinner or on a weekend trip. But for an adult with rent to pay, it can’t even scratch the surface. And payment has dropped dramatically over the last several years.

If you pay people pocket money, you get amateurs. Even worse, if you pay per click, you get writers pandering to prurient interests. Jack Murtha writes in the Columbia Journalism Review:

[Pay-per-click] was once the crown jewel of content-heavy startups like Gawker, where young writers typed dozens of articles each week, aggregating and snarking their way to a digital-media empire. Now it’s something of a financial loophole used by content mills that prey on desperate young journalists, who scrape together clickbait in exchange for pennies.

Contrast the situation of a pay-per-click writer with that of a salaried journalist. The person on salary is rewarded for careful work, and is assigned to cover topics based on their importance rather than self-selecting what they think will earn clicks. In his foundational work on the nature of peer production, Harvard law professor Yochai Benkler notes that a strength of peer production is that individuals self-identify for tasks they are qualified for. That works pretty well for things like open-source software and Wikipedia. It even works pretty well for unpaid writing: expert bloggers often self-identify to write pieces on topics they care about and are knowledgeable about. But it breaks down in journalism, where the peer-production economy overlaps with the micropayment economy, and we get, as Murtha notes, clickbait in exchange for pennies.

Instead of saying “It’s about ethics in game journalism,” I suggest that GamerGate folks say, “It’s about underpayment in game journalism.” And we might as well remove the word “game”: It’s about underpayment in journalism. I will argue that the gaming press is a bellwether for the rest of the industry. Because game journalism is arguably less important than political or business journalism, it is leading the way in de-professionalization.

Fortunately, the solution to all this is pretty easy: Be willing to pay for quality news. If you care about game journalism or journalism more generally, find a venue that pays a living wage to talented professionals, and be willing to pay for it.


Addendum: Since a few people were confused, I am not a journalist. I teach and do research at Georgia Tech.

Risks of User-Generated Content: Nitrates? What Nitrates?

May 28, 2013 1 comment

More studies demonstrating serious health risks of preserved meats have been published recently. My kids love salami, so I decided to go to the Boar’s Head website to see what chemicals were in their products. I couldn’t find any ingredient lists on the site (and it’s awkward to ask the deli counter worker to read you the list!). But I was amused to find this:


Notice that the number one search on the website is something they have no answer for! The “top searches” list must be generated automatically. (A search for “nitrites” sends you to their FAQ, with an article from The American Meat Institute saying the preservatives are safe.) One risk of user-generated content is that your users may highlight exactly the thing you don’t want to talk about!

Finding Your Twenty-Eights: Why You Need to Talk to Your Users

February 21, 2012 Leave a comment

Here’s a user behavior puzzle for you: Why would you ban someone whose offense is not showing up?

Kurt Luther‘s Pipeline software is being used for some impressive projects lately. In November and December, a team of artists used it to create an interactive advent calendar they called Holiday Flood. Twenty-eight artists from twelve countries worked to create two pieces of art for each day of the song “The Twelve Days of Christmas,” and to embed a hidden tag in each artwork that, together with the others, formed a holiday greeting card for the Newgrounds community. Kurt and I have been observing their activity on Pipeline to try to understand online collaboration on creative projects.

As in any complicated collaboration, dropouts occur. When someone drops out, the project leader typically needs to find a replacement. One artist dropped out of Holiday Flood complaining he was too busy, but the next day joined a different project on Newgrounds. Holiday Flood leader Renae banned him from the project, removing his access to discussions and work in progress. This intrigued us: why would someone ban a user who doesn’t show up anyway? My hunch was that Renae was annoyed with him. How could you quit our project saying you’re too busy, but then join another one the next day? The nerve! So banning him was more an emotional act than a functional one. A feature we implemented for practical reasons was used for a more symbolic purpose.

Well, that was my hunch. But when we interviewed Renae, she told a different story. Each Pipeline project displays a count of the total number of participants. After Renae recruited a replacement artist, she kept looking at the count, and it said 29 participants. But she knew it was really supposed to be 28. She banned Mr. Dropout to correct the counter to 28.

The moral of the story, of course, is Talk To Your Users. I regularly get papers to review that do extensive data analysis on an online site and then speculate as to why people behave the way they do–but never ask a live person a single question. In research on social computing, mixed methods are critical. I speculated that our leader was angry at Mr. Dropout, but in truth she just wanted the counter to say 28. There are twenty-eights lurking in your data set: explanations for user behavior that you cannot guess.

Should you believe Wikipedia?

May 18, 2011 15 comments

I got a nice email today from librarian Tedd Guedel of Herzing University asking about the reliability of Wikipedia (he saw the announcement for a talk I gave last October, “How Wikipedia Really Works, and What This Means for the Nature of ‘Truth’”). With Tedd’s permission, I quote:

“As a rule, I steer students away from Wikipedia as a valid academic source. Do you have a power point or any information that you gave during your lecture that you could share with me? … Our old IT director Loved Wikipedia because “it is maintained by the masses” I do not like Wikipedia because “it is maintained by the masses.”

So who’s right, Tedd or his IT director? Luckily in this case I don’t have to take sides–they both are right. The answer is: what page on Wikipedia? How many people have edited it? How many people are ‘watching’ it? I will argue that a popular, high profile Wikipedia page is the most accurate reference that has ever been created in the history of the written word. (Really!) A low-profile page that few people have edited is unreliable. It all depends on how many people have checked the article and its references.

To explain why, it might help to discuss how a refereed journal article is reviewed. Articles in high-quality, peer-reviewed journals are generally considered the gold standard for reliability. An author submits an article to a journal, and it is sent out to approximately three experts in the field for review. Each expert reads the paper carefully, and sends detailed comments. The experts are anonymous to the author, and can make critical comments without fear of giving offense. The author usually revises the article, and the experts read it again. Most articles are rejected. Nothing is published until the experts are happy. The reliability of the article comes from the number of people who have reviewed it, the special expertise of those people, and how carefully they reviewed it. Once the article is printed, it cannot be updated. Corrections have to happen in a follow-on article, which may never be written, and may not be evident when you are looking at the original. Exact customs vary by field, but this is the general pattern.

What happens when a popular Wikipedia article is created? The birth of a Wikipedia article on a high-profile topic is a beautiful thing to witness. For example, Brian Keegan notes that in the 100 hours after the Sendai earthquake and tsunami in Japan, 1,727 people made 6,931 edits to 49 relevant articles. The main Sendai quake page at the time of this writing has 289 references. Everything about it has been checked and rechecked. Today 349 people have the article on their “watchlist”–the list of pages they monitor for changes. (Not everyone actually checks their watchlist of course.) Vandalism on Wikipedia is typically removed quickly–Fernanda Viegas and Martin Wattenberg found that it is often corrected in seconds.

Next time someone is nominated to the US Supreme Court or becomes the next Pope, watch their page as it evolves.  On any Wikipedia page, you can click the “View History” tab and see all the edits to the article over time.  Over the period of a few days, the newly famous person’s page evolves from a few sentences to a complete concordance on their life and work–with every fact supported by references, and anything unsupported removed quickly. It’s astonishing to witness.
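If you’d rather explore a page’s history programmatically than click through the “View History” tab, the same data is exposed by the standard MediaWiki query API. Here’s a minimal sketch in Python that just builds the request URL (the endpoint and parameter names are MediaWiki’s; the article title is only an example):

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def history_query(title, limit=50):
    """Build a MediaWiki API URL requesting an article's recent revisions
    (who edited, when, and why) plus its watcher count."""
    params = {
        "action": "query",
        "format": "json",
        "titles": title,
        "prop": "revisions|info",
        "rvprop": "timestamp|user|comment",
        "rvlimit": limit,
        # Watcher counts are only reported when enough accounts watch the page.
        "inprop": "watchers",
    }
    return API + "?" + urlencode(params)

# Fetch this URL (e.g., with urllib.request) to get JSON describing the
# page's recent edit activity.
print(history_query("2011 Tōhoku earthquake and tsunami"))
```

Watching the timestamps and edit comments stream in during a breaking-news event gives a vivid, quantitative picture of the checking and rechecking described above.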

So what would you rather have: something checked by three experts over six months to a year, or something checked by 1,727 people in the first 100 hours? And remember that many of those 1,727 people are checking references and not allowing anything that isn’t documented. Also remember that the refereed journal article is fixed at a moment in time; beyond that, any errors or new developments aren’t included. A Wikipedia article is updated continuously. Of course the purposes of a journal article and an encyclopedia article are entirely different: one presents new knowledge, and the other summarizes consensus and explicitly forbids original research. They’re not comparable. But if you believe that the reliability of knowledge is in relation to how many people check it and how carefully, then a popular Wikipedia article does pretty well. Amazingly well, in fact.

Particularly surprising to me is the fact that topics on controversial issues can be quite good. For example, I would have guessed that the article on whether vaccines cause autism would be a cesspool of controversy and misinformation. But it’s not. It reviews the history of the controversy in comprehensive detail (supported by references) and unequivocally says that the original paper suggesting a connection has been proven a fraud and there is a scientific consensus that there is no such link.  Hooray!

But those are all examples of high-profile articles. What about low-profile ones? Click the “Random article” link in the left-hand column of any Wikipedia page, and see what you get. Often you’ll get something that’s barely been started–a “stub” in Wikipedia parlance. It’s possible to put something unsupported in an obscure article, and it may not be checked. Famously, a prankster wrote that journalist John Seigenthaler was a suspect in the assassination of John F. Kennedy. The error remained there for over six months until a friend of Seigenthaler’s noticed it. I should note that this happened in 2005, and the culture of Wikipedia has changed since then–things are now checked more carefully. But is it possible for a prank or honest error to linger? If it’s in an article that is not high profile, absolutely.

Another problem is circular references. It happens like this: a journalist uses Wikipedia as a source for some information, and publishes an article without having any other source. Later, a kind Wikipedia editor notices that the article is unsupported and searches for good references–and finds the journalist’s article and cites it! (Oops…)

It’s not surprising that people are confused about whether to believe Wikipedia–the truth is complicated. I believe today we have a crisis in epistemology–no one knows what to believe any more. But it’s also a teachable moment. A moment to teach students about peer review and the importance of references and how to think critically about the reliability of everything they read.

Internet Public Shaming

August 10, 2010 3 comments

I laughed at this when I saw it: “Girl quits her job on dry erase board, emails entire office (33 Photos).” In it, you’ll see a woman holding a small dry erase board with messages. She quits her job, and describes what a loser her boss “Spencer” is. Among other things, she accidentally overheard him calling her a “HOPA” (hot piece of a**). This is her revenge. Since Spencer installed monitoring software on everyone’s computers to see if they’re wasting time (and she as his assistant has the passwords), she outed him for being on non-work related sites a lot –including being on Farmville 19.7 hours a week.

My first reaction was amusement. She seems pretty cool. A bit like Heather Armstrong of Dooce–everyone’s hip friend with attitude. Then I thought, wait, I bet Spencer just has Farmville running in the background–he can’t be playing that many hours a week. And why is his bad breath really relevant here? And isn’t this all horribly mean? OK, he sounds like an annoying boss. Maybe even a chauvinist pig. But is public shaming the right answer?

Of course my next thought was, I wonder if this is real or just performance art. But either way, it’s part of a disturbing trend–the self righteous using the Internet to do more harm than good while “righting wrongs.” Clay Shirky wrote about this in his book “Here Comes Everybody.” And as he points out, the phenomenon of using the Internet for public shaming is particularly intense in Asian countries, where the “human flesh search engine” can track people down and ruin their lives. OK, the girl on the subway in Korea should have cleaned up after her dog–no question about it. But did she deserve to be turned into a pariah? Wikipedia tracks similar incidents in its article on Internet vigilantism.

The good news is, this medium gives formerly dis-empowered people a voice. Instead of just quitting and slinking off, White-Board Girl has a recourse. Instead of just getting angry as you slip in dog poo on the subway car, you can collaborate to identify the inconsiderate dog owner. But the problem is that the response is out of proportion to the crime, especially when you consider that the Internet is a largely archival medium. (An old cliché says taking information off of the Internet is like taking pee out of a pool.) So Puppy Poo Girl and Spencer will have their judgment lapses follow them potentially indefinitely. And that seems a bit too much–approaching Nathaniel Hawthorne’s scarlet A or Neal Stephenson’s tattoos that say “poor impulse control.”


WhiteBoard Girl (or “Jenny DryErase”) is indeed a hoax. Which is fortunate for the Spencers of the world, real or imagined!

Kids and Copyright

March 18, 2010 8 comments

[Updated with some new info & clarifications.]

A while back I asked Larry Lessig: kids can’t agree to contracts. So isn’t there a problem with sites where kids upload their intellectual property? They can’t agree to the license….

Finally got an answer back from Larry. Here’s my attempt at a layman’s summary:

  • Kids own intellectual property (IP) they create.
  • Kids can agree to license their IP.
  • Kids can later “disaffirm” any license they enter into, until about one year after they become adults.
    • In California, a special process can be followed to prevent future disaffirmation.

I assume this means that a site could simply later remove the content at the minor’s request, and wouldn’t be held responsible for the fact that others have likely copied that material. (An old joke says, “Taking information off the Internet is like taking pee out of a pool.”)

Andres Monroy-Hernandez (lead developer of the Scratch website) asks an interesting follow-up question: What happens to derivative works in this instance? I imagine you’d have to deal with that on a case-by-case basis–and it could get complicated.

I find all this reassuring. I was worried that people posting kids’ content online might somehow be liable for doing so. But if I’m understanding things correctly, it simply means “if they ask you to take it down, take it down.” (Though on the other side of the argument, Steven Hetcher at Vanderbilt argues that contracts between minors and websites that post their content may be “unconscionable” and hence invalid.)

I got interested in kids and copyright because I’m interested in peer production of content, and the learning opportunities made possible through creating things and sharing them. But from talking with Larry, it struck me that the much bigger issue seems to be the implications that copyright law has for schools. In particular:

  • Schools can’t put student work online without students’ permission, because students own copyright to their own work.
  • A teacher who allows a student to place harmful content about herself online on a school website may be held to have acted negligently. School districts have an affirmative duty to take all reasonable steps to protect their students from foreseeable harm.

Fascinating stuff!

The New Expertise

March 17, 2010 1 comment

Though the old expertise is still there, it’s true that a new kind is emerging. Peer-produced content both complements professional content and competes with it, to varying degrees in different contexts. More from my contributions to the PBS panel:

A great domain in which to examine these issues is healthcare. Work by folks like Lena Mamykina on online and mobile support for people with diabetes suggests that we need to teach patients to be scientists of their own disease. It doesn’t work to simply go to the doctor, get instructions, and follow them. This is particularly true of diabetes, but it is a profound message for healthcare more generally–and beyond healthcare, for life more broadly.

The wealth of peer-produced information and support is an essential component of helping people make that transition: to take on more agency in their care, and in their own lives more broadly. But at the same time, individuals do not yet have the tools or training to sort good information from bad.

We are most definitely in the early days. We are left with a design challenge: to develop tools that better support individuals in making sense of all the available information and misinformation, and to create communities where sense-making is a collaborative effort, and your friends are there to help with the knowledge-building discourse.
