Some time in the 1990s, I had terrible algae in my fish tank. I posted a question about it on USENET on the group alt.aquaria, and got a helpful answer from a guy I’ll call Oscar. Oscar really knows his fish, and spent a tremendous amount of time answering people’s questions on the group.
Later that day, I wandered into a crazy USENET flame war. Someone had created a group called alt.good-news. The group’s description said it was a place to share happy news—kind of like Upworthy in the days before the web. However, the phrase “the good news” has significant meaning to some Christians—it’s the good news that Christ is here to save us. Within hours, alt.good-news ironically deteriorated into a nasty flame fest with people arguing about whether it was a Christian group. I saw Oscar posting on the group, and sent him a private email: “Isn’t this flame war crazy? This group was supposed to be happy!” And he mailed me back:
“Don’t assume people always behave the way they behave when they talk about fish.”
It took only a moment’s exploration for me to learn that Oscar was actually a legendary USENET troll. Who just happened to also be an expert fish keeper who took pride in politely helping others with fish questions.
I think about Oscar a lot when I contemplate the mess that is Twitter. Social norms tend to be local, but on Twitter there is no local. People with radically different ideas of appropriate behavior run directly into one another. Or as one wise Redditor commented, “Twitter is like everyone shitposting on the same subreddit with no moderation.” Unfortunately, the blocking mechanisms we have at our disposal are crude. There’s no easy way to say, “I want to read what Oscar has to say about fish, but only about fish.” You either block Oscar or you don’t. And if you do decide to block him, there’s no easy way to say, “And please block him on the three other sites that we both use.” Our existing blocking mechanisms are too coarse-grained and too weak.
The design of blocking mechanisms is in its infancy. Even farther behind is the design of understanding mechanisms. What if we could somehow scaffold people coming to a more nuanced understanding of the other person’s point of view, instead of just dismissing them entirely?
Got an idea for how to support more nuanced blocking or understanding? Leave me a comment!
A Twitter acquaintance shared this video with me last night: Buzzfeed’s Color Cabal Conspiracy – Harmful News. In it, the narrator critiques a Buzzfeed article in which a naïve writer takes the words of trolls as truth, and Buzzfeed publishes it (with a footnote added later noting that it might not be true).
I’ve been studying members of the #GamerGate movement, and I’ve seen some awful stuff posted online: misogyny, rape threats, racism, and more. But at the same time, I also see that a subset of GamerGate supporters are reasonable people, and the movement has some valid points. One point is that journalism is in crisis.
The tag line for GamerGate is “It’s about ethics in game journalism.” I object to the use of the word “ethics.” Using that word implies that people are deliberately writing incorrect things. I think that’s giving the writers too much credit, assuming they know the truth and are deliberately subverting it. I’m sure there are cases where that is true, but I will argue that in the overwhelming majority of cases, Hanlon’s Razor comes into play: never attribute to malice what can be explained by simple incompetence.
Changing business models have created problems for the current state of journalism—all the incentives are out of whack. If a freelance writer is paid a couple hundred dollars for a story, how much time can they afford to spend on it? I used to write short articles for Wired when I was a graduate student, and made enough for a bit of extra spending money—like going out to dinner or on a weekend trip. But for an adult with rent to pay, that kind of money doesn’t even begin to cover the bills. And payment has dropped dramatically over the last several years.
If you pay people pocket money, you get amateurs. Even worse, if you pay per click, you get writers pandering to prurient interests. Jack Murtha writes in the Columbia Journalism Review:
[Pay-per-click] was once the crown jewel of content-heavy startups like Gawker, where young writers typed dozens of articles each week, aggregating and snarking their way to a digital-media empire. Now it’s something of a financial loophole used by content mills that prey on desperate young journalists, who scrape together clickbait in exchange for pennies.
Contrast the situation of a pay-per-click writer to a salaried journalist. The person on salary is rewarded for careful work, and is assigned to cover topics based on their importance, rather than self-selecting what they think will earn clicks. In his foundational work on the nature of peer production, Harvard law professor Yochai Benkler notes that a strength of peer production is that individuals self-identify for tasks they are qualified for. That works pretty well for things like open-source software and Wikipedia. And it even works pretty well for unpaid writing—expert bloggers often self-identify to write pieces on topics they care about and are knowledgeable about. But it breaks down in journalism, where the peer-production economy overlaps with the micropayment economy, and we get, as Murtha notes, clickbait in exchange for pennies.
Instead of saying “It’s about ethics in game journalism,” I suggest that GamerGate folks say, “It’s about underpayment in game journalism.” And we might as well remove the word “game”: It’s about underpayment in journalism. I will argue that the gaming press is a bellwether for the rest of the industry. Because game journalism is arguably less important than political or business journalism, it is leading the way in de-professionalization.
Fortunately, the solution to all this is pretty easy: Be willing to pay for quality news. If you care about game journalism or journalism more generally, find a venue that pays a living wage to talented professionals, and be willing to pay for it.
Addendum: Since a few people were confused, I am not a journalist. I teach and do research at Georgia Tech.
Our boys (ages 10 and 12) love video games. And following the truism that every generation has media choices that baffle their parents, they also love watching videos of other people playing video games. They would play and watch all day, if we let them. On weekdays, by the time they get home from school and finish their homework, we don’t mind if they spend the free time that remains playing games. On weekends, we have always limited their screen time.
This policy has always chafed. A few months ago, our twelve-year-old protested, exasperated, “Do you have any evidence that too much video games is bad for you?” I patiently explained, “It’s not that video games are bad for you. It’s that we want you to have a balanced life—read a little, get some exercise, play some video games, practice your saxophone. If you did any one of those activities to the exclusion of others, we’d ask you to balance more: ‘Put down that book and go play a video game! You can’t read all day!’”
Five months ago, it occurred to me: Why not make the policy better match the rationale? Instead of limiting our kids’ screen time, we started requiring them to do a variety of activities each weekend day: read, exercise, and practice their musical instrument. As long as those things are done at some point during the day, they can have as much screen time as they like.
So far, the policy is a huge improvement. There is much less grumbling, and better balance in their weekend days. When asked how the policy is going so far, our twelve-year-old explained that he agrees that reading and exercise are important. (He’s less sure about music practice!) He also finds the new policy makes for a more relaxing weekend day. Our ten-year-old comments, “I like it better. The point is so that I do other things with my day, and I think it’s fair.”
The day-to-day implementation is not without challenges. We still need to remind them, “Did you exercise yet today?” And if the reminder comes too late in the day, it’s just not going to happen. If we forget to remind them and monitor, the new system deteriorates to a full day of screen time. But then again, the old system did too (“Did you forget to turn the timer on? How long have you been playing?”)
It’s encouraging to me that our kids have embraced the values that underlie this system—that you must make choices about how you spend your time, certain activities are important, and balance is important.
What approach does your family use? Leave me a comment!