Posts Tagged ‘social computing’

Pokémon Go and Work/Life Balance

January 2, 2017 Leave a comment

I love casual games, though I’ve written before about how they can sometimes be disruptive. Surprisingly, I do not find Pokémon Go particularly disruptive. As promised, it promotes walking (you get credit toward hatching Pokémon eggs for the distance you walk). And it has other interesting qualities I could not have predicted when I started playing it.

Most importantly and most surprising: It promotes better work/life balance. When I am out for a walk, if I have Pokémon Go open, I get credit for the distance walked. As a result, I tend to leave the app open, which means I don’t check my email. That means I am more truly not at work, for my walk.

The bad news of course is that I’m looking at my phone, rather than at the scenery. But generally speaking I find I still appreciate where I am, and enjoy chatting with people I am walking with. It takes little of my attention.

If I am playing while walking with people who are not playing, I never stop to do a gym battle. A gym battle takes a couple of minutes, and that’s too long to ask friends to wait. It’s also important to leave the sound off around others. Many players leave the sound off all the time, but I leave it on when I’m walking alone, because the audio feedback means I spend less time looking at my phone. After you throw a Pokéball, it takes a few seconds to see whether you caught the Pokémon. If you listen to the sound effects, you can stop looking at the phone and listen for whether you caught it. But if I’m walking with other people, the sound is annoying, and also misleading—they assume I’m more distracted than I really am.

My second surprise: it is a participatory exploration of probability and economics. Probability is fundamental to the game—each time you try to catch a Pokémon, a circle around it shows whether you have a high (green), medium (yellow), or low (red) chance of catching it. A player is constantly calculating: How hard will this be to catch, and is it worth it? It’s a constant reminder of one of the basic laws of probability: past trials don’t affect the outcome of the next one.

When you try to catch a Pokémon, you have to decide: Am I going to throw a regular Pokéball, a great ball, or an ultra ball? The latter two are increasingly rare, but have higher catch rates. The more powerful the Pokémon, the harder it is to catch, and the higher quality Pokéball you need to use. If I use too cheap a ball, then I have to try again, and again—and might miss catching it entirely, if it runs away. Choosing a regular Pokéball might mean I waste five or more balls, rather than using one or two great balls. It’s like the game is whispering in my ear over and over: don’t be cheap, don’t be cheap….

An economist friend noticed right after the launch of the game that it demonstrates the “sunk cost fallacy”: If it was worth throwing those previous six Pokéballs at that Pidgey, it’s worth throwing one more.
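Both ideas—that each throw is independent, and that the previous misses should not change your decision—can be made concrete with a quick simulation. This is only a sketch: the 20% catch rate below is invented for illustration (the game’s real rates aren’t public), and the model ignores ball types, flee chances, and everything else.

```python
import random

def throws_to_catch(p_catch, rng):
    """Count throws until one succeeds. Each throw succeeds with
    probability p_catch, independent of every throw before it."""
    throws = 1
    while rng.random() >= p_catch:
        throws += 1
    return throws

rng = random.Random(0)
p = 0.2  # hypothetical catch rate for a regular Pokéball
trials = [throws_to_catch(p, rng) for _ in range(100_000)]
avg = sum(trials) / len(trials)

# For independent throws the expected count is 1/p (geometric distribution).
print(f"average throws: {avg:.2f} (theory: {1/p:.2f})")

# The sunk-cost check: among trials that needed more than 6 throws,
# the *additional* throws still average about 1/p. The six misses
# bought nothing toward the next throw.
long_runs = [t - 6 for t in trials if t > 6]
print(f"extra throws after 6 misses: {sum(long_runs)/len(long_runs):.2f}")
```

The second printed number matching the first is the memorylessness property: six wasted Pokéballs tell you nothing about the seventh, which is exactly why “it’s worth throwing one more” is a fallacy.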

Pokémon Go is good for certain times and places. It’s great for travel, because different places have different Pokémon. It was fun to catch all the Growlithes in San Diego (a Pokémon common there and rare in most other cities). It was particularly fun to use at the San Diego Zoo Safari Park, where the Pokémon had a safari-like quality on the day I visited. When you’re visiting a zoo, you do a lot of walking, and wander from exhibit to exhibit. Playing Pokémon Go at the zoo made the whole experience more fun. When a rare Pokémon (a Snorlax) appeared on the radar, I got to chat with strangers from around the zoo who came to try to catch it. On the other hand, it was also nice to go to a number of places (like the lighthouse and beach at Point Loma in San Diego) where there was no cell service, and I put my phone away. The trick, of course, is knowing when to put your phone away when there still is cellular service.

I won’t lie—I do sometimes play when I shouldn’t, particularly when I’m somewhere I don’t want to be. A Pokéstop is a place where you can get free Pokéballs and other useful items every five minutes. Fortunately or unfortunately, there is a Pokéstop accessible from a conference room where I have a number of boring meetings. In a long meeting, I find that a casual game’s distraction is so light that I still follow the discussion and am less likely to zone out entirely. But it’s perceived by others as disrespectful (if they catch me with my phone under the table), and I probably shouldn’t do it. Like any casual game, Pokémon Go requires mindfulness about when you choose to play.

Whether Pokémon Go survives in the long or even medium term depends on whether the developers can keep adding features and special events to keep it interesting. But for now, it’s a casual game that fits into my life better than others.

Naturalistic Inquiry and Silk Purses

September 22, 2016 2 comments

For me as a social computing researcher, some of the most insightful and useful parts of the research literature are qualitative sociology written thirty to fifty years ago. For example, The Great Good Place by Ray Oldenburg (1989) documents how people need a “third place”—one that is neither work nor home. Within the third places Oldenburg studied, he observed timeless patterns of human behavior like the role of “the regulars” and how people greet a newcomer. Or consider The Presentation of Self in Everyday Life by Erving Goffman, originally published in 1959. Goffman documents how people are always performing for one another, and the impressions they deliberately give may differ from the impressions given off (what people actually infer from someone’s performance). How could we even begin to understand social behavior on sites like Facebook and Twitter without Goffman as a foundation?

When I think about this style of research, I often think about how it would be reviewed if it were submitted for publication today. No doubt Goffman would be told his work was insufficiently scientific—how do you really know there are “impressions given off”? Can you measure them? I doubt he’d make it through the review process.

How do we know that Goffman’s observations are “true”? Honestly, we don’t. The “proof” is in how useful generations of researchers to follow have found his observations. Goffman took messy qualitative data and used it as a touchstone for his own powers of inference and synthesis. His observations aren’t in the data; they emerged from his examination of the data. It’s probably impossible to make a firm statement about how valuable this type of work is at the time of writing—the evidence is in the value that others find in building on the work later.

Contemporary researchers are still working in this style. I call writing in this mode “silk purse papers.” They say you can’t make a silk purse from a sow’s ear—except sometimes you can. Insights emerge from messy data in an act of interpretation. The first paper of my own that I would put in this category is “Situated Support for Learning: Storm’s Weekend with Rachael,” published in the Journal of the Learning Sciences in 2000. In it, I take a case study of a weekend in which one girl taught another to program, and draw broader conclusions about the nature of support for learning: support is more effective when it is richly connected to other elements of the learning environment. I can’t prove to you that those observations are valid, but I can say that others have found them useful.

I’m always vaguely embarrassed when I’m in the middle of writing a silk purse paper. A tiny little voice in my head nags, “You’re making stuff up. You could use this data to tell a dozen different stories. You’re saying what you want to say and pretending it’s science.” But then I remember Goffman, and try to cheer up. It may not be “science,” but it’s certainly a valid mode of naturalistic inquiry. Take a second and think about foundational writings in your area of expertise. How many of those papers are silk purses?

In fact, I wonder if we are all always making silk purses. Just because a paper is quantitative does not make it any less interpretive. My colleagues and I just finished a complicated analysis of content people post on Facebook publicly versus to friends only (to appear in CSCW 2017). It’s totally quantitative work. But what we chose to measure and not measure, and how we explain what it means, strike me as acts of interpretation (to be evaluated by the passage of time) just as much as the complicated stories we tell with qualitative data.

Where we as a field get into trouble is when people with naïve ideas about “science” start reviewing qualitative work, and get it wrong. Sociology of science and technology should be featured in more graduate programs in human-computer interaction and human-centered computing. And as a field we need to have a more nuanced conversation about interpretive work, and how to review it.

More on TOS: Maybe Documenting Intent Is Not So Smart

February 29, 2016 1 comment

In my last post, I wrote that “Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law.” My friend Mako Hill (University of Washington) pointed out to me (quite sensibly) that this would get people in more trouble—it asks people to document their intent to break the TOS. He’s right. If we believe that under some circumstances breaking TOS is ethical, then requiring researchers to document their intent is not strategic.

This leaves us in an untenable situation. We can’t engage in a formal review of whether a particular TOS violation is justified, because that would make potential legal problems worse. Of course we can encourage individuals to be mindful and not break TOS without good reason. But is that good enough?

Sometimes TOS prohibit research for good reason. For example, YikYak is trying to abide by users’ expectations of ephemerality and privacy. People participate in online activity with a reasonable expectation that the TOS are rules that will be followed, and they rely on that in deciding what to share. Is it fair to me if my content suddenly shows up in your research study, when that’s clearly prohibited by the TOS? Do we really trust individual researchers to decide when breaking TOS is justified, with no outside review? When I have a tricky research protocol, I want review. Just letting each researcher decide for themselves makes no sense. The situation is a mess.

Legal change is the real remedy here—such as passing Aaron’s Law, and possibly an exemption from TOS for researchers (in cases where user rights are scrupulously protected).

Do you have a better solution?  I hope so. Leave me a comment!

Do Researchers Need to Abide by Terms of Service (TOS)? An Answer.

February 26, 2016 8 comments

The TL;DR: Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law. No one should deliberately break the law without being aware of potential consequences.

Do social computing researchers need to follow Terms of Service (TOS)? Confusion on this issue has created an ethical mess. Some researchers choose not to do a particular piece of work because they believe they can’t violate TOS, and then another researcher goes and does that same study and gets it published with no objections from reviewers. Does this make sense?

It happened to me recently. The social app YikYak is based in Atlanta, so I’m pretty sure my students and I started exploring it before anyone else in the field. But we quickly realized that the Terms of Service prohibit scraping the site, so we didn’t do it. We talked with YikYak and started negotiations about getting permission to scrape, and we might’ve gotten permission if we had been more persistent. But I felt like I was bothering extremely busy startup employees with something not on their critical path, so we quietly dropped our inquiries. Two years later, someone from another university published a paper about YikYak much like the one we wanted to write, using scraped data. This kind of thing happens all the time, and it isn’t fair.

There are sometimes good reasons why companies prohibit scraping. For example, YikYak users have a reasonable expectation that their postings are ephemeral. Having them show up in a research paper is not what they expect. Sometimes a company puts up research prohibitions because they’re trying to protect their users. Can a University IRB ever allow research that is prohibited in a site’s TOS?

Asking permission of a company to collect data is sometimes successful.  Some companies have shared huge datasets with researchers, and learned great things from the results. It can be a win-win situation. If it’s possible to request and obtain permission, that’s a great option—if the corporation doesn’t request control over what is said about their site in return. The tricky question is whether researchers will be less than honest in publishing findings in these situations, because they fear not getting more data access.

Members of the research community right now are confused about whether they need to abide by TOS. Reviewers are confused about whether this issue is in their purview—should they consider whether a paper abides by TOS in the review process? Beyond these confusions lurks an arguably more important issue: What happens when a risk-averse company prohibits reasonable research? This is of critical importance because scholars cannot cede control of how we understand our world to corporate interests.

Corporations are highly motivated to control what is said about them in public. When I was co-chair of CSCW in 2013, USB sticks of the proceedings were in production when I received an email from BigCompany asking that a publication by one of their summer interns be removed. I replied, “It’s been accepted for publication, and we’re already producing them.” They said, “We’ll pay to have them remade.” I said, “I need to get this request from the author of the paper, not company staff.” They agreed, and a few hours later a very sheepish former summer intern at BigCompany emailed me requesting that his paper be withdrawn. He had submitted it without internal approval. ACM was eager to make BigCompany happy, so it was removed. BigCompany says it was withdrawn because it didn’t meet their standards for quality research—and that’s probably true. But it’s also true that the paper was critical of BigCompany, and they want to control messaging about them. Companies do have a right to control publication by their employees, and the intern was out of line. Of course companies want to control what their employees say about them in public—that makes sense. But what are the broader implications if they also prohibit outside analysis?

If online sites both control what their employees say and prohibit independent analysis, how do we study their impact on the world? Can anyone criticize them? Can anyone help them to see how their design choices reshape our culture, and reshape people’s lives?

From this perspective, one might argue that it can be ethical to violate Terms of Service. But it’s still not legal. TOS can be legally binding, and simply clicking through them is typically interpreted as assent. They are rarely enforced, but when they are, the consequences can be devastating, as the sad death of Internet pioneer Aaron Swartz shows. Swartz was accused of little more than violating TOS, and charged with multiple felonies under the Computer Fraud and Abuse Act (CFAA). The stress of his prosecution led to his suicide.

For another example, my colleague Brian Larson pointed me to the case Palmer v. KlearGear. John Palmer and Jennifer Kulas left a negative review of KlearGear on the site Ripoff Report after the item they ordered was never sent. KlearGear tried to enforce an anti-disparagement clause in their TOS and demanded the review be taken down. Since you can’t delete reviews on Ripoff Report without paying a $2,000 fee, they declined—which set in motion a multi-year legal battle. Pending legislation in California may make such clauses illegal. However, the consequences for individuals of violating terms—even absurd terms—remain potentially high.

TOS are contracts. Larry Lessig points out that “Contracts are important. Their breach must be remedied. But American law does not typically make the breach of a contract a felony.” A proposed legal change, “Aaron’s Law,” would limit the scope of the CFAA so that breaches of contract are treated as breaches of contract rather than as felonies. Breaches of contract often carry limited penalties if no real harm is done. Researchers should keep an eye on this issue—unless Aaron’s Law passes, ridiculous penalties are still the law.

We’re in a quandary. We have compelling reasons why violating TOS is sometimes ethical, but it’s still not legal. So what are we as a field supposed to do? Here’s my answer:

If an individual chooses to take on the legal risk of violating TOS, they need to justify why. This is not something to do lightly. In publishing work that comes from data obtained by violating TOS, the violation must be clearly acknowledged and justified. Work that breaks TOS with no justification should be rejected by reviewers. Reviewers should proactively review a site’s TOS if it’s not explicitly discussed in the paper.

However, you should think carefully about doing work that violates TOS collaboratively with subordinates. Can you as faculty take this risk yourself? Sure. But faculty are in a position of power over students, who may have difficulty saying no. Senior faculty also have power over junior faculty. If a group of researchers of differing seniority wants to do this kind of work, the more senior members need to be careful that there is no implicit coercion to participate caused by the unavoidable power relations among members of the research team.

I believe we need legal change to remedy this state of affairs. Until that happens, I would encourage friends to be cautious. Someone is going to get in hot water for this—don’t let it be you.

Do I like the fact that corporations have such control over what is said about them? I do not—not at all. But legal consequences are real, and people should take on that risk only when they have good reason and know what they are doing, without any implicit coercion.

In summary, I am proposing:

Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law. Reviewers should proactively check a site’s TOS if it is not discussed in the paper.

If a team of researchers who choose to violate TOS are of different academic ranks (e.g., tenured, pre-tenure, students), then the more senior authors should seriously consider whether the more junior participants are truly consenting and not implicitly pressured.

Professional societies like ACM and IEEE should advocate for legal reform of the Computer Fraud and Abuse Act (CFAA), such as the proposed Aaron’s Law.

Do you agree? Leave me a comment!

Thanks to Casey Fiesler and Brian Larson for helpful comments on this post.

Smart Phones and Parenting

April 1, 2014 1 comment

“Don’t bother Daddy while he’s reading the newspaper” is a cultural cliché with a certain truth to it. For some parents, a few minutes of quiet are a healthy break. For others, the excuses are more continual—they would avoid their kids entirely if they could. My grandfather was in that second category. I spent a lot of time at my grandparents’ house as a child, and my grandfather was always kind and fun to be with—when I saw him. But my dominant memory is of standing in the door of his art studio and asking if I could see him—if I could come in, or he would come out—and being told, “No, Grandpa is making his art now.” Grandpa was always making his art. He worked as an architect during the week, and weekends were for working on his abstract painting and sculpture. Recently I shared this memory with my father, and he looked startled and sad—his memories of his father were identical.

Adults ignoring kids is not a new phenomenon. These days they are less often hiding behind the newspaper and more often hiding behind their smartphones. It’s the same phenomenon, but worse, because the phone is always there. Grandpa did sometimes leave his studio (for meals, at least), and the newspaper is not with you at all times. As Sherry Turkle observed in her book Alone Together, the phone is a constant temptation. It doesn’t cause adults to ignore kids. But if an adult is of a hide-in-the-art-studio inclination, it aids and abets bad behavior.

On the other hand, there are some absolutely wonderful things about smartphone-enabled parenting. When our kids (ages 8 and 10) ask questions, we look up answers. “Mom, which countries drive on the left side of the road?” “What was the biggest earthquake?” “How much sugar is in Sprite compared to fruit juice?” These are all in my recent browser history. We also look up word definitions. Recently they asked, what does “lavish” mean? How is it different from “extravagant”? Of course I know those words, but dictionary definitions make them much easier to explain. And in the process, I think about nuances I hadn’t focused on before—I learn as much from these conversations as they do. My kids and I also regularly ask Siri both serious and silly questions. (Try asking the air velocity of a swallow.) If they develop an interest in artificial intelligence, you’ll know where it came from. When the story this blog is named for took place, I looked up what a bison sounds like when we got home. Today, I would play it at the table. This is not a distraction from a family meal—quite the opposite. We have lively conversations, and the information we look up on our phones makes them more interesting for everyone.

danah boyd astutely points out that much of what we see families doing with technology is not new. There have always been parents who ignore their kids to varying degrees and use media as an excuse. There have always been parents who excitedly look things up to discuss with their kids. But in both cases, phones make it much easier, and it ends up happening more often.

The challenge for the future is finding ways to enhance the upsides and minimize the down. This will predominantly involve people learning to be more mindful in their use. However, there’s an intriguing question about whether mobile technology designers can play any meaningful role in accentuating the positive.

A Targeted Ad on Social Media that Worked!

In an imagined happy future, targeted advertising brings you what you want when you want it, alerting you to quality products and services you actually need.  It’s a win-win—the consumer is happy, the vendor is happy, and the social media sites that made the targeting possible are happy. But it’s not working quite like that yet, is it? 

Several weeks ago I went shopping for a new clock for my office and my kitchen. I made my purchases. And the next day my Facebook page was covered in clock ads. The sites I was shopping on (Amazon and Etsy) shared the fact that I was interested in clocks with Facebook (probably through “cookies”—little pieces of information that the sites I accessed stored on my computer). It’s weeks later, and I am still getting clock ads. I have never been less likely to buy a clock—I just bought two.
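The mechanism being guessed at here can be sketched in a few lines. This is a toy model only: the tracker domain and the “interests” record are hypothetical, and real retargeting runs through third-party cookies and tracking pixels set in the browser, not a shared dictionary.

```python
# Toy model of cookie-based ad retargeting. Shopping sites embed an
# ad network's tracker, which records interests in a cookie; the
# social site embeds the same tracker and reads the cookie back.
cookie_jar = {}  # browser state, keyed by tracking domain

def visit_shop(product):
    # The shop page's embedded tracker appends the viewed product
    # to the ad network's cookie.
    jar = cookie_jar.setdefault("adnetwork.example", {"interests": []})
    jar["interests"].append(product)

def load_social_feed():
    # The social site's embedded tracker reads the same cookie and
    # targets an ad at the most recent interest.
    jar = cookie_jar.get("adnetwork.example", {"interests": []})
    return f"ad: {jar['interests'][-1]}" if jar["interests"] else "ad: generic"

visit_shop("clock")
print(load_social_feed())  # -> ad: clock
# Note: nothing in the cookie records that the purchase was *completed*,
# which is one plausible reason the clock ads keep coming.
```

The gap the post complains about falls out of the model directly: the tracker sees browsing, not purchases, so “already bought it” never enters the targeting.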

Or take the case of my son’s bathing suit. We bought him a matching bathing suit and swim shirt a few months ago, and got the suit one size too small. It fit him in March, but no longer fits him now in June. Oops. The top still fits, so a couple of days ago I went on the Gymboree website to see if we could get him the bottom a size bigger. Unfortunately, they’re out of his size. Ah well. But a picture of that bathing suit is still showing up as my top ad on Facebook. I think it’s taunting me.

When you think of the data and social subtlety required to solve these problems, it seems like a daunting task. OK, the first one might not be so bad—if someone actually completes a clock purchase, maybe the system could infer that they’re no longer interested in clocks? But it’s hard to fathom how they could solve the bathing suit case. From the data trail I left, it looks like I might be interested in that suit but hesitated. The idea of a system with enough data to solve the problem is frightening: a system that knows the browsing was for my son and not for a gift, and knows my son’s correct size? I can’t always get his size right myself.

So it was with genuine appreciation that earlier this week I realized I had received a social media ad that worked—it was what I wanted, when I wanted it. PhD student Casey Fiesler posted on her Facebook page several weeks ago that she recommends the book Ready Player One. She said it was the first good cyberpunk she’d read since Snow Crash. She included a link to the book on Amazon. I had a look, and decided to buy it. It is quite possibly the geekiest piece of media in any form I have ever encountered—and I loved it. It’s a page turner.

What I didn’t realize until this week is that it was not entirely accidental that I saw it in Casey’s Facebook newsfeed. Facebook offers “promoted posts”—you can pay a few dollars to increase the chance that your friends will see something you post. If you have more than a few Facebook friends, you are likely seeing only a fraction of what your friends post. The algorithm that determines what you see and don’t see is proprietary. Did Casey pay to promote her post? Of course not. Amazon did. Amazon is paying Facebook to raise the profile of postings that include URLs to products on its site. My friend genuinely recommended that book to me—Amazon just helped make sure I saw it. Voilà—a targeted ad that made everyone happy! I hope they invent more clever techniques for win-win advertising.

Sean Munson points out that this technique results in people seeing links to Amazon products with joke reviews over and over. Some people post links to products just because the reviews are funny—like the infamous Bic for Her pen reviews, where one review has currently been found helpful by over 31,000 people. But I saw that post so many times that it became maddening. Two key points: 1) there is such a thing as over-promoting, and 2) social subtlety is hard!

The Speed and Accuracy of Wikipedia: A Family Story

May 28, 2013 8 comments

Mom, did Uncle Oscar die?

In February 2008, I called my mother to inquire about the health of my great uncle Oscar Brodney, because Wikipedia told me he had passed away. Uncle Oscar was a Hollywood screenwriter. He wrote the screenplays for The Glenn Miller Story (for which he was nominated for an Academy Award), Abbott and Costello’s Mexican Hayride, Harvey, and many more. In June 2007, I updated his Wikipedia page to say “Brodney still lives in Hollywood, California and celebrated his 100th birthday in February 2007.” Actually, he was in Beverly Hills, California—as someone else quickly corrected.

Editing Oscar’s page put it on my watchlist. Wikipedia editors each have a list of pages they’re interested in, so they can monitor changes. Anything you edit is automatically added to your watchlist. That’s one way quality is maintained. On February 16th, 2008, I checked my watchlist and saw that someone had updated Oscar’s page. It now said:

Brodney passed quietly in his sleep on February 12, 2008 in Playa del Rey, CA.

He did?  That was news to me.  So I called my mother. 

Me: Mom, did Uncle Oscar die?

Mom: I don’t think so, but let me call Betty.

My great aunt Betty is Oscar’s youngest sister.  Mom called Betty and asked if Oscar had died.  Betty said, “I don’t think so… But let me check my email.”  Betty checked her email, and sure enough there was a message waiting for her from a few days earlier saying her brother had passed away.  Oscar’s closest living relative learned of his death via my Wikipedia watchlist.

The edit to Oscar’s page was made the day after his death by an anonymous user—someone who didn’t even log in. It wasn’t made by a family member, as far as I’ve been able to determine. The IP address of the anonymous user was apparently from Las Vegas, Nevada. Oscar lived in a nursing home for the last few months of his life, and the specific detail about the manner and place of death makes me wonder if the anonymous editor was someone who worked at the home, or a friend of someone who did. We’ll probably never know. (If you made that edit, please email me! I’d love to know who you are and how you knew.)

However, the story doesn’t stop there. No one placed an obituary for Oscar in Variety or any other newspaper. He was almost 101 years old, and most people who would have cared were long gone. So a careful Wikipedia editor undid the edit. In accordance with Wikipedia’s policy on Biographies of Living Persons, declaring someone dead is serious business. You can’t do it without proof. I replied on the article’s talk page (each Wikipedia article has a place for editors to discuss it), saying:

I have confirmed that the information about Brodney’s death is correct from a primary source (his sister). Can we redo this?

Another editor replied back,

Per WP:OR and WP:BLP, we need an independent, third party reliable source to report a death. Is there a news article anywhere?

I couldn’t find a newspaper ad or public notice anywhere, so for months Oscar stayed undead—not dead on Wikipedia, I mean. Until in July a kind Wikipedia editor noticed that his name had appeared in the Social Security Administration’s death records, and Oscar was finally allowed to officially rest in peace.

Two things strike me as remarkable about this story.  The first is the speed and power of Wikipedia’s social network.  My network of strong ties failed to get this news to me in a timely fashion. Wikipedia’s global network routed around that blockage through an anonymous person.

Second, Wikipedia’s commitment to verification is remarkable for its tenacity in certain areas. As I’ve written before, a high-profile page (like that of a current world leader) is scrutinized in every detail. In less popular pages (like the page for Oscar Brodney), errors can creep in. But even on a low-profile page, editors are incredibly careful about certain things, and deaths are one of them. You don’t go around declaring people dead without proof. The editor who undid the change to Oscar’s page was right—how did we really know he had passed away? We needed proof. And luckily another Wikipedia editor knew how to find acceptable proof when I did not.

A “socio-technical system” is a combination of people, artifacts (in this case the MediaWiki software that Wikipedia runs on), and social practices.  And in this example, all those parts worked together in a remarkable way.  Oscar would have approved.
