Archive for the ‘social implications’ Category

Pokémon Go and Work/Life Balance

January 2, 2017

I love casual games, though I’ve written before about how they can sometimes be disruptive. Surprisingly, I do not find Pokémon Go particularly disruptive. As promised, it promotes walking (the farther you walk, the more credit you earn toward hatching Pokémon eggs). And it has other interesting qualities I could not have predicted when I started playing it.

Most importantly and most surprising: It promotes better work/life balance. When I am out for a walk, if I have Pokémon Go open, I get credit for the distance walked. As a result, I tend to leave the app open, which means I don’t check my email. That means I am more truly not at work, for my walk.

The bad news of course is that I’m looking at my phone, rather than at the scenery. But generally speaking I find I still appreciate where I am, and enjoy chatting with people I am walking with. It takes little of my attention.

If I am playing while walking with people who are not playing, I never stop to do a gym battle. A gym battle takes a couple of minutes, and that’s too long to ask friends to wait. It’s also important to leave the sound off around other people (most people leave it off all the time). I leave it on when I’m walking alone, because the audio feedback means I spend less time looking at my phone. After you throw a Pokéball, it takes a few seconds to see whether you caught the Pokémon. If you listen for the sound effects, you can stop looking at the phone and hear whether you caught it. But if I’m walking with other people, the sound is annoying, and also misleading—they assume I’m more distracted than I really am.

My second surprise: it is a participatory exploration of probability and economics. Probability is fundamental to the game—each time you try to catch a Pokémon, a circle around it shows whether you have a high (green), medium (yellow), or low (red) chance of catching it. A player is constantly calculating: How hard will this be to catch, and is it worth it? It’s a constant reminder of one of the basic laws of probability: past trials don’t affect the outcome of the next one.
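The independence point can be checked with a tiny simulation. This is not the game’s real mechanics (the 30% catch rate below is a number I made up for illustration), but it shows that the chance the next throw succeeds doesn’t depend on how many throws have already missed:

```python
import random

# Simulate throws that each succeed independently with probability p.
# The catch rate p = 0.3 is a made-up illustrative number, not the
# game's actual mechanics.
random.seed(0)
p = 0.3
trials = 100_000

successes_on_third = 0   # encounters where the third throw caught it
two_miss_starts = 0      # encounters where the first two throws missed

for _ in range(trials):
    throws = [random.random() < p for _ in range(3)]
    if not throws[0] and not throws[1]:
        two_miss_starts += 1
        if throws[2]:
            successes_on_third += 1

# The conditional success rate after two misses is still about p.
print(successes_on_third / two_miss_starts)  # ≈ 0.3
```

Whatever happened on the earlier throws, the next one is a fresh draw, which is exactly why “I’ve missed five times, so I’m due” is a fallacy.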

When you try to catch a Pokémon, you have to decide: Am I going to throw a regular Pokéball, a great ball, or an ultra ball? The latter are increasingly rare, but have a higher catch rate. The more powerful the Pokémon, the harder it is to catch, and the higher quality Pokéball you need to use. If I use too cheap a ball, then I have to try again, and again—and might miss catching it entirely, if it runs away. Choosing to use a regular Pokéball might mean I wasted five or more balls, rather than using one or two great balls. It’s like the game is whispering in my ear over and over: don’t be cheap, don’t be cheap….
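A rough way to see the “don’t be cheap” logic: if each throw succeeds independently with probability p, the number of throws needed follows a geometric distribution with mean 1/p. The catch rates below are hypothetical numbers chosen for illustration; the game doesn’t expose its exact values.

```python
def expected_balls(p: float) -> float:
    """Expected number of throws until a catch, assuming each throw
    succeeds independently with probability p (geometric mean 1/p)."""
    return 1.0 / p

# Hypothetical catch rates for a strong Pokémon (illustrative only):
# a regular Pokéball at 15% versus a great ball at 40%.
regular_needed = expected_balls(0.15)  # about 6.7 regular balls on average
great_needed = expected_balls(0.40)    # 2.5 great balls on average

print(f"regular balls: {regular_needed:.1f}, great balls: {great_needed:.1f}")
```

Under these made-up numbers, being cheap costs nearly seven regular balls on average where two or three great balls would do, and that’s before accounting for the chance the Pokémon runs away mid-attempt.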

An economist friend noticed right after the launch of the game that it demonstrates the “sunk cost fallacy”: If it was worth throwing those previous six Pokéballs at that Pidgey, it’s worth throwing one more.

Pokémon Go is good for certain times and places. It’s great for travel, because different places have different Pokémon. It was fun to catch all the Growlithes in San Diego (a Pokémon common there and rare in most other cities). It was particularly fun to use at the San Diego Zoo Safari Park, which had a safari-like abundance of Pokémon on the day I visited. When you’re visiting a zoo, you do a lot of walking, wandering from exhibit to exhibit. Playing Pokémon Go at the zoo made the whole experience more fun. When a rare Pokémon (a Snorlax) appeared on the radar, I got to chat with strangers who came from around the zoo to try to catch it. On the other hand, it was also nice to go a number of places (like the lighthouse and beach at Point Loma in San Diego) where there was no cell service and I put my phone away. The trick, of course, is knowing when to put your phone away while there still is cellular service.

I won’t lie—I do sometimes play when I shouldn’t, particularly when I’m somewhere I don’t want to be. A Pokéstop is a place where you can get free Pokéballs and other useful items every five minutes. Fortunately or unfortunately, there is a Pokéstop accessible from a conference room where I have a number of boring meetings. In a long meeting, I find a casual game is such a light distraction that I can still follow the discussion, and I’m less likely to zone out entirely. But it’s perceived by others as disrespectful (if they catch me with my phone under the table), and I probably shouldn’t do it. Like any casual game, Pokémon Go requires mindfulness about when you choose to play.

Whether Pokémon Go survives in the long or even medium term depends on whether the developers can keep adding features and special events to keep it interesting. But for now, it’s a casual game that fits into my life better than others.

Why LinkedIn is Creepy: Asymmetry of Visibility

February 3, 2012

A friend recently shared this story: he was having trouble finding contact information for an old friend, and it occurred to him that his ex-girlfriend would still be in touch with that friend. So he looked at his ex’s LinkedIn page to search through her list of contacts.  It turns out, though, that his ex has a premium LinkedIn account, which shows you a list of everyone who has looked at your profile.  She contacted him: “I see you were looking me up….”  This was NOT what he wanted. I suppose if Shakespeare were writing today, this would be prime material for a modern Comedy of Errors.

What is uncomfortable about this situation is the asymmetry of visibility and awareness. She has a premium account, and can see more. He does not, and was until then unaware that anyone had the ability to track profile views. It’s like a hidden surveillance camera. Principles of social translucence suggest that mutual visibility facilitates successful cooperative behavior. One-way mirrors are creepy.

Social Translucence and Internet Parenting

January 10, 2012

Part of what makes putting the computer in the family room work well is that it has a degree of “social translucence.”  Tom Erickson and Wendy Kellogg write:

We begin by asking what properties of the physical world support graceful human-human communication in face-to-face situations, and argue that it is possible to design digital systems that support coherent behavior by making participants and their activities visible to one another. We call such systems “socially translucent systems” and suggest that they have three characteristics—visibility, awareness, and accountability—which enable people to draw upon their experience and expertise to structure their interactions with one another.

When I walk through the dining room where our computer is located, I can’t see what my son is typing unless I come uncomfortably close. And that would feel rude, so I generally don’t. But if he’s looking at images or videos, I can see them at a distance. The physical properties of the space afford greater privacy for text than for other media.  No one planned it that way, but it’s a pretty strategic setup when you think about it. I can quickly get a sense of the general sort of thing he’s doing, but the details typically remain more private.

It works the other way around too–I use the same computer, and my kids are aware of what I’m doing on it. If the one who is old enough to read is watching, I intuitively know when he’s close enough to actually read the words on my screen and when he’s not.  It’s quite striking how detailed these affordances are–what they allow and what they don’t is complicated.  The use of the physical properties of the space to maintain a mixture of privacy and mutual awareness is social translucence.

I got some interesting responses to my last post. People definitely are comfortable at different positions on the spectrum from trusting kids to monitoring them.  Kids and teens are continually facing new challenges, and at any given time there are some they are ready for and some they are not.  Parents need to let them experiment and make mistakes–but not mistakes with tragic or irreversible consequences.  It’s a delicate balance. But wherever you are on the spectrum from “protect them” to “let them learn from their mistakes,” I think there’s one thing we all can agree on: we need more socially translucent solutions to Internet parenting.

For me, I want to know if my kid is online at 4 am. I want to know if he’s being bullied, or bullying others.  I want to know that he’s using good judgement in the kind of content he accesses.  I want to know if there’s something else I should be worried about–is there something parents should watch out for that I don’t even know about?  But beyond all that, I don’t need to see the details of exactly what he’s saying to his friends or doing online.  The interesting question is: could a tool be designed to help?  What would a socially translucent tool for parenting your kids’ Internet use look like?  It’s a tremendously hard design problem–particularly if you hope to create mutual awareness among people rather than an algorithm that tries to operationalize values (a task which Internet filters attempt, and largely fail). But if such a design was successful, it would be a win for both kids’ privacy and effective parenting.

(For more on this topic, see Social and Technical Challenges in Parenting Teens’ Social Media Use by Sarita Yardi.)

Artifacts Have Politics. Now What?

September 23, 2011

I’ve read Langdon Winner’s essay “Do Artifacts Have Politics?” a dozen or more times. I first read it in grad school in the 1990s, and now I assign it in… well, almost every class I teach.  Winner shows that some artifacts have deliberate politics. The highway overpasses around New York City were deliberately designed to keep poor people (especially non-whites) away from the beaches.  We know this because their designer, Robert Moses, said so.  Other artifacts (like nuclear versus solar power) are not necessarily intentionally political, but lend themselves to certain kinds of power arrangements.

I once attended a lecture Winner gave, and during the question period asked him: “OK, I’m an engineer and I accept everything you say. What would you like my peers and me to do differently?” He didn’t really have an answer. I guess it’s kind of a hard question.  And I’m still pondering it myself.  I suppose “be mindful” is one straightforward answer, but the details matter–and the details in practice aren’t obvious.

When I teach the paper, I often use face recognition technology as a discussion topic. If you could invent perfect face recognition, would you?  If, for example, you could set up a camera at every convenience store and gas station in the nation that would reliably identify bomber Eric Rudolph while he was on the loose, would you? Are the implications different if, as is inevitable, the technology has an error rate? This leads to a discussion of the checks and balances we have in US law and whether we really trust the government to honor them in practice. If we decide to trust the government and work within the system to make sure the limits are respected, that leads to a scarier question: What about use of this technology by totalitarian regimes in other nations?  If you invented it, wouldn’t they eventually get access too?  Is the inventor responsible for all of a technology’s eventual uses?  Knowing this, would you want to be the inventor or not?  It’s reasonable to say you’ll invent it and try to stay involved in the broader sociopolitical context of its use in practice–maybe that’s really all you can do. But you also need to recognize that you will sometimes lose control of what happens next.  Which raises the question: is there any technology that, given you will likely eventually lose control of its uses, you would decline to invent?  It’s easy to come up with an absurd example where the answer is yes (Marvin the Martian’s “destroy earth” button comes to mind). It’s harder to come up with heuristics for when something less extreme might fit that description.

I’ve had this discussion with classes over and over. It’s a great conversation–it gets students thinking.  And the discussion time and time again has followed the same path–until yesterday.  In my “Intro to Human-Centered Computing” class yesterday, master’s student Vincent Martin commented, “I need face recognition technology. I would love to be able to recognize my friends and family again.”  Vincent is blind.  I’m surprised that this obvious application doesn’t come up in conversation every time we discuss the issue.  I’ll make sure it comes up in the future.  If you were at all leaning towards refraining from developing face recognition technology, I hope this would change your mind.  For every basic technology we develop, how many hidden surprises–uses for both good and ill that we can’t anticipate–are there?

OK, artifacts have politics. What does this mean for us as designers and engineers? Beyond high-level platitudes like “be mindful,” what should we do differently? I’m still wondering.
