Posts Tagged ‘social computing’

Pokémon Go and Work/Life Balance

January 2, 2017

I love casual games, though I’ve written before about how they can sometimes be disruptive. Surprisingly, I do not find Pokémon Go particularly disruptive. As promised, it promotes walking (the distance you walk counts toward hatching Pokémon eggs). And it has other interesting qualities I could not have predicted when I started playing it.

Most importantly and most surprising: It promotes better work/life balance. When I am out for a walk, if I have Pokémon Go open, I get credit for the distance walked. As a result, I tend to leave the app open, which means I don’t check my email. That means I am more truly not at work, for my walk.

The bad news of course is that I’m looking at my phone, rather than at the scenery. But generally speaking I find I still appreciate where I am, and enjoy chatting with people I am walking with. It takes little of my attention.

If I am playing while walking with people who are not playing, I never stop to do a gym battle. A gym battle takes a couple of minutes, and that’s too long to ask friends to wait. It’s also important to leave the sound off around others. Most people leave the sound off all the time, but I leave it on when I’m walking alone, because the audio feedback means I spend less time looking at my phone. After you throw a Pokéball, it takes a few seconds to see whether you caught the Pokémon. If you listen to the sound effects, you can stop looking at the phone and listen for whether you caught it. But if I’m walking with other people, the sound is annoying, and also misleading—they assume I’m more distracted than I really am.

My second surprise: it is a participatory exploration of probability and economics. Probability is fundamental to the game—each time you try to catch a Pokémon, a circle around it shows whether you have a high (green), medium (yellow), or low (red) chance of catching it. A player is constantly calculating: how hard will this be to catch, and is it worth it? It’s a constant reminder of one of the basic laws of probability: past trials don’t affect the outcome of the next one.

When you try to catch a Pokémon, you have to decide: am I going to throw a regular Pokéball, a great ball, or an ultra ball? The better balls are increasingly rare, but have higher catch rates. The more powerful the Pokémon, the harder it is to catch, and the higher-quality Pokéball you need to use. If I use too cheap a ball, then I have to try again, and again—and might miss catching it entirely, if it runs away. Choosing a regular Pokéball might mean I waste five or more balls, rather than using one or two great balls. It’s like the game is whispering in my ear over and over: don’t be cheap, don’t be cheap….

An economist friend noticed right after the launch of the game that it demonstrates the “sunk cost fallacy”: If it was worth throwing those previous six Pokéballs at that Pidgey, it’s worth throwing one more.
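Both points can be made concrete in a short simulation. The per-throw catch probabilities below are made-up numbers for illustration (not the game’s actual rates), and the sketch ignores the chance that the Pokémon flees. With a fixed catch chance per throw, cheaper balls cost more throws on average, and the six balls already spent on a Pidgey tell you nothing about how many more you’ll need:

```python
import random

def throws_to_catch(p, rng):
    """Number of throws until a catch when each throw succeeds with
    probability p. Throws are independent trials, so this is a geometric draw."""
    n = 1
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(42)
TRIALS = 100_000

# Illustrative, invented per-throw catch probabilities.
p_regular, p_great = 0.2, 0.5

avg_regular = sum(throws_to_catch(p_regular, rng) for _ in range(TRIALS)) / TRIALS
avg_great = sum(throws_to_catch(p_great, rng) for _ in range(TRIALS)) / TRIALS
print(f"average regular balls per catch: {avg_regular:.2f}")  # near 1/0.2 = 5
print(f"average great balls per catch:   {avg_great:.2f}")    # near 1/0.5 = 2

# Memorylessness (the sunk-cost point): among attempts that survive six
# misses, the expected number of *additional* throws is still about 5.
runs = [throws_to_catch(p_regular, rng) for _ in range(3 * TRIALS)]
extra_after_six = [n - 6 for n in runs if n > 6]
avg_extra = sum(extra_after_six) / len(extra_after_six)
print(f"expected extra throws after six misses: {avg_extra:.2f}")  # also near 5
```

The six balls already thrown are gone either way; the only rational question is whether the Pidgey is worth roughly five more regular balls, starting fresh.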

Pokémon Go is good for certain times and places. It’s great for travel, because different places have different Pokémon. It was fun to catch all the Growlithes in San Diego (a Pokémon common there and rare in most other cities). It was particularly fun to use at the San Diego Zoo Safari Park, which had a safari-like quality of Pokémon on the day I visited. When you’re visiting a zoo, you do a lot of walking, and wander from exhibit to exhibit. Playing Pokémon Go at the zoo made the whole experience more fun. When a rare Pokémon (a Snorlax) appeared on the radar, I got to chat with strangers from around the zoo who came to try to catch it. On the other hand, it was also nice to go to a number of places (like the lighthouse and beach at Point Loma in San Diego) where there was no cell service, and I put my phone away. The trick, of course, is knowing when to put your phone away when there still is cellular service.

I won’t lie—I do sometimes play when I shouldn’t, particularly when I’m somewhere I don’t want to be. A Pokéstop is a place where you can get free Pokéballs and other useful items every five minutes. Fortunately or unfortunately, there is a Pokéstop accessible from a conference room where I have a number of boring meetings. In a long meeting, I find that playing a casual game helps me pay more attention: the distraction is so light that I can still follow the discussion, and I am less likely to zone out entirely. But it’s perceived by others as disrespectful (if they catch me with my phone under the table), and I probably shouldn’t do it. Like any casual game, Pokémon Go requires mindfulness about when you choose to play.

Whether Pokémon Go survives in the long or even medium term depends on whether the developers can keep adding features and special events to keep it interesting. But for now, it’s a casual game that fits into my life better than others.

Naturalistic Inquiry and Silk Purses

September 22, 2016

For me as a social computing researcher, some of the most insightful and useful parts of the research literature are qualitative sociology written thirty to fifty years ago. For example, The Great Good Place by Ray Oldenburg (1989) documents how people need a “third place”—one that is neither work nor home. Within the third places Oldenburg studied, he observed timeless patterns of human behavior like the role of “the regulars” and how people greet a newcomer. Or consider The Presentation of Self in Everyday Life by Erving Goffman, originally published in 1959. Goffman documents how people are always performing for one another, and the impressions they deliberately give may differ from the impressions given off (what people actually infer from someone’s performance). How could we even begin to understand social behavior on sites like Facebook and Twitter without Goffman as a foundation?

When I think about this style of research, I often think about how it would be reviewed if it were submitted for publication today. No doubt Goffman would be told his work was insufficiently scientific—how do you really know there are “impressions given off”? Can you measure them? I doubt he’d make it through the review process.

How do we know that Goffman’s observations are “true”? Honestly, we don’t. The “proof” is in how useful generations of researchers who followed have found his observations. Goffman took messy qualitative data and used it as a touchstone for his own powers of inference and synthesis. His observations aren’t in the data; they emerged from his examination of it. It’s probably impossible to make a firm statement about how valuable this type of work is at the time of writing—the evidence is in the value that others later find in building on it.

Contemporary researchers are still working in this style. I call writing in this mode “silk purse papers.” They say you can’t make a silk purse from a sow’s ear, except sometimes you can. Insights emerge from messy data in an act of interpretation. The first paper of my own that I would put in this category is my paper “Situated Support for Learning: Storm’s Weekend with Rachael,” published in Journal of the Learning Sciences in 2000. In it, I take a case study of a weekend in which one girl taught another to program and draw broader conclusions about the nature of support for learning—support for learning is more effective when it is richly connected to other elements of the learning environment. I can’t prove to you that those observations are valid, but I can say that others have found them useful.

I’m always vaguely embarrassed when I’m in the middle of writing a silk purse paper. A tiny little voice in my head nags, “You’re making stuff up. You could use this data to tell a dozen different stories. You’re saying what you want to say and pretending it’s science.” But then I remember Goffman, and try to cheer up. It may not be “science,” but it’s certainly a valid mode of naturalistic inquiry. Take a second and think about foundational writings in your area of expertise. How many of those papers are silk purses?

In fact, I wonder if we are all always making silk purses. Just because a paper is quantitative does not make it any less interpretive. My colleagues and I just finished a complicated analysis of content people post on Facebook publicly versus to friends only (to appear in CSCW 2017). It’s totally quantitative work. But what we chose to measure and not measure, and how we explain what it means, strike me as being just as much acts of interpretation (to be evaluated by the passage of time) as the complicated stories we tell with qualitative data.

Where we as a field get into trouble is when people with naïve ideas about “science” start reviewing qualitative work, and get it wrong. Sociology of science and technology should be featured in more graduate programs in human-computer interaction and human-centered computing. And as a field we need to have a more nuanced conversation about interpretive work, and how to review it.

More on TOS: Maybe Documenting Intent Is Not So Smart

February 29, 2016

In my last post, I wrote that “Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law.” My friend Mako Hill (University of Washington) pointed out to me (quite sensibly) that that would get people in more trouble–it asks people to document their intent to break the TOS. He’s right. If we believe that under some circumstances breaking TOS is ethical, then requiring researchers to document their intent is not strategic.

This leaves us in an untenable situation. We can’t engage in formal review of whether a given TOS violation is justified, because that would make potential legal problems worse. Of course we can encourage individuals to be mindful and not break TOS without good reason. But is that good enough?

Sometimes TOS prohibit research for good reason. For example, YikYak is trying to abide by users’ expectations of ephemerality and privacy. People participate in online activity with a reasonable expectation that the TOS are rules that will be followed, and they rely on that in deciding what to share. Is it fair to me if my content suddenly shows up in your research study, when that’s clearly prohibited by the TOS? Do we really trust individual researchers to decide, with no outside review, when breaking TOS is justified? When I have a tricky research protocol, I want review. Just letting each researcher decide for themselves makes no sense. The situation is a mess.

Legal change is the real remedy here–such as passing Aaron’s Law, and also possibly an exemption from TOS for researchers (in cases where user rights are scrupulously protected).

Do you have a better solution?  I hope so. Leave me a comment!

Do Researchers Need to Abide by Terms of Service (TOS)? An Answer.

February 26, 2016

The TL;DR: Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law. No one should deliberately break the law without being aware of potential consequences.

Do social computing researchers need to follow Terms of Service (TOS)? Confusion on this issue has created an ethical mess. Some researchers choose not to do a particular piece of work because they believe they can’t violate TOS, and then another researcher goes and does that same study and gets it published with no objections from reviewers. Does this make sense?

It happened to me recently. The social app YikYak is based in Atlanta, so I’m pretty sure my students and I started exploring it before anyone else in the field. But we quickly realized that the Terms of Service prohibit scraping the site, so we didn’t do it. We talked with YikYak and started negotiations about getting permission to scrape, and we might’ve gotten permission if we had been more persistent. But I felt like I was bothering extremely busy startup employees with something not on their critical path, so we quietly dropped our inquiries. Two years later, someone from another university published a paper about YikYak much like the one we wanted to write, using scraped data. This kind of thing happens all the time, and it isn’t fair.

There are sometimes good reasons why companies prohibit scraping. For example, YikYak users have a reasonable expectation that their postings are ephemeral. Having them show up in a research paper is not what they expect. Sometimes a company puts up research prohibitions because they’re trying to protect their users. Can a University IRB ever allow research that is prohibited in a site’s TOS?

Asking permission of a company to collect data is sometimes successful.  Some companies have shared huge datasets with researchers, and learned great things from the results. It can be a win-win situation. If it’s possible to request and obtain permission, that’s a great option—if the corporation doesn’t request control over what is said about their site in return. The tricky question is whether researchers will be less than honest in publishing findings in these situations, because they fear not getting more data access.

Members of the research community right now are confused about whether they need to abide by TOS. Reviewers are confused about whether this issue is in their purview—should they consider whether a paper abides by TOS in the review process? Beyond these confusions lurks an arguably more important issue: What happens when a risk-averse company prohibits reasonable research? This is of critical importance because scholars cannot cede control of how we understand our world to corporate interests.

Corporations are highly motivated to control what is said about them in public. When I was co-chair of CSCW in 2013, USB sticks of the proceedings were in production when I received an email from BigCompany asking that a publication by one of their summer interns be removed. I replied, “It’s been accepted for publication, and we’re already producing them.” They said, “We’ll pay to have them remade.” I said, “I need to get this request from the author of the paper, not company staff.” They agreed, and a few hours later a very sheepish former summer intern at BigCompany emailed me requesting that his paper be withdrawn. He had submitted it without internal approval. ACM was eager to make BigCompany happy, so it was removed. BigCompany says it was withdrawn because it didn’t meet their standards for quality research—and that’s probably true. But it’s also true that the paper was critical of BigCompany, and they want to control messaging about them. Companies do have a right to control publication by their employees, and the intern was out of line. Of course companies want to control what their employees say about them in public—that makes sense. But what are the broader implications if they also prohibit outside analysis?

If online sites both control what their employees say and prohibit independent analysis, how do we study their impact on the world? Can anyone criticize them? Can anyone help them to see how their design choices reshape our culture, and reshape people’s lives?

From this perspective, one might argue that it can be ethical to violate Terms of Service. But it’s still not legal. TOS can be legally binding, and simply clicking through them is typically interpreted as assent. They are rarely enforced, but when they are, the consequences can be devastating, as the sad death of Internet pioneer Aaron Swartz shows. Swartz was accused of little more than violating TOS, and charged with multiple felonies under the Computer Fraud and Abuse Act (CFAA). The stress of his prosecution led to his suicide.

For another example, my colleague Brian Larson pointed me to the case Palmer v. KlearGear. John Palmer and Jennifer Kulas left a negative review of KlearGear on the site Ripoff Report after the item they ordered was never sent. KlearGear tried to enforce an anti-disparagement clause in their TOS and demanded the review be taken down. Since you can’t delete reviews on Ripoff Report without paying a $2000 fee, they declined, which set in motion a multi-year legal battle. Pending legislation in California may make such clauses illegal. However, the consequences for individuals of violating terms—even absurd terms—remain potentially high.

TOS are contracts. Larry Lessig points out that “Contracts are important. Their breach must be remedied. But American law does not typically make the breach of a contract a felony.” A proposed legal change, “Aaron’s Law,” would limit the scope of the CFAA so that breaches of contract are treated as breaches of contract rather than as felonies. Breaches of contract often carry limited penalties if no real harm is done. Researchers should keep an eye on this issue—unless Aaron’s Law passes, ridiculous penalties are still the law.

We’re in a quandary. We have compelling reasons why violating TOS is sometimes ethical, but it’s still not legal. So what are we as a field supposed to do? Here’s my answer:

If an individual chooses to take on the legal risk of violating TOS, they need to justify why. This is not something to do lightly. Work published using data obtained by violating TOS must clearly acknowledge and justify that choice, and work that breaks TOS with no justification should be rejected by reviewers. Reviewers should proactively check a site’s TOS if it is not explicitly discussed in the paper.

However, you should think carefully about doing work that violates TOS collaboratively with subordinates. Can you as faculty take this risk yourself? Sure. But faculty are in a position of power over students, who may have difficulty saying no, and senior faculty have power over junior faculty. If a group of researchers of differing seniority wants to do this kind of work, the more senior members need to make sure there is no implicit coercion to participate caused by the unavoidable power relations within the research team.

I believe we need legal change to remedy this state of affairs. Until that happens, I would encourage friends to be cautious. Someone is going to get in hot water for this—don’t let it be you.

Do I like the fact that corporations have such control over what is said about them? I do not—not at all. But legal consequences are real, and people should take on that risk only when they have good reason and know what they are doing, without any implicit coercion.

In summary, I am proposing:

Reviewers should reject research done by violating Terms of Service (TOS), unless the paper contains a clear ethical justification for why the importance of the work justifies breaking the law. Reviewers should proactively check a site’s TOS if it is not discussed in the paper.

If a team of researchers who choose to violate TOS spans different academic ranks (e.g., tenured, pre-tenure, students), then the more senior authors should seriously consider whether the more junior participants are truly consenting and not implicitly pressured.

Professional societies like ACM and IEEE should advocate for legal reform of the Computer Fraud and Abuse Act (CFAA), such as the proposed Aaron’s Law.

Do you agree? Leave me a comment!

Thanks to Casey Fiesler and Brian Larson for helpful comments on this post.

Smart Phones and Parenting

April 1, 2014

“Don’t bother Daddy while he’s reading the newspaper” is a cultural cliché with a certain truth to it. For some parents, a few minutes of quiet are a healthy break. For others, the excuses are more continual—they would avoid their kids entirely if they could. My grandfather was in that second category. I spent a lot of time at my grandparents’ house as a child, and my grandfather was always kind and fun to be with—when I saw him. But my dominant memory is of standing in the door of his art studio and asking if I could see him—if I could come in, or he would come out—and being told, “No, Grandpa is making his art now.” Grandpa was always making his art. He worked as an architect during the week, and weekends were for working on his abstract painting and sculpture. Recently I shared this memory with my father, and he looked startled and sad—his memories of his father were identical.

Adults ignoring kids is not a new phenomenon. These days they are less often hiding behind the newspaper and more often hiding behind their smartphone. It’s the same phenomenon, but worse, because the phone is always there. Grandpa did sometimes leave his studio (for meals, at least), and the newspaper is not with you at all times. As Sherry Turkle observed in her book Alone Together, the phone is a constant temptation. It doesn’t cause adults to ignore kids. But if an adult is of a hide-in-the-art-studio inclination, it aids and abets bad behavior.

On the other hand, there are some absolutely wonderful things about smartphone-enabled parenting. When our kids (ages 8 and 10) ask questions, we look up answers. “Mom, which countries drive on the left side of the road?” “What was the biggest earthquake?” “How much sugar is in Sprite compared to fruit juice?” These are all in my recent browser history. We also look up word definitions. Recently they asked: What does “lavish” mean? How is it different from “extravagant”? Of course I know those words, but dictionary definitions make it much easier to explain. And in the process, I think about nuances I hadn’t focused on before—I learn as much from these conversations as they do. My kids and I also regularly ask both serious and silly questions of Siri. (Try asking the air velocity of a swallow.) If they develop an interest in artificial intelligence, you’ll know where it came from. When the story this blog is named for took place, I looked up what a bison sounds like when we got home. Today, I would play it at the table. This is not a distraction from a family meal—quite the opposite. We have lively conversations, and the information we look up on our phones makes them more interesting for everyone.

Danah boyd astutely points out that much of what we see families doing with technology is not new. There have always been parents who ignore their kids to varying degrees, and use media as an excuse. There have always been parents who excitedly look things up to discuss with their kids. But in both cases, phones make it much easier, and it ends up happening more often.

The challenge for the future is finding ways to enhance the upsides and minimize the down. This will predominantly involve people learning to be more mindful in their use. However, there’s an intriguing question about whether mobile technology designers can play any meaningful role in accentuating the positive.

A Targeted Ad on Social Media that Worked!

In an imagined happy future, targeted advertising brings you what you want when you want it, alerting you to quality products and services you actually need.  It’s a win-win—the consumer is happy, the vendor is happy, and the social media sites that made the targeting possible are happy. But it’s not working quite like that yet, is it? 

Several weeks ago I went shopping for a new clock for my office and my kitchen. I made my purchases. And the next day my Facebook page was still covered in clock ads. The sites I was shopping on (Amazon and Etsy) shared the fact that I was interested in clocks with Facebook (probably through “cookies,” little pieces of information that the sites I visited stored on my computer). It’s weeks later, and I am still getting clock ads. I have never been less likely to buy a clock—I just bought two.

Or take the case of my son’s bathing suit. We bought him a matching bathing suit and swim shirt a few months ago, and got the suit one size too small. It fit him in March, but doesn’t fit him now in June. Oops. The top still fits, so a couple of days ago I went on the Gymboree website to see if we could get the bottom a size bigger. Unfortunately, they’re out of his size. Ah well. But a picture of that bathing suit is still showing up as my top ad on Facebook. I think it’s taunting me.

When you think of the data and social subtlety required to solve these problems, it seems like a daunting task. OK, the first one might not be so bad—if someone actually completed a clock purchase, maybe the system should infer that they’re not interested in more clocks? But it’s hard to fathom how they could solve the bathing suit case. From the data trail I left, it looks like I might be interested in that suit but hesitated. The idea of a system that would have enough data to solve the problem is frightening: a system that knows the browsing was for my son and not for a gift, and knows my son’s correct size? I can’t always get his size right myself.
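The clock half of the problem at least suggests a simple rule. Here is a toy sketch (the class, category names, and cooling-off window are all invented for illustration; real ad systems are vastly more complicated): once a purchase is recorded in a product category, suppress retargeting for that category for a while.

```python
from datetime import datetime, timedelta

COOL_OFF = timedelta(days=30)  # assumed suppression window after a purchase

class RetargetingFilter:
    """Toy model: stop showing ads for a category the shopper just bought in.
    The bathing-suit case would need far more signal than this sketch has
    (who the purchase was for, the right size, gift vs. self)."""

    def __init__(self):
        self._last_purchase = {}  # category -> datetime of most recent purchase

    def record_purchase(self, category, when):
        self._last_purchase[category] = when

    def should_show(self, category, now):
        bought = self._last_purchase.get(category)
        return bought is None or now - bought > COOL_OFF

ads = RetargetingFilter()
t0 = datetime(2013, 6, 1)
print(ads.should_show("clocks", t0))                       # True: no purchase yet
ads.record_purchase("clocks", t0)
print(ads.should_show("clocks", t0 + timedelta(days=1)))   # False: just bought two
print(ads.should_show("clocks", t0 + timedelta(days=45)))  # True: window has passed
```

Even this trivial rule requires the ad network to see the purchase event, not just the browsing trail—which is exactly the extra data that makes the whole enterprise unsettling.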

So it was with genuine appreciation that earlier this week I realized I had received a social media ad that worked—it was what I wanted, when I wanted it. PhD student Casey Fiesler posted on her Facebook page several weeks ago that she recommends the book Ready Player One. She said it was the first good cyberpunk she’d read since Snow Crash. She included a link to the book on Amazon. I had a look, and decided to buy it. It is quite possibly the geekiest piece of media in any form I have ever encountered—and I loved it. It’s a page-turner.

What I didn’t realize until this week is that it was not entirely accidental that I saw it in Casey’s Facebook news feed. Facebook offers “promoted posts”—you can pay a few dollars to increase the chance that your friends will see something you post. If you have more than a few Facebook friends, you are likely seeing only a fraction of what your friends are posting, and the algorithm that determines what you see is proprietary. Did Casey pay to promote her post? Of course not. Amazon did. Amazon is paying Facebook to raise the profile of postings that include URLs to products on their site. My friend genuinely recommended that book to me—Amazon just helped make sure I saw it. Voilà—a targeted ad that made everyone happy! I hope they invent more clever techniques for win-win advertising.

Sean Munson points out that this technique results in people seeing links to Amazon products with joke reviews over and over. Some people post links to products just because the reviews are funny–like the infamous Bic for Her pen reviews, where one review has currently been found helpful by over 31,000 people. But I saw that post so many times that it became maddening. Two key points: 1) there is such a thing as over-promoting, and 2) social subtlety is hard!

The Speed and Accuracy of Wikipedia: A Family Story

May 28, 2013

Mom, did Uncle Oscar die?

In February 2008, I called my mother to inquire about the health of my great uncle Oscar Brodney, because Wikipedia told me he had passed away. Uncle Oscar was a Hollywood screenwriter. He wrote the screenplays for The Glenn Miller Story (for which he was nominated for an Academy Award), Abbott and Costello’s Mexican Hayride, Harvey, and many more. In June 2007, I updated his Wikipedia page to say “Brodney still lives in Hollywood, California and celebrated his 100th birthday in February 2007.” Actually, he was in Beverly Hills, California–as someone else quickly corrected.

Editing Oscar’s page put it on my watchlist.  Wikipedia editors have a list of pages they’re interested in, so they can check changes.  Anything you edit is automatically added to your watchlist.  That’s one way quality is maintained.  On February 16th, 2008, I checked my watchlist and saw that someone had updated Oscar’s page.  It now said:

Brodney passed quietly in his sleep on February 12, 2008 in Playa del Rey, CA.

He did?  That was news to me.  So I called my mother. 

Me: Mom, did Uncle Oscar die?

Mom: I don’t think so, but let me call Betty.

My great aunt Betty is Oscar’s youngest sister.  Mom called Betty and asked if Oscar had died.  Betty said, “I don’t think so… But let me check my email.”  Betty checked her email, and sure enough there was a message waiting for her from a few days earlier saying her brother had passed away.  Oscar’s closest living relative learned of his death via my Wikipedia watchlist.

The edit to Oscar’s page was made the day after his death by an anonymous user, someone who didn’t even log in. It wasn’t made by a family member, as far as I’ve been able to determine. The IP address of the anonymous user was apparently from Las Vegas, Nevada. Oscar lived in a nursing home for the last few months of his life, and the specific detail about the manner and place of death makes me wonder if the anonymous editor was someone who worked at the home, or a friend of someone who did. We’ll probably never know. (If you made that edit, please email me! I’d love to know who you are and how you knew.)

However, the story doesn’t stop there.  No one placed an obituary for Oscar in Variety or other newspapers.  He was almost 101 years old, and most people who would have cared were long gone.  So a careful Wikipedia user undid the edit.  In accordance with Wikipedia’s policy on Biographies of Living Persons, declaring someone dead is serious business.  You can’t do it without proof.  I replied back on the article’s talk page (each Wikipedia article has a place for editors to discuss it) saying

I have confirmed that the information about Brodney’s death is correct from a primary source (his sister). Can we redo this?

Another editor replied back,

Per WP:OR and WP:BLP, we need an independent, third party reliable source to report a death. Is there a news article anywhere?

I couldn’t find a newspaper notice or public announcement anywhere, so for months Oscar stayed undead–not dead on Wikipedia, I mean. Then in July a kind Wikipedia editor noticed that his name had appeared in the Social Security Administration’s death records, and Oscar was finally allowed to officially rest in peace.

Two things strike me as remarkable about this story.  The first is the speed and power of Wikipedia’s social network.  My network of strong ties failed to get this news to me in a timely fashion. Wikipedia’s global network routed around that blockage through an anonymous person.

Second, Wikipedia’s commitment to verification is remarkably tenacious in certain areas. As I’ve written before, a high-profile page (like that of a current world leader) is scrutinized in every detail. On less popular pages (like the page for Oscar Brodney), errors can creep in. But even on a low-profile page, editors are incredibly careful about certain things, and deaths are one of those things. You don’t go around declaring people dead without proof. The editor who undid the change to Oscar’s page was right–how do we really know he passed away? We need proof. And luckily another Wikipedia editor knew how to find acceptable proof when I did not.

A “socio-technical system” is a combination of people, artifacts (in this case the MediaWiki software that Wikipedia runs on), and social practices.  And in this example, all those parts worked together in a remarkable way.  Oscar would have approved.

On Google Glass and Gargoyles: a Call to Action

May 20, 2013

Wearable computing first entered my social circle in 1993, when fellow grad students at the MIT Media Lab (led by Thad Starner) started inventing and wearing devices of their own design.   The amazing thing to me is that a key social implication of wearables was predicted a year earlier (1992) by novelist Neal Stephenson in his book Snow Crash.   Stephenson used the term “gargoyle” to refer to someone with a wearable who is not really listening to you:

Gargoyles are no fun to talk to. They never finish a sentence. They are adrift in a laser-drawn world, scanning retinas in all directions, doing background checks on everyone within a thousand yards, seeing everything in visual light, infrared, millimeter-wave radar, and ultrasound all at once. You think they’re talking to you, but they’re actually poring over the credit record of some stranger on the other side of the room, or identifying the make and model of airplanes flying overhead.

Since the announcement of Google Glass (for which Thad was lead technical advisor), a productive public conversation about its privacy implications has begun.  I’m glad we’re all talking about the privacy factor, but I don’t think enough attention has yet been paid to the distraction factor.  Sherry Turkle wrote in her book Alone Together that our devices are increasingly preventing us from being fully present. I recently quit playing the game Words with Friends because it was always drawing my attention.  I would start playing at an entirely appropriate moment, but then that moment would pass and part of my attention would still be on the game. I have a tendency to be absorbed by games, and having a really good one in my pocket wasn’t working for me.  So I made a conscious decision to quit, and have been in a more comfortable daily rhythm since.

Since some time around the invention of stone tools, humans have lived immersed in socio-technical systems: richly connected combinations of people, tools, and social practices.  Each of these affects the others.  Who we are as individuals and who we are as a culture are intertwined with what tools we possess and how we choose to use them.  There are things about future wearable computers that I am looking forward to.  I said hi to a Georgia Tech student on my way into a restaurant with my family last night.  If my glasses could have reminded me of her name, I would have been grateful.  And I hope such support would help me truly learn her name, though I fear some people would use it as an excuse not to bother trying. And the privacy implications of course are headache inducing.  When we have face recognition working, next could I please have bird recognition?  (Was that really a piping plover or just a sandpiper?)  How about rock recognition?  (Is that schist or gneiss?)  It’s a naturalist’s dream.  There will be a myriad of new applications of wearable computing and augmented reality, some trivial and some profound, that we can’t yet begin to imagine.

But you know what I’m not looking forward to?  Hey–are you listening to me or are you reading your email?  I’ve spent 20 years with friends with wearables, and some of them, sometimes, do indeed live up to Stephenson’s “gargoyle” moniker.  Are we about to be even more alone together?

Some wearables advocates argue the opposite–that a wearable stops you from having to look down at your phone, and helps keep (at least part of) your attention where you are.  Only time will tell if they are right.  If wearables ever play Words With Friends… look out.

It’s not just the device, but how people use it.  And a key challenge is that we are all increasingly connected.  Teenagers say they text so many times a day because their friends are texting them.  It’d be rude not to reply, wouldn’t it?  It can become a challenge for any one individual to opt out and make a different choice.  In the 1990s, the director of the MIT Media Lab, Nicholas Negroponte, told faculty that he expected them to read email every day–even while on vacation.  One faculty member responded to this by planning a vacation to a remote island where there was literally no possibility of Internet access.  One wonders if such islands even exist any more.  It can be a challenge for any one of us to change the pattern, because we are all interwoven in it.

What is mindful use of technology? To address that question, we have to ask, what is the good life–for us as individuals?  As families?  As communities? The issues expand uncontrollably.  We can in the end merely say: Mindfulness is important.  We must make self-reflective choices and not get sucked into dysfunctional patterns by our technologies.  And it’s a learning process.  We all learn together to put a new technology in its proper place in our lives.  My children don’t watch as much television as I did as a child—they don’t want to.  Sometimes it takes a generation to adjust. And then a newer technology comes along and we all go back to square one.

For the present, I have a call to action: Can we all agree not to silently tolerate gargoyles?  If you’re talking to someone with a Google Glass and they seem to be not paying attention to the conversation, do something goofy and see if they notice.  Make a silly face or stick a finger in your nose.  When they ask, “What are you doing?” you can grin and reply, “I was wondering what you were doing…”


Is Online Cheating Accelerating?

March 29, 2012 1 comment

Grad student teams in my Design of Online Communities class handed in superb qualitative studies of seven online sites this term.  Grading the papers, I couldn’t help noticing that three of the seven sites were fundamentally wrestling with issues of student cheating online. On OpenStudy and StackOverflow, students regularly post their homework questions and wait for others to answer. Neither site is quite sure what to do about the problem.  Answering questions is essential to their mission. How do you distinguish between getting legitimate help and outsourcing your work?

A team of students from Korea studied a site called GoHackers, which Korean students use to prepare for study-abroad tests like the GRE and the TOEFL. The electronic versions of these tests generally reuse questions from a pool. If each test taker remembers one test question, together students can quickly build a comprehensive database.  One interview subject had posted a particularly thorough test guide online, and another student asked him for his autograph. Our student researchers explicitly asked site members whether they had any ethical or legal qualms about the test prep site, and no one they interviewed was concerned at all.
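It’s easy to see why this kind of pooling works so quickly. Here’s a back-of-envelope sketch (my own illustration; the pool size and the one-question-per-poster assumption are hypothetical, not figures from the GoHackers study):

```python
# If a test draws from a pool of N questions, and each test taker posts one
# uniformly random remembered question, the expected fraction of the pool
# the community has covered after k posters is 1 - (1 - 1/N)^k.

def expected_coverage(pool_size: int, num_posters: int) -> float:
    """Expected fraction of the question pool covered, assuming each
    poster contributes one uniformly random question from the pool."""
    return 1 - (1 - 1 / pool_size) ** num_posters

# With a (hypothetical) 500-question pool:
for k in (100, 500, 2000):
    print(f"{k:>5} posters -> {expected_coverage(500, k):.0%} of pool covered")
```

Even with a large pool, coverage grows fast: a community only a few times the size of the pool reconstructs nearly all of it.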

Of course it’s a coincidence that three of seven papers touched on this theme this term. And cheating is not a new phenomenon–far from it. But what is in fact new is the ease with which it can be accomplished.  It’s not simply a little easier–it’s a lot easier, and that is leading to a different magnitude and type of issue.

If there’s a silver lining, it’s that this trend may challenge us teachers to rethink our practices–to rethink what makes a good assignment or test.  To rethink what the purposes of “homework” and “test” are anyway, and how those goals can better be met, perhaps with more authentic and contextual activities. And to pay more attention to ethics education and meta-cognitive awareness in our students: making sure we make it clear to students why they are doing what they are doing for school.

Finding Your Twenty-Eights: Why You Need to Talk to Your Users

February 21, 2012 Leave a comment

Here’s a user behavior puzzle for you: Why would you ban someone whose offense is not showing up?

Kurt Luther’s Pipeline software is being used for some impressive projects lately. In November-December, a team of artists used it to create an interactive advent calendar they called Holiday Flood.  Twenty-eight artists from twelve countries worked to create two pieces of art for each day of the song “The Twelve Days of Christmas,” embedding a hidden tag in each artwork; together the tags formed a holiday greeting card for the newgrounds community. Kurt and I have been observing their activity on Pipeline to try to understand online collaboration on creative projects.

As in any complicated collaboration, dropouts occur.  When someone drops out, the project leader typically needs to find a replacement.  One artist dropped out of Holiday Flood complaining he was too busy, but the next day joined a different project on newgrounds. Holiday Flood leader Renae banned him from the project, removing his access to discussions and work in progress.  This intrigued us: Why would someone ban a user who doesn’t show up anyway? My hunch was that Renae was annoyed with him. How could you quit our project saying you’re too busy, but then join another one the next day? The nerve!  So banning him was more an emotional act than a functional one.  A feature we implemented for practical reasons was used for a more symbolic purpose.

Well, that was my hunch. But when we interviewed Renae, she told a different story.  Each Pipeline project displays a count of the total number of participants. After Renae recruited a replacement artist, she kept looking at the count, and it said 29 participants. But she knew it was really supposed to be 28.  She banned Mr. Dropout to correct the counter to 28.

The moral of the story, of course, is Talk To Your Users.  I regularly get papers to review that do extensive data analysis on an online site and then speculate as to why people behave the way they do–but never ask a live person a single question.  In research on social computing, mixed methods are critical.  I speculated that our leader was angry at Mr. Dropout, but in truth she just wanted the counter to say 28.  There are twenty-eights lurking in your data set–explanations for user behavior that you cannot guess.
