In an imagined happy future, targeted advertising brings you what you want when you want it, alerting you to quality products and services you actually need. It’s a win-win—the consumer is happy, the vendor is happy, and the social media sites that made the targeting possible are happy. But it’s not working quite like that yet, is it?
Several weeks ago I went shopping for a new clock for my office and my kitchen. I made my purchases. And the next day my Facebook page was still covered in clock ads. The sites I was shopping on (Amazon and Etsy) shared the fact that I was interested in clocks with Facebook, probably through ‘cookies’: little pieces of information that the sites I accessed stored on my computer. It’s weeks later, and I am still getting clock ads. I have never been less likely to buy a clock—I just bought two.
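The retargeting mechanism behind those clock ads can be sketched in a few lines. This is strictly a toy model (the class and method names here are my own invention, and real ad networks are vastly more complex), but it captures the key point: a single cookie ID links your browsing across every site that embeds the same tracker, and nothing in the data tells the tracker when the purchase is done.

```python
class AdTracker:
    """Toy model of an ad network: one cookie ID per browser,
    and a log of which product categories that ID has viewed."""

    def __init__(self):
        self.next_id = 0
        self.interests = {}  # cookie_id -> set of categories seen

    def get_cookie(self, existing_cookie=None):
        # On first contact the tracker sets a new cookie; on every later
        # request, the browser sends the same cookie back automatically.
        if existing_cookie is None:
            existing_cookie = self.next_id
            self.next_id += 1
            self.interests[existing_cookie] = set()
        return existing_cookie

    def log_visit(self, cookie_id, category):
        # Called when a store page that embeds the tracker is viewed.
        self.interests[cookie_id].add(category)

    def pick_ad(self, cookie_id):
        # Show an ad for anything this browser has looked at -- with
        # no notion of whether the shopper already bought the item.
        seen = self.interests.get(cookie_id, set())
        return sorted(seen)[0] if seen else "generic ad"


tracker = AdTracker()
cookie = tracker.get_cookie()        # set while browsing a store
tracker.log_visit(cookie, "clocks")  # store page embeds the tracker

# Later, on a social media site embedding the same tracker,
# the browser sends the same cookie back:
ad = tracker.pick_ad(tracker.get_cookie(cookie))  # still a clock ad
```

Notice that there is no `purchase_completed` signal anywhere in this sketch: that is precisely the missing piece that keeps the clock ads coming.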
Or take the case of my son’s bathing suit. We bought him a matching bathing suit and swim shirt a few months ago, and got the suit one size too small. It fit him in March, but no longer fits him now in June. Oops. The top still fits, so a couple of days ago I went on the Gymboree website to see if we could get him the bottom a size bigger. Unfortunately, they’re out of his size. Ah well. But a picture of that bathing suit is still showing up as my top ad on Facebook. I think it’s taunting me.
When you think of the data and social subtlety required to solve these problems, it seems like a daunting task. OK, the first one might not be so bad—maybe if someone actually completes a clock purchase, the system could infer that they’re not interested in more clocks. But it’s hard to fathom how they could solve the bathing suit case. From the data trail I left, it looks like I might be interested in that suit but hesitated. The idea of a system that would have enough data to solve the problem is frightening. A system that knows the browsing was for my son and not for a gift, and knows my son’s correct size? I can’t always get his size right myself.
So it was with genuine appreciation that earlier this week I realized I had received a social media ad that worked—it was what I wanted, when I wanted it. PhD student Casey Fiesler posted on her Facebook page several weeks ago that she recommends the book Ready Player One. She said it was the first good cyberpunk she’d read since Snow Crash. She included a link to the book on Amazon. I had a look, and decided to buy it. It is quite possibly the geekiest piece of media in any form I have ever encountered—and I loved it. It’s a page-turner.
What I didn’t realize until this week is that it was not entirely accidental that I saw it in Casey’s Facebook newsfeed. Facebook offers ‘promoted posts’—you can pay a few dollars to increase the chance that your friends will see something you post. If you have more than a few Facebook friends, you likely are seeing only a fraction of what your friends are posting. The algorithm that determines what you do and don’t see is proprietary. Did Casey pay to promote her post? Of course not. Amazon did. Amazon is paying Facebook to raise the profile of postings that include URLs to products on their site. My friend genuinely recommended that book to me—Amazon just helped make sure I saw it. Voilà—a targeted ad that made everyone happy! I hope they invent more clever techniques for win-win advertising.
Sean Munson points out that this technique results in people seeing links to Amazon products with joke reviews over and over. Some people post links to products just because the reviews are funny–like the infamous Bic for Her pen reviews, where one review has currently been found helpful by over 31,000 people. But I saw that post so many times that it became maddening. Two key points: 1) There is such a thing as over-promoting, and 2) Social subtlety is hard!
Mom, did Uncle Oscar die?
In February 2008, I called my mother to inquire about the health of my great uncle Oscar Brodney, because Wikipedia told me he had passed away. Uncle Oscar was a Hollywood screenwriter. He wrote the screenplay for The Glenn Miller Story (for which he was nominated for an Academy Award), Abbott and Costello’s Mexican Hayride, Harvey, and many more. In June 2007, I updated his Wikipedia page to say “Brodney still lives in Hollywood, California and celebrated his 100th birthday in February 2007.” Actually, he was in Beverly Hills, California–as someone else quickly corrected.
Editing Oscar’s page put it on my watchlist. Wikipedia editors have a list of pages they’re interested in, so they can check changes. Anything you edit is automatically added to your watchlist. That’s one way quality is maintained. On February 16th, 2008, I checked my watchlist and saw that someone had updated Oscar’s page. It now said:
Brodney passed quietly in his sleep on February 12, 2008 in Playa del Rey, CA.
He did? That was news to me. So I called my mother.
Me: Mom, did Uncle Oscar die?
Mom: I don’t think so, but let me call Betty.
My great aunt Betty is Oscar’s youngest sister. Mom called Betty and asked if Oscar had died. Betty said, “I don’t think so… But let me check my email.” Betty checked her email, and sure enough there was a message waiting for her from a few days earlier saying her brother had passed away. Oscar’s closest living relative learned of his death via my Wikipedia watchlist.
The edit to Oscar’s page was made the day after his death by an anonymous user–someone who didn’t even log in. It wasn’t made by a family member, as far as I’ve been able to determine. The IP address of the anonymous user was apparently from Las Vegas, Nevada. Oscar lived in a nursing home for the last few months of his life, and the specific detail about the manner and place of death makes me wonder if the anonymous editor was someone who worked at the home or a friend of someone who worked there. We’ll probably never know. (If you made that edit, please email me! I’d love to know who you are and how you knew.)
However, the story doesn’t stop there. No one placed an obituary for Oscar in Variety or other newspapers. He was almost 101 years old, and most people who would have cared were long gone. So a careful Wikipedia user undid the edit. In accordance with Wikipedia’s policy on Biographies of Living Persons, declaring someone dead is serious business. You can’t do it without proof. I replied on the article’s talk page (each Wikipedia article has a place for editors to discuss it) saying
I have confirmed that the information about Brodney’s death is correct from a primary source (his sister). Can we redo this?
Another editor replied that a relative’s word wasn’t enough: Wikipedia requires a verifiable, published source. I couldn’t find a newspaper ad or public notice anywhere, so for months Oscar stayed undead–not dead on Wikipedia, I mean. Then in July a kind Wikipedia editor noticed that his name had appeared in the Social Security Administration’s death records, and Oscar was finally allowed to officially rest in peace.
Two things strike me as remarkable about this story. The first is the speed and power of Wikipedia’s social network. My network of strong ties failed to get this news to me in a timely fashion. Wikipedia’s global network routed around that blockage through an anonymous person.
Second, Wikipedia’s commitment to verification is remarkable for its tenacity in certain areas. As I’ve written before, a high profile page (like that of a current world leader) is scrutinized in every detail. In less popular pages (like the page for Oscar Brodney), errors can creep in. But even on a low profile page, editors are incredibly careful about certain things. And deaths are one of those things. You don’t go around declaring people dead without proof. And the editor who undid the change to Oscar’s page was right–how do we really know he has passed away? We need proof. And luckily another Wikipedia editor knew how to find acceptable proof when I did not.
A “socio-technical system” is a combination of people, artifacts (in this case the MediaWiki software that Wikipedia runs on), and social practices. And in this example, all those parts worked together in a remarkable way. Oscar would have approved.
Wearable computing first entered my social circle in 1993, when fellow grad students at the MIT Media Lab (led by Thad Starner) started inventing and wearing devices of their own design. The amazing thing to me is that a key social implication of wearables was predicted a year earlier (1992) by novelist Neal Stephenson in his book Snow Crash. Stephenson used the term “gargoyle” to refer to someone with a wearable who is not really listening to you:
Gargoyles are no fun to talk to. They never finish a sentence. They are adrift in a laser-drawn world, scanning retinas in all directions, doing background checks on everyone within a thousand yards, seeing everything in visual light, infrared, millimeter-wave radar, and ultrasound all at once. You think they’re talking to you, but they’re actually poring over the credit record of some stranger on the other side of the room, or identifying the make and model of airplanes flying overhead.
Since the announcement of Google Glass (for which Thad was lead technical advisor), a productive public conversation about its privacy implications has begun. I’m glad we’re all talking about the privacy factor, but I don’t think enough attention has yet been paid to the distraction factor. Sherry Turkle wrote in her book Alone Together that our devices are increasingly preventing us from being fully present. I recently quit playing the game Words With Friends because it was always drawing my attention. I would start playing at an entirely appropriate moment, but then that moment would pass and part of my attention would still be on the game. I have a tendency to be absorbed by games, and having a really good one in my pocket wasn’t working for me. So I made a conscious decision to quit, and have been in a more comfortable daily rhythm since.
Since some time around the invention of stone tools, humans have lived immersed in socio-technical systems: richly connected combinations of people, tools, and social practices. Each of these affects the others. Who we are as individuals and who we are as a culture are intertwined with what tools we possess and how we choose to use them. There are things about future wearable computers that I am looking forward to. I said hi to a Georgia Tech student on my way into a restaurant with my family last night. If my glasses could have reminded me of her name, I would have been grateful. And I hope this support would help me truly learn her name, though I fear some people would use such support as an excuse not to bother trying. And the privacy implications of course are headache inducing. When we have face recognition working, next could I please have bird recognition? (Was that really a piping plover or just a sandpiper?) How about rock recognition? (Is that schist or gneiss?) It’s a naturalist’s dream. There will be myriad new applications of wearable computing and augmented reality, some trivial and some profound, that we can’t yet begin to imagine.
But you know what I’m not looking forward to? Hey–are you listening to me or are you reading your email? I’ve spent 20 years with friends with wearables, and some of them, sometimes, do indeed live up to Stephenson’s “gargoyle” moniker. Are we about to be even more alone together?
Some wearables advocates argue the opposite–that a wearable stops you from having to look down at your phone, and helps keep (at least part of) your attention where you are. Only time will tell if they are right. If wearables ever play Words With Friends… look out.
It’s not just the device, but how people use it. And a key challenge is that we are all increasingly connected. Teenagers say they text so many times a day because their friends are texting them. It’d be rude not to reply, wouldn’t it? It can become a challenge for any one individual to opt out and make a different choice. In the 1990s, the director of the MIT Media Lab, Nicholas Negroponte, told faculty that he expected them to read email every day–even while on vacation. One faculty member responded to this by planning a vacation to a remote island where there was literally no possibility of Internet access. One wonders if such islands even exist any more. It can be a challenge for any one of us to change the pattern, because we are all interwoven in it.
What is mindful use of technology? To address that question, we have to ask, what is the good life–for us as individuals? As families? As communities? The issues expand uncontrollably. We can in the end merely say: Mindfulness is important. We must make self-reflective choices and not get sucked into dysfunctional patterns by our technologies. And it’s a learning process. We all learn together to put a new technology in its proper place in our lives. My children don’t watch as much television as I did as a child—they don’t want to. Sometimes it takes a generation to adjust. And then a newer technology comes along and we all go back to square one.
For the present, I have a call to action: Can we all agree not to silently tolerate gargoyles? If you’re talking to someone with a Google Glass and they seem to be not paying attention to the conversation, do something goofy and see if they notice. Make a silly face or stick a finger in your nose. When they ask, “What are you doing?” you can grin and reply, “I was wondering what you were doing…”
Grad student teams in my Design of Online Communities class handed in superb qualitative studies of seven online sites this term. Grading the papers, I couldn’t help noticing that three of the seven sites were fundamentally wrestling with issues of student cheating online. On OpenStudy and StackOverflow, students regularly post their homework questions and wait for others to answer. Neither site is quite sure what to do about the problem. Answering questions is essential to their mission. How do you distinguish between getting legitimate help and outsourcing your work?
A team of students from Korea studied a site called GoHackers, which Korean students use to help with test preparation for study-abroad tests like the GRE and the TOEFL. The electronic versions of the tests generally reuse questions from a pool. If each test taker remembers one test question, together students can quickly build a comprehensive database. One interview subject had posted a particularly thorough test guide online, and another student asked him for his autograph. Our student researchers explicitly asked site members whether they had any ethical or legal qualms about the test prep site, and no one they interviewed was concerned at all.
Of course it’s a coincidence that three of seven papers touched on this theme this term. And cheating is not a new phenomenon–far from it. But what is in fact new is the ease with which it can be accomplished. It’s not simply a little easier–it’s a lot easier, and that is leading to a different magnitude and type of issue.
If there’s a silver lining, it’s that this trend may challenge us teachers to rethink our practices–to rethink what makes a good assignment or test. To rethink what the purposes of “homework” and “test” are anyway and how those goals can better be met, perhaps with more authentic and contextual activities. And to pay more attention to ethics education and meta-cognitive awareness in our students: making sure we make it clear to students why they are doing what they are doing for school.
Here’s a user behavior puzzle for you: Why would you ban someone whose offense is not showing up?
Kurt Luther’s Pipeline software is being used for some impressive projects lately. In November and December, a team of artists used it to create an interactive advent calendar they called Holiday Flood. Twenty-eight artists from twelve countries worked to create two pieces of art for each day of the song “The Twelve Days of Christmas,” and to embed a hidden tag in each artwork; together the tags formed a holiday greeting card for the Newgrounds community. Kurt and I have been observing their activity on Pipeline to try to understand online collaboration on creative projects.
As in any complicated collaboration, dropouts occur. When someone drops out, the project leader typically needs to find a replacement. One artist dropped out of Holiday Flood complaining he was too busy, but the next day joined a different project on Newgrounds. Holiday Flood leader Renae banned him from the project, removing his access to discussions and work in progress. This intrigued us: Why would someone ban a user who doesn’t show up anyway? My hunch was that Renae was annoyed with him. How could you quit our project saying you’re too busy, but then join another one the next day? The nerve! So banning him was more an emotional act than a functional one. A feature we implemented for practical reasons was used for a more symbolic purpose.
Well, that was my hunch. But when we interviewed Renae, she told a different story. Each Pipeline project has a count of total number of participants. After Renae recruited a replacement artist, she kept looking at the count and it said 29 participants. But she knew it was really supposed to be 28. She banned Mr. Dropout to correct the counter to 28.
The moral of the story, of course, is Talk To Your Users. I regularly get papers to review that do extensive data analysis on an online site and then speculate as to why people behave the way they do–but never ask a live person a single question. In research on social computing, mixed methods are critical. I speculated that our leader was angry at Mr. Dropout, but in truth she just wanted the counter to say 28. There are twenty-eights lurking in your data set–explanations for user behavior that you cannot guess.