
Archive for the ‘conferences’ Category

CSCW 2012 Update (Guest post by Jonathan Grudin)

January 3, 2013

A guest post by Jonathan Grudin.

A summary of events since CSCW 2012 as we head toward CSCW 2013.

The publication culture of computer science received unparalleled attention in 2012. Experiments in conference reviewing and a new ACM policy signal that past approaches aren’t seen to be working well. Some changes are more radical than the CSCW 2012 revision cycle, which has been used in at least four other conferences such as SIGMOD.

How is CSCW 2012 looking? Downloads and citations are imperfect measures of impact, but they are the available lamppost. ACM’s CSCW 2012 downloads exceed 25,000, more than CSCW 2011 has accumulated over two years or CSCW 2010 over three years. This is a dramatic increase in conference impact. (In comparison, CHI 2012 has about half as many downloads as CHI 2011.)

Citations accumulate more heavily in the second and third years after a conference. That said, there are now five times as many ACM-recorded CSCW 2012 citations as CSCW 2011 had a year ago (with a higher per-paper citation rate than CHI 2012, though the latter was a couple of months later). In a couple of years, frequency distributions will provide a more nuanced sense of the effects of doubling the CSCW acceptance rate to 40%.

A panel at the biennial Computing Research Association conference at Snowbird discussed conference-journal and open access issues. In November, 30 senior researchers attended an invited three-day Dagstuhl workshop on the same issues. ACM released a policy aimed at preventing certain kinds of conference-journal hybrids and encouraging others. CACM published many commentaries on these topics, including one this month by Gloria Mark, John Riedl, and me that describes CSCW 2012 and other experiments. I have an essay in the current Interactions that suggests an evolutionary biology analogy. Links to all this stuff are below.

Some core CS areas can experiment more easily than SIGCHI because their prestigious conferences are smaller and they have members who like building tools. At Snowbird and Dagstuhl there was appreciation for the CSCW 2012 experiment and negligible concern about acceptance rates. Some conferences maintain low acceptance rates primarily to preserve a single-track conference; their organizers realize that they reject high-quality work, at a price. My view is that low conference acceptance rates are a cancer with a higher mortality rate than most of us survivors realize, but not all of our community agrees.

I hope to see you in San Antonio. — Jonathan

Below are links to event materials and documents. The first two papers will be freely accessible at http://research.microsoft.com/~jgrudin/ when ACM Author-izer permits, in about a month.

Conference-journal hybrids. Grudin, J., Mark, G. & Riedl, J. January 2013 CACM. http://dx.doi.org/10.1145/2398356.2398371

Journal-conference interaction and the competitive exclusion principle. Grudin, J. Jan-Feb 2013 Interactions. http://dx.doi.org/10.1145/2405716.2405732

Snowbird panel (6/24/2012) on publication models in computing research. http://www.cra.org/events/snowbird-2012/

Dagstuhl workshop (11/2012) on the publication culture of computing research.  http://www.dagstuhl.de/12452

List of CACM articles with links (2009-2013) on publication issues, and other resources. http://research.microsoft.com/~jgrudin/CACMviews.pdf

2012 ACM policy on conference and journal reviewing practices. http://www.acm.org/sigs/volunteer_resources/conference_manual/acm-policy-on-the-publication-of-conference-proceedings-in-acm-journals

Categories: academia, conferences, CSCW

Program Committee Meetings Considered Harmful

October 14, 2011

The organizers of CSCW 2012 have started an intriguing experiment this year: a review process with an extra revise-and-resubmit phase. The goal is to find reasons to accept papers, rather than reasons to reject them. Overall, I would say the experiment is a huge success: the quality of the papers is just as good, and a lot of good work was saved. It does have some unpredictable consequences, though. For example, how will promotion and tenure committees view the conference now that the total acceptance rate is higher? The quality of the work is just as good, but does everyone know that?

With the two-round review process, most decisions on papers were in fact already made before the face-to-face program committee (PC) meeting, which I'm at right now. We had only 27 papers to discuss here, out of 388 submissions. This raises the question: do we even need to hold a face-to-face PC meeting? It costs a lot of money and a lot of carbon to bring everyone here. If it's not necessary, we shouldn't have it, for pragmatic reasons alone. I want to argue, though, that the reasons not to have the PC meeting go beyond pragmatics: skipping it would actually improve the quality of the conference.

Before the conference, several reviewers and associate chairs (ACs) read each paper carefully and write their reviews. Then they can discuss the paper on a per-paper discussion board, which preserves the reviewers’ anonymity to one another. At the PC meeting, a room full of associate chairs meets to discuss the paper. And here’s where something odd happens: people who haven’t read the paper offer their opinions. For example, yesterday one presenter said, “this paper has interesting qualitative findings, but is somewhat under-theorized. It’s about an interesting user population, but is mainly just descriptive.” A long discussion then ensued about whether to accept this kind of paper, but most of the people in the discussion hadn’t read it. To me, the discussion should turn on the quality of the actual paper; I don’t think we can answer this question in the abstract. Giving serious weight to comments by people who haven’t read the paper is bizarre, and I believe it leads to poorer-quality decisions.

So what do you do if reviewers can’t reach agreement on a paper? I suspect that many of these cases can be resolved by adding an additional reviewer and continuing the discussion online. A synchronous conference call could be arranged where needed. But we would make better decisions if, ultimately, everyone participating in the conversation had read the paper. One downside of this approach is losing the way PC meetings calibrate expectations for how high the bar is. But I think there are other creative approaches to helping people calibrate, including providing reviewers with a visualization. This could include making visible how harsh or generous each reviewer is on average, compared to other reviewers of the same papers.
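To make that last idea concrete, here is a minimal sketch of what such a calibration measure might compute. The data and the 1–5 scale are made up for illustration, and this is just one plausible way to estimate a reviewer’s leniency as their average offset from their co-reviewers’ scores on the same papers; it is not a description of any actual PC tooling.

```python
from collections import defaultdict

# Hypothetical review scores: paper id -> {reviewer -> score on a 1-5 scale}.
scores = {
    "paper_101": {"R1": 4.0, "R2": 2.5, "R3": 3.0},
    "paper_102": {"R1": 4.5, "R3": 2.0, "R4": 3.5},
    "paper_103": {"R2": 2.0, "R4": 3.0},
}

offsets = defaultdict(list)
for paper, reviews in scores.items():
    for reviewer, score in reviews.items():
        others = [s for r, s in reviews.items() if r != reviewer]
        if others:  # a paper with a single review says nothing about relative harshness
            offsets[reviewer].append(score - sum(others) / len(others))

# Positive values: tends to score above co-reviewers (generous); negative: harsher.
leniency = {r: sum(d) / len(d) for r, d in offsets.items()}
for reviewer, value in sorted(leniency.items(), key=lambda kv: kv[1]):
    print(f"{reviewer}: {value:+.2f}")
```

A visualization built on numbers like these could show each AC and reviewer where they sit relative to their co-reviewers, providing some of the calibration that a face-to-face meeting currently supplies.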

PC meetings do serve an important function for community building, helping junior peers become more central members of the community, and reflecting on where the field is going as a whole. These functions could be served instead by a special meeting and dinner for ACs at the conference itself, which could also help plan for future years.

I genuinely enjoy face-to-face PC meetings. I get to see such terrific people at them. But I think it’s time we question whether they are a good idea. We may make better-quality decisions without them.

Categories: conferences

Balancing HCI and Computational Thinking: Levels of Abstraction and Agency

February 27, 2011

My colleague Mark Guzdial wrote a great blog post last week called “HCI and Computational Thinking are Ideological Foes.” A lot of HCI wants to make the computer invisible, like Heidegger’s hammer. While you’re using it, you’re thinking about what you’re trying to accomplish–not about how to use the (computer/hammer).  But computer science education takes the opposite approach: please, please look at the computer/hammer!  It’s interesting, and if you’re going to really use it, you need to really look at it.

How much detail should we hide from the user? I think about this every time I run software update on my MacOS computers. The system tells me it has stuff to update, and would I please click OK to install it?  I have to click a button to ask for what it plans to update. I find the dialog infuriating, as if it’s saying “don’t worry your pretty little head about what we’re going to update–we’ll just take care of it, OK honey?” I click to ask for more details, and it tells me that it’s going to, for example, update iTunes and do a system security patch. OK, that works for me–now I click go ahead. But notice that I didn’t say “OK, but what exact part of iTunes are you going to fix? Show me the original code and the patch.” It seems OK to me that they told me it’s a stability update to iTunes. The level of abstraction is right (for me) after I ask it for more details. I don’t want even more details.

Similarly, Mark isn’t arguing that “kids these days–they don’t know their ones and zeros anymore! They don’t even know what a decrement register is! Whatever happened to punch cards? Why, in high school, I had to program our IMSAI to echo characters on the screen by toggling in binary codes with the front panel switches!” OK, I did have to do that, and I loved that assignment, and I wish everyone could still do that, but it seems impractical. It’s OK in this day and age to keep students at a slightly higher level of abstraction. As Beki Grinter points out in her interesting response to Mark’s post, sometimes we really do want to hide some details from users. Then the question becomes: what level of abstraction is appropriate for which users doing which tasks? And I agree that a lot of HCI today is, like the MacOS software update dialog, pushing the level of abstraction too high.

A complementary concept to level of abstraction is sense of agency. Do I feel like I am taking this action (regardless of whether I’m using more abstracted or lower-level tools to do it), or do I feel like the system is doing it for me? The difference is a subtle one. In either case, lower-level details are being taken care of without requiring my attention. But when I retain a sense of agency, I feel like I have a clear mental model of what is going to happen and that the entire process was set in motion by a deliberate choice on my part. A higher level of abstraction is tolerable when we design in enough clarity that the user retains that sense of agency.

You might say that designing applications to be used by people and designing CS education are fundamentally different with regard to abstraction and agency. But they are more similar than perhaps it seems at first glance. Consider, for example, the question of memory management. Do students need to explicitly allocate and free memory? Or is it OK for that to all just happen for them? I suppose that depends on how critical performance is in the application you’re developing, and how good your garbage collection system is. I still think that “serious” programmers need to learn how to do their own memory management, so they understand it, even if they don’t have to do it on a day-to-day basis. But I do NOT think that serious programmers need to make keys echo by toggling in binary on the front panel switches. The minimum acceptable level of abstraction has migrated upward: even for the true hacker it has moved up a bit, and for the web developer it has moved up much more.

There’s a delicate and important design problem here. Let’s take the example of creating a new web development environment. How much abstraction is the right choice for this task? How much of how networks and web browsers really work should we expose to the web developer? How do we get the right level of abstraction that lets the web developer have a clear mental model of what’s going on and retain a sense of agency over the task they are accomplishing?  The HCI of programming languages and development environments is an absolutely critical research problem.

There’s still a gap in levels in my argument: what Mark saw on the iSchools conference program was a lot of HCI for and about end users, which you might view as entirely different from designing for programmers. Except that what I find so exciting about modern computer technology is that maybe there doesn’t need to be such a gap between users and programmers. As technology increasingly surrounds all of us, it’s an open question how much real control ordinary people will have over that technology. But what if we think of everyone as programmers: people who need tools with different levels of abstraction for different tasks? The same person may use a high-level tool for one task, and a lower-level look-at-the-details-of-the-hammer tool for another a few minutes later. How can we create tools with many levels of abstraction that always keep that sense of agency: I am the person doing this with the tool, rather than the tool doing this for me? And Mark is dead right that there is not enough dialog about this at the moment at CHI, the iSchools conference, and similar venues. I was thrilled to see a paper accepted for CHI this year on the usability of operating system permissions systems. It was one paper. We need 100 more like it.

We underestimate the intelligence and independence of our users when we keep trying to abstract everything away for them, without giving them the choice. HCI and CSED have grown apart, and that’s a tragedy for both traditions of research and for all of us as humans who live with these technologies intertwined through more and more of our lives.

One thing Mark wrote was wrong, though. He concluded his post by writing “Here’s a prediction: We won’t see a panel on ‘Computational Thinking’ at CHI, CSCW, or iConference any time soon.” He’s wrong because I’m going to organize it, and he’s going to be on the panel!

Which Social Computing Papers are the New Classics?

March 23, 2010

I first started teaching the Georgia Tech graduate class Design of Virtual Communities in 1998 as a special topics class. A couple of years later, I applied to make it (now dubbed ‘Design of Online Communities’) a regular class. At a faculty meeting, I made a presentation about why it was worthy of being a regular class, and before voting to approve it, my colleagues asked a few questions. I particularly remember them asking: This is such a new area. Are the readings changing from year to year? Or is there a stable body of knowledge?

At the time, I could say with confidence: the papers worth reading have stayed quite stable! We start off with Wellman and Gulia “Net Surfers Don’t Ride Alone,” then we read Ray Oldenburg’s “The Great Good Place”…. There is a core set of readings here that is stable. (You can browse through all the syllabi over the years.)

The online sites have changed quite a bit–almost continually. In 1998, I asked students to check out this site called “ebay,” because most of them hadn’t seen it. We looked at WorldsAway and The Palace. And I had them check out the cool new book recommendation feature on barnesandnoble.com, which used an algorithm developed by a Media Lab spin-off company called Firefly. It’s funny to think that a simple recommendation feature was ever “new”!

But while the sites worth discussing have always changed, the readings mostly haven’t. That is, until now. Slowly, slowly, a few readings that have always seemed fundamental have begun to look crufty and irrelevant. Not all the readings by any means–we will always read Oldenburg and Goffman. But it’s an eye-opener to find yourself in front of a class trying to lead a discussion about a paper you’ve always loved, and then to leave yourself a note that says “remove next year.”

And in their place… well, there are too many candidates, aren’t there? It seems like we’ve taken over not just the ACM CSCW conference, but CHI as well. And that’s not to mention work coming out of Group, WikiSym, and five other high-quality venues. But the fascinating question to ask for any given paper is: will this still be on the syllabus in 2020? How can we tell now which papers are the new big contributions–the new classics?

Of course this is always how science works–you can’t tell what’s important til you have a few years’ perspective. But I think we’re in an unusual time in this field now. Maybe not a real paradigm shift, but certainly not a slow period of “normal science” either.

What are the new classics? Which ones would you bet on? Leave me a comment!

The TED Brand

March 9, 2010

I used to think of a “brand” as stuff like Tide and Coca-Cola–the name of a commercial product. About ten years or so ago, a marketing professor talked me into being on a panel about online communities at a business conference, and she patiently explained to me that a brand is “a promise of quality.” I didn’t 100% understand what that meant or appreciate its power until this past Saturday, when I spoke at TEDxNYED.

TED is a wildly successful, top-tier annual conference. So successful that they’ve franchised it into “TEDx.” Independent organizers can apply to host an “independently organized TED event.” If they’re successful, they get elaborate instructions/requirements on how to host it–how to work with the media, what the signage should look like, how talks should be organized, etc. Some folks from TED headquarters help out a bit, but it’s basically an independent event. TEDxNYED was organized by a bunch of independent school teachers, mostly from the greater New York area.  In fact, one of my high school teachers–Jeff Weitz of the Horace Mann School–invited me.

Using the TED name, the organizers assembled an all-star cast of speakers. When they invited me, they said “Henry Jenkins and Larry Lessig are speaking,” and I must’ve replied “yes, I’ll come” in about a microsecond. So, in fact, did everyone else they invited–the organizers told me they were astonished at the immediate flood of positive responses. Indeed, it caused a bit of a problem, because they didn’t balance the first round of invitations as much as they intended. They figured they’d get the first round of yeses and nos and then fill out ethnic, gender, and topic diversity with a second round of invites–but nearly everyone in the first round said yes.

TED is such a strong brand that during the conference, #TEDxNYED was trending on Twitter. Over 5,000 people watched live on the web that day, and who knows how many will watch once the video is posted for later viewing. When I sat back down in my seat after my talk, I glanced at my email on my phone. My mailbox was full of “<name> is now following you on Twitter!” messages. I’ve gotten more than 100 new followers since Saturday.

The TED format is the most un-academic I’ve experienced. I’m not sure I’ve ever been to a conference before where there was absolutely no interaction onstage: 18-minute talks, no questions, no panels. The speakers speak, the audience listens. And as much as the TED liturgy tells everyone that the audience is as important and interesting as the speakers, the format says otherwise.

There are two models of what I experienced this weekend:

1. I went to a small conference organized by some very nice high-school teachers, held in the auditorium of a private high school, with snacks in the gym. The organizers had never done this before, and a few things were rough around the edges, but in the end it worked out great.

2. I went to a world-class, high-profile event.

The power of the TED brand transformed model 1 into model 2. The “promise of quality” that is TED is a force to be reckoned with. And now I think I understand what a “brand” is.

Categories: conferences