
Archive for January, 2013

Common Ground for Talking about MOOCs

January 5, 2013

As a result of the MOOC craze, some of my colleagues are suddenly spending a lot of time thinking and talking about technology and teaching. I suppose I should find this refreshing: my background is originally in educational technology, and everyone is talking about my area! Cool, right? But in practice I’ve found most of these conversations frustrating rather than energizing. I think it comes down to a lack of shared assumptions and vocabulary. Consider this conversation I just had with a colleague, liberally paraphrased:

Me: In MOOCs, students can’t ask questions to a meaningful degree.

Colleague: But I answer every question I’m asked!

Me: Yes, but the structure of the environment discourages question asking. Compare the number of conversational turns per student and per faculty member in a traditional class versus a MOOC. The numbers make it impossible for you to have meaningful interactions with students.

Colleague: OK, well students can ask questions of one another. We have ways to get them answers. Why does question asking matter so much anyway?

[NB: This is a rough caricature inspired by a real conversation but does not accurately represent the views of the actual person, which is why there’s no name.  I’m making an abstract point here.]
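To make the turn-counting argument concrete, here is a rough back-of-envelope sketch in Python. Every enrollment figure and turn count in it is invented purely for illustration; none of these numbers come from a real course:

```python
# Back-of-envelope: instructor conversational turns available per student.
# All enrollment and turn counts below are hypothetical, chosen only to
# illustrate the order-of-magnitude gap.

def turns_per_student(instructor_turns: int, enrollment: int) -> float:
    """Average number of instructor conversational turns per student."""
    return instructor_turns / enrollment

# A 30-student seminar where the instructor takes ~300
# question-answering turns over a term.
seminar = turns_per_student(instructor_turns=300, enrollment=30)

# A 50,000-student MOOC where the instructor heroically takes
# 1,000 such turns.
mooc = turns_per_student(instructor_turns=1_000, enrollment=50_000)

print(f"Seminar: {seminar:.2f} turns per student")  # Seminar: 10.00
print(f"MOOC:    {mooc:.2f} turns per student")     # MOOC:    0.02
```

Whatever plausible numbers you substitute, the per-student figure drops by two or three orders of magnitude in the MOOC setting, which is the structural point of the exchange above.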

It took me a moment to realize that we didn’t have a shared understanding of the fundamental nature of teaching and learning (for more on this, see Mark Guzdial’s excellent blog).  My colleague is right that they can have a lot of FAQs and answer most questions—but is that teaching?  What is teaching? My colleague (understandably) had never heard of the work of Lev Vygotsky, and I couldn’t imagine discussing this further without that shared background.  For those who are curious, here’s a quick, simplified explanation.

Vygotsky wrote about the “Zone of Proximal Development” (ZPD), which is the difference between what a person can do alone and what they can do with help. Two learners can have identical capabilities, but quite different ZPDs. For example, imagine you are teaching a math concept to children several grades earlier than it is normally taught. Let’s say you are explaining negative numbers to a class of first graders. For some, you could show them a well-done video on negative numbers, and they would understand. Let’s call them group A. For others, you could successfully supplement the video with an intelligent conversation in which you probe and respond to their understandings and misunderstandings. That’s teaching in their ZPD. Call those group B. For others, you would perhaps say, “let’s revisit this when they are older.” In other words, the concept is currently outside their ZPD. Call them group C. You can empirically measure how many students fit into each group. You would find that group A is a tiny subset: not just the sharpest students, but the sharpest students whose learning style fits with learning from videos.

Many early implementations of MOOCs are currently teaching predominantly to group A. Completion rates for MOOCs are typically low; a figure of 10% is commonly cited. That’s group A. The challenge going forward is: what can we do to reach at least group B? What does it mean to teach in someone’s ZPD when you cannot give them any meaningful amount of personalized attention because there are thousands of students in the class? Could a software tutor help? How effective are software tutors today, and how expensive are they to develop? These are fascinating questions to address going forward.

I don’t want to romanticize traditional higher ed. Large lecture classes also give students no meaningful opportunity for interaction; the interaction takes place in recitation sections. And we’ve all had recitation sections that were a waste of time. Recitations don’t always live up to the ideal of a knowledgeable instructor helping to stretch you as far as you can go in your ZPD. MOOCs as currently offered in most places are equivalent to a large lecture class with a useless recitation instructor. Not an inspiring image. How do we do better? That’s the challenge going forward.

Categories: Uncategorized

CSCW 2012 Update (Guest post by Jonathan Grudin)

January 3, 2013

A guest post by Jonathan Grudin.

A summary of events since CSCW 2012 as we head toward CSCW 2013.

The publication culture of computer science received unparalleled attention in 2012. Experiments in conference reviewing and a new ACM policy signal that past approaches aren’t seen as working well. Some changes are more radical than the CSCW 2012 revision cycle, which has since been used by at least four other conferences, including SIGMOD.

How is CSCW 2012 looking? Downloads and citations are imperfect measures of impact, but they are the available lamppost. ACM’s CSCW 2012 downloads exceed 25,000, more than CSCW 2011 has accumulated over two years or CSCW 2010 over three years. This is a dramatic increase in conference impact. (In comparison, CHI 2012 has about half as many downloads as CHI 2011.)

Citations accumulate more heavily in the second and third years after a conference. That said, there are now five times as many ACM-recorded citations for CSCW 2012 as CSCW 2011 had at the same point a year ago (and a higher per-paper citation rate than CHI 2012, though the latter was held a couple of months later). In a couple of years, frequency distributions will provide a more nuanced sense of the effects of doubling the CSCW acceptance rate to 40%.

A panel at the biennial Computing Research Association conference at Snowbird discussed conference-journal and open access issues. In November, 30 senior researchers attended an invited three-day Dagstuhl workshop on the same issues. ACM released a policy aimed at preventing certain kinds of conference-journal hybrids and encouraging others. CACM published many commentaries on these topics, including one this month by Gloria Mark, John Riedl, and me that describes CSCW 2012 and other experiments. I have an essay in the current Interactions that suggests an evolutionary biology analogy. Links to all this stuff are below.

Some core CS areas can experiment more easily than SIGCHI because their prestigious conferences are smaller and they have members who like building tools. At Snowbird and Dagstuhl there was appreciation for the CSCW 2012 experiment and negligible concern about acceptance rates. Some conferences maintain low acceptance rates primarily to preserve a single-track conference; their organizers realize that they reject high-quality work, at a price. My view is that low conference acceptance rates are a cancer with a higher mortality rate than most of us survivors realize, but not all of our community agrees.

I hope to see you in San Antonio. — Jonathan

Below are links to event materials and documents. The first two papers will be freely accessible at http://research.microsoft.com/~jgrudin/ when ACM Author-izer permits, in about a month.

Conference-journal hybrids. Grudin, J., Mark, G. & Riedl, J. January 2013 CACM. http://dx.doi.org/10.1145/2398356.2398371

Journal-conference interaction and the competitive exclusion principle. Grudin, J. Jan-Feb 2013 Interactions. http://dx.doi.org/10.1145/2405716.2405732

Snowbird panel (6/24/2012) on publication models in computing research. http://www.cra.org/events/snowbird-2012/

Dagstuhl workshop (11/2012) on the publication culture of computing research.  http://www.dagstuhl.de/12452

List of CACM articles with links (2009-2013) on publication issues, and other resources. http://research.microsoft.com/~jgrudin/CACMviews.pdf

2012 ACM policy on conference and journal reviewing practices. http://www.acm.org/sigs/volunteer_resources/conference_manual/acm-policy-on-the-publication-of-conference-proceedings-in-acm-journals

Categories: academia, conferences, CSCW