August 11, 2010

Crowdsourcing is not a theory (II)

At #AOM2010, crowdsourcing was a surprisingly hot topic on the program. As is usually the case at Academy, the most interesting and useful crowdsourcing ideas were not in a paper session but in a pre-arranged symposium (this one on Monday, during the main program).

In previous posts, I’ve asked whether crowdsourcing is a theory, including most recently a posting from last week’s OUI2010. The consensus at Monday’s panel — or at least from those listening to the presentation — seems to be that crowdsourcing is an umbrella term that subsumes a range of phenomena, which can be studied using multiple theoretical perspectives. Researchers are already well along in trying to disentangle important differences within that umbrella.

The session was organized by Yuqing “Ching” Ren and Natalia Levina, and also had presentations by Linda Wang, Nikolay Archak and Karim Lakhani. (Levina has posted her slides on her website. Update Thursday 8 pm: Ren has also posted her slides.)

Since I’m an OI (or O/U/CI) researcher rather than someone who studies crowdsourcing, let me focus on three aha! moments.

Noting that the term dates to a 2006 Wired article (and a later book, Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business) by Jeff Howe, Ren identified six archetypes for crowdsourcing processes or organizations that use them:
  1. Online content networks, e.g. iStockPhoto
  2. Open innovation intermediaries, e.g. InnoCentive
  3. Marketplaces for small human intelligence tasks, e.g. Amazon Mechanical Turk
  4. Peer production of open source or open content, e.g. Wikipedia
  5. Corporate initiatives, run directly without intermediaries, e.g. Dell IdeaStorm
  6. Open contests, e.g. TopCoder
I might quibble, since not all “open source” fits the open content/peer production paradigm (just as not all open source is open innovation), nor is “open source” exactly the same as Wikipedia or other “open content” processes. I suspect the bullet point was not meant to imply this, only that peer production open source communities fit into a group of similar phenomena alongside Wikipedia (which at the 5,000-foot level is certainly true).

The second point, made by Karim Lakhani, is that firms typically run crowdsourcing in one of two modes.

The first mode is a competition, often winner-take-all, with all the dynamics of winners, losers, incentives, etc. (Anyone on the UI/OI circuit in the past 3 years has heard Karim give a TopCoder talk, and now at least one of these papers is forthcoming.) In this case, you want to smoke out the best idea from a large population, without demotivating participants through long odds of success.

However, open source and other collaborative modes are fundamentally different: individuals build on each other’s work, so rather than accessing the “best” knowledge of the crowd, firms are using its collective (and cumulative) knowledge. (In the past I’ve asked whether crowdsourcing is open innovation or user innovation; in this case it looks a lot like cumulative innovation.)

These are clearly not disjoint, since the latest crowdsourcing fad is allowing competition between ad hoc (or pre-formed) teams. Still, Karim’s right that the dynamics of the two are fundamentally different and should not be conflated.

Finally, Karim (or perhaps Karim and co-author Kevin Boudreau) made a really insightful point about how innovation contests differ from many other business optimization problems. The goal of a good contest is not to maximize the mean output quality, but the quality of the extreme value, often two or three standard deviations above the mean. You are getting M draws from a population of N, and all you care about is the superstar among those M (or maximizing the chance that above-average members of N become participants).
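To make the extreme-value logic concrete, here is a minimal simulation sketch of my own (not from the panel), assuming each of M entrants submits work whose quality is an independent draw from a standard normal distribution and the sponsor keeps only the best entry; the expected_best helper and its parameters are purely illustrative:

    import random
    import statistics

    def expected_best(num_entrants, trials=5000):
        """Average quality of the winning entry over many simulated contests."""
        return statistics.mean(
            max(random.gauss(0, 1) for _ in range(num_entrants))  # best of M draws
            for _ in range(trials)
        )

    # The average entry sits at 0 sigma no matter how big the contest gets,
    # but the expected winner climbs as more entrants join.
    for m in (1, 10, 100, 1000):
        print(f"M = {m:4d}: expected best entry is about {expected_best(m):.2f} sigma above the mean")

Under those assumptions the expected winner is roughly 1.5 sigma above the mean with 10 entrants and above 3 sigma with 1,000, which is why widening the entry pool pays off even when the average entry does not improve.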

This increased conceptual clarity shows that academic research on crowdsourcing is maturing much faster than I’d realized. (I mainly follow crowdsourcing for this blog, since my own empirical research tends to be on B2B open or user innovation).

Every new phenomenon goes through this process, in which academics start trying to make sense of it and eventually are able to abstract universals without getting the facts wrong. The process we saw with “Internet,” “e-commerce” and “open source” research is being replayed all over again. I lived through the first, tried to ignore the second, and was in the thick of the third.

Today, researchers who ignore the canon of open source research will be suitably chastised (at least at any journal that picks competent reviewers). Crowdsourcing research is still a ways from that, but the landscape is certainly changing rapidly, and authors need to keep up to date with the latest work.
