In previous posts, I’ve asked whether crowdsourcing is a theory, most recently in a posting from last week’s OUI2010. The consensus at Monday’s panel — or at least among those listening to the presentations — seems to be that crowdsourcing is an umbrella term that subsumes a range of phenomena, which can be studied from multiple theoretical perspectives. Researchers are already well along in disentangling important differences within that umbrella.
The session was organized by Yuqing “Ching” Ren and Natalia Levina, with additional presentations by Linda Wang, Nikolay Archak and Karim Lakhani. (Levina has posted her slides on her website. Update Thursday 8 pm: Ren has also posted her slides.)
Since I’m an OI (or O/U/CI) researcher rather than someone who studies crowdsourcing, let me focus on three aha! moments.
Noting that the term dates to a 2006 article (and later book) by Jeff Howe, Ren identified six archetypes for crowdsourcing processes or organizations that use them:
- Online content networks: iStockPhoto
- Open innovation intermediaries: InnoCentive
- Marketplaces for small human intelligence tasks: Amazon Mechanical Turk
- Peer production of open source or open content: Wikipedia
- Corporate initiatives (direct, without intermediaries): Dell IdeaStorm
- Open contests: TopCoder
The second point, made by Karim Lakhani, is that firms typically run crowdsourcing in one of two modes.
The first mode is a competition, often winner-take-all, with all the dynamics of winners, losers, incentives, etc. (Anyone on the UI/OI circuit in the past 3 years has heard Karim give a TopCoder talk, and now at least one of these papers is forthcoming.) In this case, you want to smoke out the best idea from a large population without demotivating participants through the long odds of success.
However, the open source and other collaborative modes are fundamentally different, because individuals build upon each other’s work: rather than accessing the “best” knowledge of the crowd, firms are using the collective (and cumulative) knowledge of the crowd. (In the past I’ve asked whether crowdsourcing is open innovation or user innovation — in this case it looks a lot like cumulative innovation.)
These are clearly not disjoint, since the latest crowdsourcing fad is allowing competition between ad hoc (or pre-formed) teams. Still, Karim’s right that the dynamics of the two are fundamentally different and should not be conflated.
Finally, Karim (or perhaps Karim and his co-author Kevin Boudreau) made a really insightful point about how innovation contests differ from many other business optimization problems. The goal of a good contest is not to maximize the mean output quality, but the extreme value — often 2 or 3 standard deviations above the mean. You are getting M draws from a population of N, and all you care about is the superstar among those M (or maximizing the chance that above-average members of N become participants).
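The extreme-value logic can be made concrete with a quick simulation. This is my own sketch, not from the talk, and it assumes (purely for illustration) that entry quality is drawn from a standard normal distribution:

```python
import random
import statistics

random.seed(0)

def avg_best_entry(m, trials=2000):
    """Simulate a contest with m entries, each drawn from a standard
    normal 'quality' distribution; return the best entry's quality,
    averaged over many simulated contests."""
    return statistics.mean(
        max(random.gauss(0, 1) for _ in range(m)) for _ in range(trials)
    )

for m in (1, 10, 100, 1000):
    print(m, round(avg_best_entry(m), 2))
```

With a single entry the expected quality is just the population mean, but the best of roughly 100 to 1,000 entries typically lands 2.5 to 3 standard deviations above it — which is why attracting more (and better-than-average) entrants matters far more than raising the average entry.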
This increased conceptual clarity shows that academic research on crowdsourcing is maturing much faster than I’d realized. (I mainly follow crowdsourcing for this blog, since my own empirical research tends to be on B2B open or user innovation).
Every new phenomenon goes through this process, in which academics start trying to make sense of it and eventually are able to abstract universals without getting the facts wrong. This is the process we saw with “Internet,” “e-commerce” and “open source” research, replayed all over again. I lived through the first, tried to ignore the second, and was in the thick of the third.
Today, researchers who ignore the canon of open source research will be suitably chastised (at least at any journal that picks competent reviewers). Crowdsourcing is a ways from that, but the landscape is changing rapidly and authors need to keep up to date with the latest work.