July 27, 2009

Buying, not making, innovation

Larry Magid has an apt summary of Google’s innovation strategy in his San Jose Mercury News column this morning:
Google, after all, has done an amazing job with its search engine and, thanks to the profits from all the ads it sells, has an enormous war chest to invest in research and development. The company is so keen on innovation that it allows its engineers to spend 20 percent of their working time on projects that aren't necessarily part of their job description. It's that "20 percent time" that helped spawn such projects as Google Suggest, AdSense for Content and Orkut.

And what Google can't invent, it can buy. Its Google Voice application, which it acquired when it bought GrandCentral Communications in 2007, is a stellar product, as is YouTube, which Google acquired in 2006.
I realize this is just one commentator’s interpretation of a $20 billion/year company. Still, I find Magid’s final point particularly interesting. Most of the internal innovations are related to search (Orkut, the exception, is a me-too social networking site with pockets of success), while the genuinely new business areas have come from acquisitions.

In other words, to successfully diversify outside its area of expertise, Google has to buy, not make, those businesses. Google is the most successful high-margin, high-R&D, high-growth tech company of our era, just as Microsoft was in the 1990s, Apple and DEC in the 1980s, and IBM in the late 1960s. One way to look at this: if Google doesn’t have the resources to pull off internal diversification, who does?

Another way to look at it is that Google is copying Cisco — diversification through acquisition — because it’s painfully aware of its predecessors’ failures. Yes, IBM created some great businesses through internal R&D, as have Apple and Microsoft (in many cases by hiring key talent from outside). However, the “not invented here” model of internal innovation also brought us such notable flops as the DECmate, DEC Rainbow, IBM PCjr and Apple Newton.

So perhaps we should give Google credit for using its cash and savvy to buy the best innovations available, assuming it spends its money more prudently than the drunken sailors in Washington.

This does come back to a minor academic controversy: is it “open innovation” to buy up innovative companies? It’s open innovation to buy products from such companies, and closed innovation to develop things in-house. Although others might disagree, I think integration (or diversification) by acquisition is, in the end, a form of closed innovation, because it reflects an ongoing desire to control key technologies through administrative hierarchies rather than source them using markets.

July 13, 2009

The wisdom of crowdsourcing

In conjunction with the death of some middle-aged pop star, the Boston Globe is running a visual trivia contest to name seven of the 45 stars in a photo taken for the 1985 taping of “We Are the World.” It’s a fun exercise for anyone who is/was a fan of ’80s music.

I got two wrong: one because I couldn’t see the singer, and one because two (somewhat obscure) artists are closely associated with each other and I guessed the wrong one. In the latter case, out of 20,000+ respondents, that question drew the lowest percentage of correct answers in the entire quiz (65.2%). The quiz allowed people to peek before answering, which may have inflated the share of correct responses.

I used to watch Who Wants to Be a Millionaire?, and it was remarkable how often (in response to a “lifeline”) the audience was right, particularly on the obscure questions. Still, I was amused when the audience either split on a plausible answer or even got it wrong; with a large enough sample, a wrong majority would suggest some sort of systematic bias (e.g. towards a more famous actor or place) rather than random error.
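To see why sample size rules out noise but not bias, here is a toy simulation (my own sketch, assuming a simplified two-option question, not anything from the show): when voters err independently, even a slight edge over chance makes a large majority almost surely right, but a shared pull toward the wrong answer only hardens as more voters pile in.

```python
import random

def poll(n_voters, p_correct):
    """Simulate an audience poll: each voter independently picks
    the correct answer with probability p_correct."""
    correct = sum(random.random() < p_correct for _ in range(n_voters))
    return correct / n_voters

random.seed(1)

# Independent, slightly-informed voters: the majority is almost
# always right once the crowd is large (Condorcet's jury theorem).
print(poll(10_000, 0.55))  # ~0.55 -> majority lands on the truth

# Systematic bias (say, toward the more famous actor): adding more
# voters just makes the wrong majority more decisive, not less.
print(poll(10_000, 0.40))  # ~0.40 -> majority confidently wrong
```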

From a strategic standpoint, it suggests to me that there are two types of crowd-sourcing contexts. In one, it’s helpful (or fun) to get the right answer, but it’s not the end of the world if you don’t. In other cases (the $1 million question, diagnosing your child’s infection), mistakes have consequences, and only the right answer will do.

I think the crowd-sourcing literature needs to make more of this distinction. For user innovations, the assumption (probably correct) is the more the merrier: do a good job of ideation and the firm can sift through the ideas to find the right one. If you’re relying on Wikipedia, IMDb or other user-generated content to be accurate, then only the correct answer will do. (Perhaps that’s why WikiDoctor is a cybersquatter rather than a real website.)

It seems to me there are several great opportunities for experimental research here. First, if the recipient of the crowd-sourced data wants accuracy, are there ways (e.g. weighting) to design the idea generation or filtering process to improve accuracy?
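As a rough sketch of what such weighting might look like (my own illustration; the respondent weights and artist names are hypothetical placeholders, not anything from the Globe quiz), one could weight each answer by a prior estimate of the respondent’s accuracy:

```python
from collections import defaultdict

def weighted_vote(responses, weights):
    """Aggregate crowd answers, weighting each respondent by a prior
    estimate of their accuracy (e.g. track record on past questions).

    responses: dict mapping respondent id -> chosen answer
    weights:   dict mapping respondent id -> weight (default 1.0)
    """
    totals = defaultdict(float)
    for who, answer in responses.items():
        totals[answer] += weights.get(who, 1.0)
    return max(totals, key=totals.get)

# Three casual fans vs. one respondent with a strong track record:
# an unweighted majority picks the famous-but-wrong name, while the
# accuracy weighting overrides it.
responses = {"fan1": "Huey Lewis", "fan2": "Huey Lewis",
             "fan3": "Huey Lewis", "expert": "Kim Carnes"}
weights = {"fan1": 0.5, "fan2": 0.5, "fan3": 0.5, "expert": 2.0}
print(weighted_vote(responses, weights))  # -> "Kim Carnes"
```

Of course, estimating those weights is itself a design problem, which is exactly the sort of question an experiment could probe.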

Second, does the nature of the UGC/crowdsourcing request (either implicitly or explicitly) change what the crowd does? For example, if you were surveying nurses, doctors or EMTs, would a simple manipulation (“life threatening” vs. “not life threatening”) change how contributors approached the task?