The case of Ulrich Lichtenthaler began slowly two years ago, with three papers retracted for “statistical irregularities.” Perhaps the stern reaction of Research Policy suggested there was something more serious than the official announcements implied, but many at that point still believed it was just an honest mistake.
At this point, with 19 retracted or withdrawn papers, it would be impossible to argue with a straight face that the errors were inadvertent or unintentional. Instead, we have a pattern of intentional and systematic fraud that is the most serious such case in innovation studies in the past 15 years.
When a company has repeated failures, its credibility (and viability) suffers accordingly. Companies that poison babies or sell cars that kill people lose their customers rapidly, and often go out of business. Passengers are understandably skittish about flying an airline that has lost two jumbos full of passengers in six months (even if the second crash was pure bad luck). Many people forget that a single plane crash apiece drove out of business two of the most storied brands in US aviation, Pan Am (1988) and TWA (1996), even though neither airline bore primary blame for its crash.
So how many retractions would it take for people to stop citing Ulrich Lichtenthaler? Apparently more than 20, if submissions to our upcoming open innovation conference are any indication. About one in seven (14%) of the submitted papers cite one or more Lichtenthaler papers.
Three of these submissions even cite a retracted paper: two cite a JET-M paper retracted in June, and a third cites the AMJ paper retracted last December. One submission cites 7 Lichtenthaler papers, another cites 5; both cite Lichtenthaler more often than Chesbrough, even though the latter has written more (and more highly cited) papers than Lichtenthaler. (Google Scholar shows that Chesbrough has 11 open innovation studies that are more cited than Lichtenthaler’s most-cited OI paper.)
As I've noted before, some of my friends say “I’ll never cite Lichtenthaler” while others say “As long as it’s not retracted, I’ll cite it if it makes a contribution”.
I think there’s enough evidence to deduce a pattern. We don’t need any more data points to update our Bayesian priors. Fool me twice, let alone sixteen times: shame on me.
The opinions expressed in this article are those of the author, and are not necessarily those of any website, conference, journal, book or special issue.