September 26, 2013

Clean OI story in Bath

Last week, I was fortunate to present the opening keynote at the Strategizing Open Innovation workshop at the University of Bath School of Management.

It was everything a small conference (~55 attendees) should be: a concentration of specialized expertise, a single track, plenty of time for discussing each paper, and a chance to meet every participant. In short, it was everything that the Academy of Management (with its 10,000 attendees and 12-minute presentations) is not.

It’s always interesting to learn what others are doing in their open innovation research. Letizia Mortara summarized the sizable ongoing research program at the Institute for Manufacturing at Cambridge, which focuses on how firms are actually using open innovation. With Tim Minshall, Mortara has authored a number of papers that provide important new insights into adoption, implementation and adaptation of open innovation by companies. IMHO, one of the most interesting is their chapter (Chapter 12) in the forthcoming Chesbrough-Vanhaverbeke-West OI book from Oxford.

Keynote speaker Teppo Felin (formerly of BYU, now of Oxford) linked OI to two bodies of work: his own work (with a famous purple dinosaur) on microfoundations, and the Nickerson-Zenger work on the problem as the unit of analysis, which led to his OI paper with Zenger (which the latter summarized last May in Atlanta). Ammon Salter (formerly of Imperial, now at Bath) presented an in-progress paper that links OI to Cyert & March, co-authored with Oliver Alexy (formerly of Imperial, now at TU München).

In addition to the keynote and my own paper, I was also tasked with summarizing the future of open innovation. Probably no one was surprised that I plugged both the 2014 Research Policy special issue (with Chesbrough, Salter and Vanhaverbeke) and the forthcoming Oxford book.

However, in my closing comments, I also sought to classify the research presented at the conference using three typologies:
  1. Three modes of open innovation (Gassmann & Enkel, 2004; Enkel et al., 2009): inbound (outside-in), outbound (inside-out) and coupled;
  2. Four-phase OI process model (West & Bogers, 2013): obtaining, integrating, commercializing (plus interaction); and
  3. OI adoption process (from the Mortara & Minshall chapter): antecedents to OI, implementation, results.
The pattern (i.e. the most popular alternatives) was pretty much as I would have predicted. The one encouraging exception was that there were three papers about results, i.e. papers that measure the outcomes of open innovation.

At the end, we were all grateful to the School of Management and our organizers, Felicia Fai & Anthony Roath, for putting on such a productive conference. (Most of us would also thank the Italian invaders for building such a durable Roman structure in Bath during the first millennium.)
Keynote speakers Letizia Mortara and Teppo Felin at the conference dinner reception. Photo by Joel West.

September 22, 2013

A few more shoes left to drop

Over the last year, the Lichtenthaler retraction scandal (and its ramifications for our field) has tended to come up as a conference mealtime discussion topic — at least when I'm at a conference with European innovation scholars. Last week’s open innovation workshop in Bath was no exception. However, unlike at most conferences, the topic also spilled into the open during a plenary discussion.

In my opening keynote, I had mentioned the opportunity for more outbound open innovation research, given that half of the Lichtenthaler retractions were on this topic. This was news to some people. Although the scandal is well known among German business academics and open innovation scholars, it turns out there were a few attendees who hadn’t heard about the 13 retracted articles by Ulrich Lichtenthaler and his former Habilitation supervisor Holger Ernst, nor the three articles that were accepted but withdrawn prior to online publication.

At the closing session, a few doctoral students and faculty asked about the new rules in the post-Lichtenthaler world. Here let me offer an assessment of what it means, and also some thoughts on what’s next (or what’s left).

Research Policy and its Standards

As it turns out, last month students at one doctoral consortium at the Academy meeting in Orlando read the June editorial by Research Policy editor Ben Martin. The abstract summarizes the problem facing the journal and innovation studies more broadly:
This extended editorial asks whether peer-review is continuing to operate effectively in policing research misconduct in the academic world. It explores the mounting problems encountered by editors of journals such as Research Policy (RP) in dealing with research misconduct. Misconduct can take a variety of forms. Among the most serious are plagiarism and data fabrication or falsification, although fortunately these still seem to be relatively rare. More common are problems involving redundant publication and self-plagiarism, where the boundary between acceptable behavior (attempting to exploit the results of one’s research as fully and widely as possible) and unacceptable behavior (in particular, misleading the reader as to the originality of one’s publications) is rather indistinct and open to interpretation. With the aid of a number of case-studies, this editorial tries to set out clearly where RP Editors regard that boundary as lying.
On the first page, Martin provides the broader context:
[W]e know the pressures of academic competition are rising, whether for tenure, research funds, promotion or status, which may mean that more researchers are tempted to cut corners… The use of performance indicators based on publications, citations, impact factors and the like may also be adding to the temptation to stray from previous conventions regarding what constitutes appropriate research behavior or to attempt to surreptitiously ‘stretch’ the boundary between appropriate and inappropriate behavior. …

There are worrying signs that research misconduct is on the increase. The number of retractions of published papers by journals has increased more than 10-fold in a single decade – from around 30 a year in the early 2000s to some 400 in 2011. … Moreover, the majority of retractions are seemingly the consequence of research misconduct rather than simple error.

With regard to the particular problem of self-plagiarism and related activities described below, the number of academic articles referring to ‘self-plagiarism’, ‘salami publishing’, ‘redundant publication’ or ‘duplicate publication’ has risen nearly five-fold from 170 in 2000 to 820 in 2012. More and more editorials are appearing in which journal editors complain about the growing burden being imposed on them as they attempt to detect, evaluate and sanction research misconduct in its various forms.
Martin noted that the journal faced only an “occasional” problem of research misconduct until 2007, when it stumbled across the scandal of a dozen or more plagiarized articles published by Hans Gottinger†. (The scandal was jointly investigated by Research Policy and Nature.)

After listing various retracted and withdrawn articles, the sixth page of Martin’s editorial refers to the Lichtenthaler case (emphasis mine):
More recently, an even more complicated case was brought to the attention of RP Editors by two individuals who independently had been asked to review papers by the same author (a professor at a European university) submitted to two other journals. They discovered that the author concerned had published an astonishing total of over 60 journal articles since 2004. Since this number was too great to handle, the two reviewers concentrated their attention on 15 articles published in leading journals over the period 2007–2010 (including three published in Research Policy), all of which formed part of a single stream of research emerging from a survey of over 100 firms in Europe that the author had conducted. They found that in these papers, similar analyses had been carried out with differing combinations from a large set of variables (sometimes relabeled, to add to the confusion) with no references to indicate the prior papers the author had already produced on the same broad theme. Moreover, in some cases, a given variable was treated as a dependent variable and in others as an independent variable. Perhaps more worryingly, variables that were demonstrated to be significant in some papers were then (presumably deliberately) omitted in the analysis reported in other papers. The author was asked for an explanation. This explanation was deemed unsatisfactory by the RP Editors, with the result that two of the RP papers[31] had to be formally retracted.

[Footnote 31: At a late stage in the investigation, it also became apparent that in one of these RP papers the degree of statistical significance of several of the claimed findings had been misreported or exaggerated. Whether this was simply the result of ‘accidental’ mistakes, as the author claimed, is unclear. However, the fact that similar problems have since been confirmed in several other papers by this author makes this less plausible as an explanation.]
The workshop participants asked what the new rules are for multiple publications from the same data. How does one avoid self-plagiarism and salami slicing? I think Martin and RP have spelled out more clearly than anyone else what these rules are. Drawing from his editorial, his public comments and my current experience as an RP guest editor, let me paraphrase them into two guidelines.

First, would a reasonable reviewer (or reader) conclude that this article deserves publication if all previous or parallel articles were visible at the same time? Second, does it appear that the authors have withheld from the editor (if not the blinded manuscript) full disclosure of all related work? As Martin concluded:
Failure to provide all pertinent information in the full version implies a premeditated attempt by the author(s) to deceive the journal as to the level of originality of the paper. As such, it represents grounds for the summary rejection of a paper.
† Martin’s editorial doesn’t mention the names of any transgressors. When I asked him about it, he said it was because some of the journal’s actions were public (i.e. retracted articles) and some were not (rejected articles); since he couldn’t list all the names, he decided to list none of them.

Further Sanctions

The Lichtenthaler story seems to be winding down. With his Habilitation and Lehrbefähigung withdrawn by WHU earlier this month, what’s left is the outcome of the investigation by Mannheim, his current employer. The university issued a brief press release in July; as with the WHU announcement, it was issued only in German, so here is my composite (computer-aided) translation:
Allegations of scientific misconduct against Professor Dr. Ulrich Lichtenthaler:
University examines the way forward
Press release of 31 July 2013

Now that the Permanent Commission for the Investigation of Allegations of Scientific Misconduct at the University of Mannheim has delivered its final report on the allegations of scientific misconduct against Prof. Dr. Ulrich Lichtenthaler, the university is considering the commission’s report.

"For legal reasons, in particular allowing for the right to fairness to Prof. Dr. Ulrich Lichtenthaler, I cannot yet make any statement about the contents of the 170 page report or any possible consequences" said the Rector of the University Mannheim, Prof. Dr. Ernst-Ludwig von Thadden. The report was given to Prof. Dr. Lichtenthaler. "The public has a legitimate interest in full disclosure of the allegations and will be informed of further steps by to the extent legally possible," said the rector of the university.

After the Rector of the University of Mannheim received allegations of scientific misconduct against Prof. Lichtenthaler in the summer of 2012, the university’s responsible commission of inquiry was immediately convened. Since 24 July 2012, the Commission’s members have devoted considerable effort to dealing with the allegations against Prof. Lichtenthaler. Among other things, the Commission commissioned external reports, interviewed those involved and performed its own extensive evaluations. By letter of 21 July 2013, the Commission sent its final report to the Rector. This completes the work of the Commission in accordance with No. 4.3 of the University of Mannheim’s guidelines for safeguarding good scientific practice. The Rector will now examine further steps.
Further Retractions

That’s not to say that the retractions are over. They are still trickling in; Martin notes the varying degrees of concern (if not integrity) shown by the editors of the affected journals:
In the case of the extensive self-plagiarism by the German author, other journals were slow to react when alerted to the problem, and in at least one case, the eventual retraction of an article by this author was justified rather vaguely in terms of ‘data problems’ rather than giving details of the specific form of misconduct involved.
According to someone who has read the various Lichtenthaler articles, five articles have problems comparable to those of the retracted articles, but are at journals that have not yet announced any decision regarding Lichtenthaler’s publications:
  • Lichtenthaler, Ulrich (2009). “Outbound open innovation and its effect on firm performance: examining environmental influences,” R & D Management, 39 (4): 317-330. DOI: 10.1111/j.1467-9310.2009.00561.x
  • Lichtenthaler, Ulrich (2010). “Organizing for external technology exploitation in diversified firms,” Journal of Business Research, 63 (11): 1245-1253. DOI: 10.1016/j.jbusres.2009.11.005
  • Lichtenthaler, Ulrich & Holger Ernst (2008). “Intermediary services in the markets for technology: Organizational antecedents and performance consequences,” Organization Studies, 29 (7): 1003-1035. DOI: 10.1177/0170840608090531
  • Lichtenthaler, Ulrich & Miriam Muethel (2012). “The role of deliberate and experiential learning in developing capabilities: Insights from technology licensing,” Journal of Engineering and Technology Management, 29 (2): 187-209. DOI: 10.1016/j.jengtecman.2011.10.001
  • Lichtenthaler, Ulrich & Miriam Muethel (2012). “The Impact of Family Involvement on Dynamic Innovation Capabilities: Evidence From German Manufacturing Firms,” Entrepreneurship Theory and Practice, 36 (6): 1235-1253. DOI: 10.1111/j.1540-6520.2012.00548.x
A sixth article at a top journal has been investigated, but the results (and any corrective action) have yet to be announced.

What happens after such retractions? It appears that the scientific field partially (but not entirely) self-corrects on retracted articles. As Jeff Furman, Kyle Jensen and Fiona Murray reported in their 2012 Research Policy study of retracted medical research:
Our core results [imply] that annual citations of an article drop by 65% following retraction, controlling for article age, calendar year, and a fixed article citation effect. … The effect of retraction does appear to be stronger in the most recent decade than in prior decades, although the large, statistically significant impact of retractions on future citations does not appear to be only induced by modern IT. The results … suggest that papers retracted between 1972 and 1999 experienced a 63% decline in citations after retraction, while those retracted since 2000 experienced a 69% decline in citations.
One would hope that in the future, after a retraction, any subsequent citations would rapidly decline to zero. Fortunately, online publication (unlike dusty library print collections) allows prominent marking of the retraction status of a previously published article.
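As an aside for readers who want to check an article’s status programmatically: the retraction and correction notices that publishers deposit are exposed through the public Crossref REST API. Below is a minimal sketch in Python (my own illustration, not something from the workshop or from Furman et al.); it assumes the API’s `updates` filter and `update-to` field behave as I understand them, so verify against the current Crossref documentation before relying on it.

```python
# Minimal sketch (my own illustration): ask Crossref whether any published notice
# (e.g. a retraction) updates a given DOI. The `updates` filter and `update-to`
# field are assumptions about the Crossref REST API; check the current API docs.
import json
import urllib.parse
import urllib.request

CROSSREF_WORKS = "https://api.crossref.org/works"

def find_update_notices(doi: str) -> list:
    """Return Crossref records (retractions, corrections, errata) that update `doi`."""
    query = urllib.parse.urlencode({"filter": f"updates:{doi}", "rows": "20"})
    with urllib.request.urlopen(f"{CROSSREF_WORKS}?{query}") as response:
        payload = json.load(response)
    return payload.get("message", {}).get("items", [])

if __name__ == "__main__":
    # Hypothetical usage with a placeholder DOI; substitute the article to check.
    for notice in find_update_notices("10.1000/example-doi"):
        for update in notice.get("update-to", []):
            print(update.get("type"), "of", update.get("DOI"), "- notice:", notice.get("DOI"))
```

Of course, a lookup like this is only as good as the metadata the publisher deposits, which is one more reason for journals to flag retractions promptly and unambiguously.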

References

Jeffrey L. Furman, Kyle Jensen, Fiona Murray, “Governing knowledge in the scientific community: Exploring the role of retractions in biomedicine,” Research Policy 41, 2 (March 2012): 276–290. doi: 10.1016/j.respol.2011.11.001

Ben Martin, “Whither research integrity? Plagiarism, self-plagiarism and coercive citation in an age of research assessment,” Research Policy 42, 6 (June 2013): 1005-1014. doi: 10.1016/j.respol.2013.03.011

September 19, 2013

Magic 'open' pixie dust

Today and tomorrow I'm at the University of Bath for a two-day workshop entitled “Strategizing Open Innovation.” The event was organized and hosted by the Strategy and Innovation Management Group within the School of Management, and included four keynotes and an open call for papers.

The first keynote came from California’s second most famous open innovation scholar. I have uploaded my slides to SlideShare, and will talk more about the content another time. Tomorrow, two participants from the June 2012 open innovation workshop in London — Teppo Felin (Oxford) and Letizia Mortara (Cambridge) — will offer their own insights into open innovation.

And then there was the second keynote this morning by Richard Whittington of Oxford, which was not about open innovation but about open strategy. It spurred a rather vigorous discussion.

When hearing the phrase “open strategy,” most readers would think of the 2007 Chesbrough and Appleyard paper in California Management Review which states:
If we are to make strategic sense of innovation communities, ecosystems, networks, and their implications for competitive advantage, we need a new approach to strategy—what we call “open strategy.”

Open strategy balances the tenets of traditional business strategy with the promise of open innovation. It embraces the benefits of openness as a means of expanding value creation for organizations. It places certain limits on traditional business models when those limits are necessary to foster greater adoption of an innovation approach. Open strategy also introduces new business models based on invention and coordination undertaken within a community of innovators. At the same time, though, open strategy is realistic about the need to sustain open innovation approaches over time. Sustaining a business model requires a means to capture a portion of the value created from innovation. Effective open strategy will balance value capture and value creation, instead of losing sight of value capture during the pursuit of innovation. Open strategy is an important approach for those who wish to lead through innovation.
The focus this morning was on openness not as an antecedent of open innovation, but as the property of an organization with permeable boundaries (open systems). In fact, during the discussion Felin cited Dick Scott’s Organizations: Rational, Natural and Open Systems (1981), while Whittington also cited as an influence Karl Popper’s The Open Society and Its Enemies (1945).

Many of the ideas about openness are ones I’ve published in the past, such as firms deliberately choosing openness (“How open is open enough?”) and selective degrees of openness (“…Open standards: Black, white and many shades of gray”). (The work of Joachim Henkel is also very relevant here.) This is not to claim “I said that first,” but merely to note that these elements have been out there before, and the author (as any author would) bears the obligation to demonstrate that this framework tells us something new and interesting.

However, I am a little concerned that this is yet another example of “open” being used as magic pixie dust which can be sprinkled on anything to make it special. Whittington explicitly disclaimed any suggestion that openness is necessarily better — as when Apple has lost its ability to keep product introductions secret due to supplier leaks. Still, ceteris paribus, calling something "open" strategy implies that the "open" approach is better than the "closed" one.

September 13, 2013

The de-habilitation of a serial salami slicer

Professor Ulrich Lichtenthaler had his Habilitation revoked by WHU, the university announced today.

Here is my composite translation of the German-language press release, based on the Google, Bing and Systran translations:
Allegations of dishonest academic practice against Professor Dr. Ulrich Lichtenthaler: Senate WHU decides to withdraw teaching qualification

Vallendar [Rhineland-Palatinate, Germany] 13 September 2013.

At its meeting on 11 September 2013, the Senate of the WHU - Otto Beisheim School of Management unanimously decided to withdraw the teaching qualification [Lehrbefähigung] that Professor Dr. Ulrich Lichtenthaler gained at WHU. The withdrawal was preceded by an intensive investigation into the allegations of scientific misconduct, with the goal of fully clarifying the matter.

After thorough examination and discussion, the Senate of WHU came to the conclusion that an essential condition for the granting of the teaching qualification was not met. Prof. Lichtenthaler may appeal the decision.

Course of the Procedure

After the Dean of WHU learned in summer 2012 of statistical defects and other scientific shortcomings in the work of Prof. Lichtenthaler, these were investigated in detail. The standing commission for safeguarding good scientific practice at WHU presented its final report to the Dean of WHU on June 13, 2013, after a thorough examination of the scientific works of Prof. Dr. Lichtenthaler. The report was the basis for the examination by the Senate, which began on June 20 and on September 11 led to the decision to withdraw the teaching qualification. The decisions are based on the principles and rules of procedure of WHU for handling scientific misconduct and the habilitation procedure.
For those of us outside Germany, Wikipedia helpfully explains that the Habilitation is a post-doctoral examination (in German-speaking Europe) that is the prerequisite for the Lehrbefähigung (teaching qualification). I don’t know what normally happens to a professor who used to have a Lehrbefähigung but no longer has one — since I imagine this doesn’t happen very often.

The outcome is a validation of the faith that many of us had that the system would eventually confront the serious charges here and not sweep them under the rug. It appears that the desire of the WHU faculty to distance themselves from their tainted alumnus outweighed any desire to cover up or explain away the problem.

The decision appears not to impact the PhD that Dr. Lichtenthaler earned at WHU. It is unclear what impact it will have upon the investigation by his current employer, University of Mannheim, and his chaired professorship. I’m told that it’s very difficult to fire or demote a German professor because of civil service rules.

I am concerned that administrative punishments might also short-circuit the remaining (and long overdue) investigations into some questionable papers that have been published. R&D Management has published five of his articles and retracted none, so it’s hard to imagine that his integrity batting average was 100% at this journal when it was 50% (or 0%) at other journals.

According to someone who closely examined Dr. Lichtenthaler’s entire publication output, five articles remain that have the same level of problems as the 12 retracted articles, including a 2009 article at R&D Management and articles at Organization Studies and Entrepreneurship Theory & Practice. However, there isn’t much transparency as to whether these journals are investigating these problem articles or plan to do so.

As someone co-editing a book and a special issue of a journal on open innovation, this uncertainty is a problem for our field. Of this prolific output, what can we cite and what can’t we cite? Some individuals are erring on the side of caution, but others are continuing to cite the non-retracted articles on the assumption that they are as valid as any other in that journal. What if these articles are retracted later? What if they are seriously flawed — to the point that they never should have been published — but are never retracted? What does this mean for the integrity of our field and the lessons that today’s doctoral students will draw for their own careers?

References

Lichtenthaler, Ulrich (2009). “Outbound open innovation and its effect on firm performance: examining environmental influences,” R & D Management, 39 (4): 317-330. DOI: 10.1111/j.1467-9310.2009.00561.x

Lichtenthaler, Ulrich & Holger Ernst (2008). “Intermediary services in the markets for technology: Organizational antecedents and performance consequences,” Organization Studies, 29 (7): 1003-1035. DOI: 10.1177/0170840608090531

Lichtenthaler, Ulrich & Miriam Muethel (2012). “The Impact of Family Involvement on Dynamic Innovation Capabilities: Evidence From German Manufacturing Firms,” Entrepreneurship Theory and Practice, 36 (6): 1235-1253. DOI: 10.1111/j.1540-6520.2012.00548.x