Organizations seem to be good at things, but how can we understand what it means for an organization to be good at things? Is it the people, or something between them? A review by Phin Upham of a seminal work.
As the title of their paper Measuring Competence? implies, Rebecca Henderson and Iain Cockburn attempt to measure and compare competence in a rigorous way. They have chosen pharmaceutical R&D, an area in which differential success has good proxies (patent awards, drug tests, etc.) and in which pharmaceutical companies have kept detailed and voluminous records. In this area, they believe, where the costs and consequences of different strategies and investment decisions are so crucial and so measurable, there will be general heterogeneity, highlighted in areas where different capabilities have been developed and become fundamental. The authors are frustrated by the lack of rigorous quantification in the resource-based view of strategy. They believe that others have not been as careful as they might have been to measure, test, and compare competence. Having chosen a field so amenable to quantification, they figure: if we cannot measure competence here, it cannot be measured practically anywhere. Thus, the authors set out to create an exemplar for the field of a truly careful, well thought out study based on very good evidence. Consequently, they spend much time discussing how they measure their variables, how they gather information, and how they treat their findings in order to yield results.
To begin with, they set out a few hypotheses with which they will work. They carefully elaborate the role "competence" must play in differential success: "it must be heterogeneously distributed within an industry; it must be impossible to buy or sell in the available factor markets at less than its true marginal value; it must be difficult or costly to replicate." (64) They also introduce a division in their competence analysis: "component competence," the day-to-day sort of problem-solving skills that companies develop, and "architectural competence," the ability to combine and use resources to create new competencies. For each sort of competence they try to uncover what the resultant outcomes would be, based both on the literature and on their own logic. They are assuming here, of course, that competence can be measured and seen as a function of one metric of success.
While this is a fair assumption, I wonder if competencies can also act to differentiate the kind of success one achieves rather than its actual amount. Each firm, with its own competencies, would be pursuing a different path (due to those different competencies) and thus achieve non-overlapping results, which would therefore be more valuable, in aggregate, than if all firms had pursued the same course with homogeneous capabilities. So, for example, suppose one company develops a competence in researching proteins and another develops competencies in researching RNA, each path necessitating different kinds of skills, routines, abilities, hiring practices, resource allocation procedures, and so on. Each company might make three discoveries, but they would be its own (in some ways this is reminiscent of Porter's view of strategy as differentiation, but with the idea of competence included as well). This would be "better" than each company coming up with three discoveries closely related to the other company's three discoveries. But the analysis of Henderson and Cockburn does not take this kind of consideration into account, instead developing one metric of "success" for each company, i.e., objective numbers by which they measure each company.
Henderson and Cockburn have chosen the pharmaceutical industry, as I mentioned before, for the clarity of their research objectives, the quality of their data, and the quantifiability of success. They have chosen it as an exemplar in which to do a truly careful and rigorous study to find and test competence, and thus they spend much of their time on methodology and research design. By carefully delineating and massaging the data, they obtain a large number of records that they feel will be useful. They construct an econometric model of the input/output of the R&D sector, stating their assumptions and explicating their logic at every step. One can imagine that this formula is not correct, but the logic given at least allows one to understand the rationale behind the authors' formula. Using qualitative and quantitative data, and setting up various variables such as the means of allocating firm resources (dictatorial or cooperative), the authors begin to flesh out the model. This model is a good example of careful construction and explicit logic. It is as carefully constructed as the information, which was very good, allows. But seeing the painstaking process of construction so explicitly, and the assumptions needed to build the model, it becomes clear exactly how much such models depend on judgment rather than data. If such a well documented and well researched model requires these sorts of assumptions to generate results and measure competence, it makes one wonder about the less rigorous models, with less complete information, constructed in other papers we read, even very interesting ones. By downplaying theory and focusing on measurement, this paper is a valuable lesson in how hard it is to quantify competence well.
The conclusion, that competence is an important variable in success, rests on an analysis of differential success based on firm heterogeneity. The results point to the need for more attempts to carefully analyze competence and to differentiate between different sorts of competence. Interestingly, the data suggested that small changes in organizational competence in the way research is managed had large effects on how the firms performed in this area. The essay successfully showed how difficult good quantified research into competencies at the firm level can be. It then builds what the authors see as a more careful model and enumerates significant results. The relative lack of theory in the paper, along with its attention to methodology, made it a bit less immediately engaging to read than some other papers, but it made up for this dryness by pointing to a major problem with firm-level capability research and doing a helpful job of beginning to correct it.
Phin Upham has a PhD in Applied Economics from the Wharton School (University of Pennsylvania). Phin is a Term Member of the Council on Foreign Relations. He can be reached at phin@phinupham.com.