A confusion about the nature of research: scholarship or discovery

Not all research is intended to advance humanity's knowledge, Star Trek style, by making bold voyages of discovery in search of new phenomena and natural laws. An awful lot of it is, and has to be, scholarship: reading and commenting on what has already been said on the subject is a necessary step to saying anything original and relevant.
Arguably in the humanities most research consists of such commentary rather than original creation, of increasing our understanding rather than our knowledge, and it is not surprising therefore that the humanities have a particular problem demonstrating the social value of their research. As I have noted previously, the humanities are basically concerned with understanding the human condition. They can rarely if ever contribute new findings (excepting events like the discovery and authentication of an unknown play by Shakespeare). They do not produce new bricks in the wall of human knowledge, but at most new interpretations of the contemporary human condition.
What the humanities generally do in their scholarship is go over and over the answers that have already been given to the same questions. That is surely valuable for the professors' self-understanding (just as assigning book reports to school-children makes sure they have read the books properly), but it has diminishing marginal returns for anyone else. What exactly is the point of publishing the 10,000th article on Shakespeare? I mean, what is the added value to a reader who could have read the other articles if she wanted to? Nevertheless that is what humanities professors must spend their time doing, since the incentive system is stark: publish or perish.
Strangely, although humanities professors must publish a lot these days, and their research is concerned with understanding the human condition - a subject of interest to us all - that research generally never reaches 'society'. This is because academics do not write for the public. In fact they typically get no credit, and are often even despised by their peers, if they publish a book intended for a general audience (especially if it proves popular). Instead they must publish in academic journals which are closed to the public in several ways. First, the style is peculiar and opaque to the non-expert reader, full of jargon and nuanced references to other papers published on the topic. Second, the journals themselves are nearly all literally closed to the public: although almost all the original research was publicly funded (through taxes), academic journals and their content are owned by a handful of private publishers who charge handsomely for access.
But at least in the sciences there is real research that matters? These are the people making the important medical discoveries and developing the new materials and technologies that will change our lives. Well, ok, not technological developments since the private sector is these days quite sufficiently competent and funded to pursue profitable uses of science. But at least for foundational science research and issues of public interest on topics and at a scale beyond the scope of private companies? Not as much as you might think.
Hot topics are not the same as important topics

Academia consists of communities of specialist scholars, such as medieval German historians or string theorists, characterised by different histories, interests, and methodologies. The idea of any particular university as a single community of scholars is therefore mistaken: most academics barely know (or approve of) what's going on in their own department and couldn't care less about what other people in their university are working on. Actual scholarly communities are united by their narrow specialisms and are global.
Academics research and publish on topics that others in their particular community think are interesting. They get rewarded for this with the respect of their peers, which they can translate into an academic career. But such hot topics may well lack any objective social value - i.e. they may fail to produce technical or theoretical knowledge of use or interest to those outside the academic community - because they are shaped and evaluated by the conventions of the academic community. Like a guild (which was the original meaning of 'university'), the community sets its own standards and trains apprentices (PhD students) to become journeymen (post-docs) and masters (tenured professors). Like the art industry, the producers of academic research are in the privileged position of telling their paying customers (i.e. the government) what is valuable and how much to spend on it (limited only by the customer's budget).
Have you ever wondered why economics didn't see the crisis coming? It's because almost no-one thought to ask the relevant questions - such as about the robustness of macro-economic models that didn't include financial institutions or money (!); about value-at-risk pricing and uncertainty in financial models; or even about whether house prices could rise forever. Partly that was due to a kind of group-think effect that made everyone quite sure about these issues because everyone else seemed sure (like Saddam's WMD in 2003). But it also demonstrates a failure of co-ordination - a market failure in the market of ideas, if you will - since the decentralised research system failed to direct sufficient attention to what seem rather basic and important topics. That reflects the relative lack of interest of economists in societally significant questions about the real economy when they could be exploring the intellectually challenging and mathematically elegant models of the ideal economy. Those were the research areas whose study was rewarded in the academic economy of attention and respect.
Publish-or-perish is stupid

The quantity of articles published, multiplied by the rank of the journal and the number of citations each one gets, is the basis for the modern academic career. It is the main metric used to monitor and control the typically rather independent academic at work, and also to evaluate and rank universities and even countries for their academic prowess. The problem is that this tool doesn't do very well at guaranteeing good quality research, let alone socially valuable research.
There are more and more academic publications appearing every year to meet the increased demand from increasing numbers of academics multiplied by the increasing publishing demands placed upon them by university and research grant administrators. [Hypocrisy Disclosure: I myself have helped found a new journal to meet this demand.] But most journal articles are never ever cited. By anyone. Their contribution is so marginal that to cite them would be a waste of computer ink. Most are never even read (except during the reviewing process). That means that around the world, millions of very highly educated people, generally paid or at least subsidised by general taxation, are sweating away to produce and write up pieces of research that are in fact absolutely pointless (though they pass peer review as minimally interesting and coherent). They have absolutely no impact on human knowledge as a whole because they are too conventional.
Such pieces generally get published in the lesser ranked journals, which have to make do with the scraps. The big beasts of academic publishing get first crack at the exciting research - the groundbreaking discoveries of autism-vaccine links and so forth. This is the kind of research that can receive thousands of citations and become a foundation for thinking in the subject. It can have an enormous influence not only on other academics, but also on society at large, via textbooks, commercial research labs, government policies, medical practice, etc.
Unfortunately, most exciting research is wrong.
The most likely reason for getting original and exciting results that seem to require a fundamental change in scientific understanding is that the researchers have made mistakes in their methodology. Imagine several research teams setting out independently to test an interesting but radical hypothesis. The research teams who correctly conclude that it is false do not get published (because the absence of an effect isn't news), but the research team that, through methodological mistakes, fluke or bias, concludes that there is a statistically significant effect will get call-backs from Nature or the BMJ. The authors succeed by luck rather than science, and outlier results are published as facts and become the basis for follow-up research (and research grants) on the new hot topic.
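This selection effect can be illustrated with a toy simulation (a sketch only, not a model of any particular study; the function names are my own). Many labs independently test a hypothesis that is in fact false; purely by chance, roughly one in twenty obtains a 'statistically significant' result at the conventional p < 0.05 threshold - and those are the results that get submitted.

```python
import math
import random
import statistics

def one_study(rng, n=30):
    """One lab tests an effect that does not exist: all n measurements
    are drawn from a null distribution with true mean zero."""
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]
    z = statistics.fmean(data) / (statistics.stdev(data) / math.sqrt(n))
    return abs(z) > 1.96  # 'significant' at roughly p < 0.05

def count_false_positives(n_teams=1000, seed=1):
    """How many of n_teams independent labs find a 'significant'
    effect purely by chance?"""
    rng = random.Random(seed)
    return sum(one_study(rng) for _ in range(n_teams))

# Around 5% of the teams get a publishable result despite there being
# nothing to find; the other ~95% quietly file their null results away.
```

If only the 'significant' minority reaches print, the published literature on the hypothesis consists almost entirely of flukes.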
The point is not that the scientific method doesn't work, just that progressive science - that tells us new and important things about the way the world works - is a constant struggle. Bacon had something right when he characterised science as like the courtroom interrogation of a hostile witness. It cannot be automated, whether by a fancy MRI machine or an academic bureaucracy, but requires the constant commitment and vigilance of people. And people make mistakes.
We know from the history of science that genuine breakthroughs are rare. It follows that many of the most exciting research findings that seem to promise great things for humanity will not stand up to scrutiny. And in fact that seems to be the case (for example in medical science). These are not 'bricks in the wall of knowledge'. In fact such research is a kind of anti-knowledge that gets mixed in with properly grounded knowledge and misdirects future research and the training of new researchers. It may continue to be cited for decades even after being disproved. It can even be dangerous in more applied areas like medical science, where doctors continue to recommend treatments, diets, and tests whose effectiveness has long since been falsified or was never properly established in the first place.
Why does such research get published? How does it pass the gold standard of peer-review? Unfortunately, peer-review is the 'gold-standard' for testing the quality of research because it is the best available standard, not because it is reliable in some absolute sense. After all, peer-review depends upon people, not some all-seeing eye. Peer-review doesn't retest the original experiment or look at the whole data-set. It only has the paper to work on, and all reviewers can do is ask "Does this make sense in light of what I know about this subject?".
Even this limited goal requires much to go right. The editors must select the right experts in the relevant field (they must be experts about experts). Those experts are volunteers and they are very busy (they have better things to do - namely publishing their own articles); they may not understand the paper but be embarrassed to admit it (not-real experts); they may be direct rivals of the author (vested interests). The author herself must be fundamentally honest, since it is incredibly difficult for anyone else to spot deliberately manipulated results. None of these can be taken for granted.
One might hope that post-publication review would fill this gap. It is true that bad research is eventually refuted (but very very rarely withdrawn by the original journal). But in the short to medium term one cannot rely on this.
First, modern research is complicated and expensive. The motto of the Royal Society, Nullius in Verba ("on no-one's word"), was at the heart of the classical scientific method. Scientists invited witnesses to their experiments, and then published an instruction manual together with their results so that anyone else could 'witness' it for themselves (post-publication review). But these days experiments involve more than a few test-tubes and a Bunsen burner: retesting a well-established result is expensive and time-consuming, and thus hard to justify to the department budget keeper (or the ethics committee, if lots of mice will need to be sacrificed!).
Second, as a recent BMJ editorial pointed out, journals are reluctant to publish criticisms of papers they have just published, and use various procedural means to forestall them (such as length and time restrictions). Other journals are reluctant to publish critiques of papers they didn't publish. The editorial also notes that the authors of criticised papers rarely bother to respond, and even then rarely engage with the actual criticism. (NB Despite that strident editorial, the BMJ has been accused of continuing to evade publication of critical papers.) Publication, it seems, is not the space in which ideas get debated and assessed on their merits. Which leads one to ask, where does that happen?
Third, and perhaps saddest, the mountain of poorly focused, low-quality research dumped on the heads of researchers every year seems to have had a numbing effect. No one really cares anymore about the quality (truth) of published papers, but at most whether they have a nice result you can cherry-pick and stick in your own paper.
'Publish or perish' displaces the primary concern and responsibility of scientific research: to extend humanity's knowledge of the way the world works in order to satisfy our curiosity and solve our problems. What we have instead is a collection of individual academics competing with each other to build careers, within a system where success is measured and defined by a status ranking system of magazine articles.
Incentives distort

Excessive incentives to publish also undermine the character and integrity of the academic community and its members [previously]. When academics are told to publish or perish, they will publish a lot but it won't be very good. They'll take short-cuts with their methodology; they'll spin out one set of results into a dozen papers to maximise their pay-off; they'll report the one time the experiment worked and not mention all the failures; and so on. They will also likely be biased in getting the results they need.
When researchers want or expect to get certain results, it is all too easy to achieve them even if they do not deliberately - consciously - cheat. At every stage in the research process from the setting of initial questions to the methodological set-up to the measurement and reporting of results the researcher has to make choices that require expert judgement and have an enormous influence on how strong and exciting - how publishable - their results appear. That expert judgement can easily be distorted by the weight of excessive incentives, and the cumulative effect is a significant bias in what the experiment 'found'.
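A rough feel for how much these choices matter comes from another toy simulation (again a sketch under simplified assumptions, with illustrative function names of my own): a lab with no true effect measures ten different outcomes and writes up whichever one happens to come out 'significant'. Even without conscious cheating, the false-positive rate climbs from roughly 5% to around 1 - 0.95^10, i.e. about 40%.

```python
import math
import random
import statistics

def flexible_study(rng, n=30, n_outcomes=10):
    """A lab with no real effect measures n_outcomes variables and
    reports whichever one comes out 'significant' - a crude model of
    cherry-picking among researcher degrees of freedom."""
    for _ in range(n_outcomes):
        data = [rng.gauss(0.0, 1.0) for _ in range(n)]
        z = statistics.fmean(data) / (statistics.stdev(data) / math.sqrt(n))
        if abs(z) > 1.96:
            return True  # write up this outcome; quietly drop the rest
    return False

def false_positive_rate(n_labs=2000, seed=7):
    """Fraction of labs that find something 'publishable' by chance."""
    rng = random.Random(seed)
    return sum(flexible_study(rng) for _ in range(n_labs)) / n_labs

# With ten outcomes per lab, roughly two labs in five report a
# 'significant' finding where nothing exists.
```

No single choice in this sketch looks dishonest; the bias emerges from which of many defensible analyses gets reported.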
The official methodology of science and the precise quantitative results it produces make science seem hygienic and automatic. But the actual practice of scientific research is uncomfortably close to the making of sausages: all sorts of things get put into the mixture that really shouldn't be there (such as prior beliefs and competition for academic status); and the functioning of the rickety sausage 'machine' actually depends on a scientist clearing blockages with a broomstick as he desperately tries to convert his misshapen results into an objectively meaningful and significant form. By the time the finished product is on our plate it is rather difficult to investigate its origins.
The significance of bias also indicates that the obvious solution to the problem of bad research - a closer alignment of researchers' incentives with society's interests - won't work. Indeed, wherever that approach to truth-seeking has been tried the results have always been strikingly convenient rather than trustworthy. The biomedical sciences for example - particularly awash with high-rolling vested interests - are notorious for producing research that favours whoever paid for the study (sometimes in journals bought and paid for by Big Pharma).
Conclusion

Some of the problems I identified could be substantially mitigated with a bit of will-power and co-ordination from the general academic community and its stakeholders. For example, the present institutional incentives to publish exciting but flawed research and not to retest it. But there is a structural problem at the heart of the present university research model which seems irresolvable: academics just aren't interested in the things society is interested in.
The academic community is in control of the research system. Its primary interest is pursuing what it considers to be intellectually interesting and prestigious research questions. Its secondary interest is in reproducing itself into the future. Neither of these directly concerns 'helping society'. This is what leads to the 10,000th article on Shakespeare and excessive attention to intellectually exciting puzzles rather than less prestigious research on mundane (boring) diseases, etc.
The main solution attempted has in many ways made things worse. Those dispensing research funds, generally governments, perceived a classic principal-agent problem and took steps to better align the incentives of the agent (the academic) with those of the principal (society). 'Publish or perish' was therefore established by university administrations and government agencies in an attempt to monitor academics' productivity objectively and also to try to steer research funding to socially valuable questions and competent researchers. But this has created new distortions that may undermine the virtues of the self-regulating academic community without bringing the benefits of more effective and efficient research, not least because the control system still depends on academic experts (those with the most citations) for information about what research is important!
This conflict of interest is irresolvable unless dissolved. One approach, increasingly widely adopted, is to make university research less academic by setting up research in such a way that the payer calls all the shots. Research here takes place in the context of a particular project, focussing on a narrow question over a limited time-frame with a research team brought together specifically for that purpose. For example, to develop a practical and inexpensive test for a molecule associated with early stage Alzheimer's, or, on a larger scale, a vaccine for malaria suitable for mass distribution in the poor world. Although such research still goes on at universities, it is almost indistinguishable from the style of research in private industry. (One exception to this - for now - is that researchers with an academic identity still tend to want to publish-first everything they find, while those at more corporate institutions are trained to patent-first anything that might be commercially viable.) Indeed there is increasing cross-over, with academics moving between projects at universities and private companies, and governments hiring private labs for non-commercial projects and vice versa. Academics here become regular employees whose work is assessed (continually and exhaustively) on their immediate results.
A rather different approach is to reduce the external demands on academics. Society in this model should be more modest in its expectations about the social value of academic research, especially its immediate pay-off, and simply let the academics get on with the questions they think are interesting, protected from interference. The Large Hadron Collider is an example of this approach in action - it is the most expensive scientific instrument ever built ($9 billion) and its sole purpose is to answer (hopefully) some fundamental questions in the theoretical physics community that have proved irresolvable for several decades. It has no direct 'social value'.
Nevertheless, in general it is unlikely that society would continue to pour nearly so much money into academic research as it presently does if research were recharacterised in this humbler form (after all, the LHC itself is arguably a hold-over from Cold War politics: "My physics programme is bigger than yours!"). Academics would no longer get away with claiming that their work is socially valuable and so deserving of enormous grants (that could have been spent on health-care, old-age pensions, or other social needs), while at the same time disclaiming any responsibility to work on the subjects the general society considers important. The massive complacency of the tenured classes behind sweeping claims that "What's good for academia is good for the country" would need to be deflated and academics kindly but firmly helped to achieve the humility that is in keeping with their true importance.
(An aside. It is all too easy to confuse the specialisation aspect of human capital - the fact that no one else has the ability to do what you do - with the actual value to society of what it is that only you can do. Anyone can become a sanitation worker, but becoming a professor takes special talents and decades of specialist training. No wonder rubbish collectors and sewer inspectors receive little reward in dignity or income, and every middle-class parent dreams of bringing up little professors instead. Yet which are more important? Without university professors our civilisation would be intellectually poorer, but without sanitation workers it would be swept away by a wave of epidemics.)
The point of this approach is to celebrate the true importance of academia as an independent, truth-oriented institution that stands outside and beyond public opinion, and even democratic politics. To respect the values of academic enquiry as a specialised and disaggregated but collective and co-operative enterprise and the deep expertise, commitment, and integrity of the individual academic researcher. Academic research is here considered intrinsically valuable as a set of practices that society has reason to value, like art or music (and thus we should think of universities as cultural institutions rather than economic engines - more like the British Museum than British Petroleum).
History provides some justification to expect that such research practices may produce knowledge of great benefit to society, albeit not in predictable ways amenable to impact studies by research grant committees. It is the very fact that academics' interests are orthogonal to what society generally considers important that makes such surprises possible. After all, the job of academics is to poke about in unconventional places with an unconventionally pedantic attention to detail. It is also behind one of the most valuable qualities of academic communities: for all their internal conventionality they have an ability to turn social conventions on their head in world-changing ways, whether through overturning the miasma account of infectious disease (John Snow) or demonstrating that modern famines are generally caused by political failure, not by lack of food (Amartya Sen).