Author: Trisha Greenhalgh

Classic Ethnographies

Trisha Greenhalgh discusses classic ethnography texts and their applicability to health services research.

Ethnography is, traditionally, the remit of anthropologists. Think mud huts and pith helmets: months or years living among the ‘natives’, immersed in the strangeness of a distant culture and producing what Clifford Geertz called thick description [1]. More recently, ethnography has been appropriated by both management academics and health services researchers to study the culture of organisations [2-4].

In general, doctors are not trained in anthropology – or indeed in other interpretive methods. The methodology has become distorted and over-rationalized by medically trained researchers whose criteria for excellence have been appropriated from the randomized controlled trial. They may wrongly equate rigour with the use of structured checklists of what to observe, how, and for how long – or with having two researchers make independent observations of the same phenomenon on the assumption that both should come out of the experience with the same set of ‘facts’. In reality, ethnography draws extensively on subjective methods, and rigour is as much about reflexivity (self-questioning) and criticality (considering alternative explanations) as about accuracy or reproducibility of measurement [5].

Image Credit: Fotos GOVBA, Flickr


Explaining Risk: Know Your Aristotle

I was shocked to discover recently that a professor colleague had not grasped the difference between absolute and relative risk. He was excited by an intervention that reduced the risk of a primary outcome by 50%. He didn’t know (and hadn’t thought to ask) what the absolute risk of that outcome was. And he seemed unimpressed when I explained that a 50% relative risk reduction might mean going from 80% to 40% or from 0.02% to 0.01%.
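
To see why that matters, note that relative risk reduction divides the risk difference by the baseline risk, while absolute risk reduction is simply the difference. A minimal sketch in Python (the function names are mine; the numbers are the ones from the anecdote above) makes the contrast concrete:

    def relative_risk_reduction(baseline, treated):
        # (baseline - treated) / baseline: the "50% reduction" headline figure
        return (baseline - treated) / baseline

    def absolute_risk_reduction(baseline, treated):
        # baseline - treated: the figure that matters to an individual patient
        return baseline - treated

    for p0, p1 in [(0.80, 0.40), (0.0002, 0.0001)]:
        print(f"baseline {p0:.2%} -> treated {p1:.2%}: "
              f"RRR = {relative_risk_reduction(p0, p1):.0%}, "
              f"ARR = {absolute_risk_reduction(p0, p1):.2%}")

    # Both scenarios print RRR = 50%; the absolute reductions are
    # 40.00% versus 0.01%.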

Image Credit: Aromick, Wikimedia

I asked my Twitter friends to suggest links to help me convey the point. Several offered links to the Oxford-based EBM resource Bandolier or the Canadian Medical Association Journal series. Others suggested sources intended for lay people such as the Patient UK website or books like Smart Health Choices or Testing Treatments.

I found all these sources useful for reminding me of the formulae, but of limited value as teaching resources. If you’re not very numerate, the instruction to posit a numerator and a denominator and then divide one by the other simply won’t inspire – even when fictitious characters are introduced (“Pat’s 10-year risk of stroke is…”).

Psychologists have produced evidence from cognitive experiments on how to communicate risk, which David Spiegelhalter has summarised. He points out, for example, that many people are confused by terms like “chance” or the use of percentages, so the use of natural frequencies (such as “one in ten”) is better.  He also reminds us that different framings of these frequencies – positive (“glass half full”) versus negative (“glass half empty”) – will lead to different perceptions of benefits and harms.
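
By way of illustration only (this snippet is my own sketch, not Spiegelhalter’s), converting a probability into a natural frequency and stating it in both framings takes a few lines:

    def natural_frequency(prob, denominator=100):
        # Express a probability as "k in N" rather than as a percentage
        k = round(prob * denominator)
        return f"{k} in {denominator}"

    def both_framings(prob, denominator=100):
        # The same number, framed negatively and positively
        harmed = round(prob * denominator)
        return (f"{harmed} in {denominator} will be harmed",
                f"{denominator - harmed} in {denominator} will come to no harm")

    print(natural_frequency(0.10))   # '10 in 100', not '10%'
    print(both_framings(0.10))       # glass half empty versus glass half full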

Whilst knowing about cognitive biases was helpful, I wanted a teaching byte that connected with the affective component of learning (“why should I care about this?”) as well as with the cognitive component (“what do I need to learn?”).  Interactive websites where changes in absolute risk are expressed as coloured blobs or smiley/sad faces help convey dull formulae through drill and practice. But first you’ve got to get the learner to the point where they reach out and grasp the mouse.

This example in five tweets from David Eyles did it for me: (1) Huge trial with tiny effect size: 3 patients die out of 100 with condition. New drug reduced this to 2. RR = 33% better. (2) AR gives 1% better. RR used by Pharma to sell drug using “33% improvement” statistic. Drug gets given to 100 patients to help one. (3) Patient says “Why take drug for rest of life if chances of help are only 1%? What are side effects?” (4) Side effects are weight gain and high BP with AR of 20%. Risk of death from these now higher than benefit. (5) RR useful to Pharma. AR useful and understood by patient.
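
To unpack the arithmetic in those tweets: the number needed to treat (NNT) is the reciprocal of the absolute risk reduction, which is why a drug that is “33% better” in relative terms must be given to 100 patients to help one. A sketch using David’s figures:

    baseline, treated = 0.03, 0.02  # 3 deaths per 100 falls to 2 per 100

    rrr = (baseline - treated) / baseline  # relative risk reduction
    arr = baseline - treated               # absolute risk reduction
    nnt = 1 / arr                          # number needed to treat

    print(f"RRR = {rrr:.0%}")  # 33%: the figure Pharma quotes
    print(f"ARR = {arr:.0%}")  # 1%: the figure the patient needs
    print(f"NNT = {nnt:.0f}")  # treat 100 patients to help one

    # Against a 1% absolute chance of benefit, a 20% absolute risk of side
    # effects (weight gain, raised blood pressure) dominates the decision.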

David’s example was based on James Penston’s book ‘Fiction and fantasy in medical research: The large scale randomised controlled trial’. The story-fragment has appeal because he has turned the teaching byte into a narrative with two key characters: a villain (‘Pharma’) who seeks to maximise profit at the expense of a victim (the patient), and who deliberately presents the facts in a distorted way (that is, the villain has a dastardly motive).

Plato and Aristotle discuss the difference between absolute and relative risk. Image Source: Wikimedia

As Aristotle observed, all stories involve ‘trouble’ – in this case, the possibility that the innocent patient (a classic underdog, in literary terms) may be harmed. The question “Why take drug for rest of life…?” is a clever rhetorical device which encourages the learner to climb into the skin of the potential victim and contemplate the impending trouble from his or her perspective. Suddenly, much is at stake here: it matters what the numerator, denominator and missing values are in the equations.

In sum, the use of the story form – with evil villains, powerless victims, trouble and things at stake – is a powerful tool for engaging learners. Ben Goldacre presents the same villain–victim dyad in the comic melodrama genre. Under the heading “You are 80% less likely to die from a meteor landing on your head if you wear a bicycle helmet all day”, he depicts Pharma (‘corporate whoredom’) as bent on persuading the rest of us to spend money unnecessarily on medicines whose potential to reduce our absolute risk of serious trouble is vanishingly tiny. But as Goldacre reminds us in another skilful trope, “We’re all suckers for a big number.”


Less research is needed


Guest blogger Trish Greenhalgh suggests it’s time for less research and more thinking.

The most over-used and under-analyzed statement in the academic vocabulary is surely “more research is needed”.  These four words, occasionally justified when they appear as the last sentence in a Masters dissertation, are as often to be found as the coda for a mega-trial that consumed the lion’s share of a national research budget, or that of a Cochrane review which began with dozens or even hundreds of primary studies and progressively excluded most of them on the grounds that they were “methodologically flawed”. Yet however large the trial or however comprehensive the review, the answer always seems to lie just around the next empirical corner.

With due respect to all those who have used “more research is needed” to sum up months or years of their own work on a topic, this ultimate academic cliché is usually an indicator that serious scholarly thinking on the topic has ceased. It is almost never the only logical conclusion that can be drawn from a set of negative, ambiguous, incomplete or contradictory data.

Recall the classic cartoon sketch from your childhood. Kitty-cat, who seeks to trap little bird Tweety Pie, tries to fly through the air.  After a pregnant mid-air pause reflecting the cartoon laws of physics, he falls to the ground and lies with eyes askew and stars circling round his silly head, to the evident amusement of his prey. But next frame, we see Kitty-cat launching himself into the air from an even greater height.  “More attempts at flight are needed”, he implicitly concludes.

Image Credit: breahn, Flickr

On my first day in (laboratory) research, I was told that if there is a genuine and important phenomenon to be detected, it will become evident after taking no more than six readings from the instrument.  If after ten readings, my supervisor warned, your data have not reached statistical significance, you should [a] ask a different question; [b] design a radically different study; or [c] change the assumptions on which your hypothesis was based.

In health services research, we often seem to take the opposite view. We hold our assumptions to be self-evident. We consider our methodological hierarchy and quality criteria unassailable. And we define the research priorities of tomorrow by extrapolating uncritically from those of yesteryear.  Furthermore, this intellectual rigidity is formalized and ossified by research networks, funding bodies, publishers and the increasingly technocratic system of academic peer review.

Here is a quote from a typical genome-wide association study:

“Genome-wide association (GWA) studies on coronary artery disease (CAD) have been very successful, identifying a total of 32 susceptibility loci so far. Although these loci have provided valuable insights into the etiology of CAD, their cumulative effect explains surprisingly little of the total CAD heritability.”  [1]

The authors conclude that not only is more research needed into the genomic loci putatively linked to coronary artery disease, but that – precisely because the model they developed was so weak – further sets of variables (“genetic, epigenetic, transcriptomic, proteomic, metabolic and intermediate outcome variables”) should be added to it. By adding in more and more sets of variables, the authors suggest, we will progressively and substantially reduce the uncertainty about the multiple and complex gene-environment interactions that lead to coronary artery disease.

If the Kitty-cat analogy seems inappropriate to illustrate the flaws in this line of reasoning, let me offer another parallel. We predict tomorrow’s weather, more or less accurately, by measuring dynamic trends in today’s air temperature, wind speed, humidity, barometric pressure and a host of other meteorological variables. But when we try to predict what the weather will be next month, the accuracy of our prediction falls to little better than random. Perhaps we should spend huge sums of money on a more sophisticated weather-prediction model, incorporating the tides on the seas of Mars and the flutter of butterflies’ wings? Of course we shouldn’t. Not only would such a hyper-inclusive model fail to improve the accuracy of our predictions; there are good statistical and operational reasons why it could well make them less accurate.

Whereas in the past, any observer could tell that an experiment had not ‘worked’, the knowledge generated by today’s multi-variable mega-studies remains opaque until months or years of analysis have rendered the findings – apparently at least – accessible and meaningful. This kind of research typically requires input from many vested interests: industry, policymakers, academic groupings and patient interest groups, all of whom have different reasons to invest hope in the outcome of the study. As Nik Brown has argued, debates around such complex and expensive research seem increasingly to be framed not by régimes of truth (what people know or claim to know) but by ‘régimes of hope’ (speculative predictions about what the world will be like once the desired knowledge is finally obtained). Lack of hard evidence to support the original hypothesis gets reframed as evidence that investment efforts need to be redoubled [2]. And so, instead of concluding that less research is needed, we collude with other interest groups to argue that tomorrow’s research investments should be pitched into precisely the same patch of long grass as yesterday’s.

Here are some intellectual fallacies based on the more-research-is-needed assumption (I am sure readers will use the comments box to add more examples).

  1. Despite dozens of randomized controlled trials of self-efficacy training (the ‘expert patient’ intervention) in chronic illness, most people (especially those with low socio-economic status and/or low health literacy) still do not self-manage their condition effectively. Therefore we need more randomized trials of self-efficacy training.
  2. Despite conflicting interpretations (based largely on the values attached to benefits versus those attached to harms) of the numerous large, population-wide breast cancer screening studies undertaken to date, we need more large, population-wide breast cancer screening studies.
  3. Despite the almost complete absence of ‘complex interventions’ for which a clinically as well as statistically significant effect size has been demonstrated and which have proved both transferable and affordable in the real world, the randomized controlled trial of the ‘complex intervention’ (as defined, for example, by the UK Medical Research Council [3]) should remain the gold standard when researching complex psychological, social and organizational influences on health outcomes.
  4. Despite consistent and repeated evidence that electronic patient record systems can be expensive, resource-hungry, failure-prone and unfit for purpose, we need more studies to ‘prove’ what we know to be the case: that replacing paper with technology will inevitably save money, improve health outcomes, assure safety and empower staff and patients.

Last year, Rodger Kessler and Russ Glasgow published a paper arguing for a ten-year moratorium on randomized controlled trials on the grounds that it was time to think smarter about the kind of research we need and the kind of study designs that are appropriate for different kinds of question [4]. I think we need to extend this moratorium substantially. For every paper that concludes “more research is needed”, funding for related studies should immediately cease until researchers can answer a question modeled on this one: “Why should we continue to fund Kitty-cat’s attempts at flight?”

This blog was informed by contributions to my Twitter page @trishgreenhalgh.

Trish Greenhalgh is Professor of Primary Health Care at Barts and the London School of Medicine and Dentistry, London, UK, and also a general practitioner in north London.

[1] Prins BP, Lagou V, Asselbergs FW, Snieder H, & Fu J (2012). Genetics of coronary artery disease: Genome-wide association studies and beyond. Atherosclerosis. PMID: 22698794

[2] Brown N (2007). Shifting Tenses: Reconnecting Regimes of Truth and Hope. Configurations. DOI: 10.1353/con.2007.0019

[3] Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M, & Medical Research Council Guidance (2008). Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ, 337:a1655. PMID: 18824488

[4] Kessler R, & Glasgow RE (2011). A proposal to speed translation of healthcare research into practice: dramatic change is needed. American Journal of Preventive Medicine, 40(6):637-644. PMID: 21565657


Have management papers ever changed practice in healthcare?

Guest blogger Trish Greenhalgh takes on a Twitter challenge.

Sir Muir Gray, of evidence-based medicine fame, is a man who speaks his mind – often in 140 characters or fewer. “Show me a paper by a management academic,” he Tweeted, “that has changed the way we deliver health services” [and, implicitly, improved patient outcomes].

Part of me agreed with him, but I’m married to a management academic (“Oops sorry, better man than me,” Muir backpedalled), who helped me rise to Muir’s challenge.

We kicked off with a paper almost every clinician has heard of:

Image Credit: Julie Rybarczyk

Kaplan and Norton’s ‘balanced scorecard’, published in Harvard Business Review in 1992 and cited over 8000 times since [1]. The scorecard was aimed at company directors who wanted some quick (and, one is tempted to suggest, dirty) metrics to monitor what their customers thought of them and where they should direct their efforts for the future. It has certainly changed practice (many healthcare organisations use it), but we were not overly sold on its transferability to the healthcare setting.

In danger of winning the point but losing the principle, we tried to think of papers in management journals (which consist mainly of studies undertaken on US private-sector, product-oriented firms) whose findings had been applied to public sector, service-oriented organisations in the UK in a way that improved patient-relevant outcomes. We pretty much drew a blank.

One paper – Ramiller and Pentland’s critique of ‘variables-centred’ organisational research [2] – gave a clue as to why. Abstracted variance models aimed at producing generalisable truths about how organisations behave may appear scientific and rational (and promise findings that could be ‘rolled out’ to new settings), but in reality may have limited value since they divert the focus away from people taking action. These authors argue for a case study approach to complex change, in which human actors and action remain in frame, and the link between ‘input’ and ‘outcome’ is made using here-and-now narrative rather than abstracted, logico-deductive reasoning.

Talking of the narrative form in organisational research, there are a number of classics in this genre, including:

  • Weick on sensemaking. Staff need to make collective sense of organisational life; encouraging this sensemaking process is key to successful change efforts [3].

  • Tsoukas [4] and Brown and Duguid [5] on organisational knowledge. Knowledge is embodied, socially developed and – to use a metaphor originally coined by Wittgenstein – “rides along the rails laid down by shared practice”. This view of knowledge has been applied by Gabbay and le May in their brilliant work on ‘mindlines’ among health professionals [6].

  • Van de Ven on the longitudinal case study method for organisational innovation [7]. However carefully you plan, innovation in healthcare organisations is invariably a messy, non-linear process that takes years rather than months and is characterised by shocks and setbacks. Again, don’t expect to document predictable and reproducible links between inputs and outcomes. My team’s systematic review of diffusion of innovations in healthcare drew heavily on Van de Ven’s empirical studies [8].

  • Feldman and Pentland on organisational routines [9]. Routines are recurring patterns of interpersonal interaction that confer stability in an organisation but which also offer scope for change (when human actors choose to enact the routine differently). My team used this approach to surface the sophisticated ‘hidden work’ of receptionists in assuring medication safety in healthcare [10].

Incidentally, for a feisty argument over whether ‘variables-centred’ or ‘actor-centred’ paradigms are more robust, see Pfeffer’s Academy of Management Annual lecture from 1993 [11] and Van Maanen’s insouciant response [12].

We found many papers we wished had changed practice but probably hadn’t. For example:

  • Fulop’s team showed pretty decisively that hospital mergers don’t save money [13].
  • Currie and Guah predicted (accurately) the failure of England’s ill-fated £12.7 billion National Programme for Information Technology if policymakers continued to ignore stakeholders’ conflicting institutional baggage [14].

Image Credit: Adrian Boliston

Do healthcare policymakers take any notice of academic papers which warn that current approaches are unwise? My team didn’t think so. We drew on Tsoukas’ model of organisational knowledge to explain why [15].

A number of management papers emphasised the complex and context-bound nature of organisational phenomena. For example:

  • Hawe and colleagues theorised complex interventions as events in complex systems [16].
  • Lanham et al considered healthcare teams as complex systems and quality as an emergent property of those systems [17].
  • Bate and colleagues looked at social movements as a force for change [18]. These movements – from feminism to the Arab Spring – work by linking an emerging identity (being part of the movement says something about who we are) with collective action (movements organise and do things). But they are inherently non-linear and cannot be ‘controlled’.

The topic of leadership is done to death in healthcare journals but most management academics have little interest in it, perhaps because it’s an example of a variable that has been abstracted from the person who has it! But one paper – on the subtle approach of ‘tempered radicalism’ by Meyerson and Scully – made it onto our list [19].

I’ve been avoiding Muir Gray recently. Whilst the exercise of attempting to “find a paper by a management academic that had changed practice and benefited patients” produced many insights into why organisational change in healthcare is difficult and unpredictable, the links between these papers and hard outcomes in healthcare were usually tenuous. If I were being pedantic, I would suggest that this is because Muir’s question implies a deterministic link between inputs (academic papers) and outcomes (patient benefits) whereas most of the literature listed above is theoretically incommensurable with such a link. But I suspect I should concede defeat and go buy him a drink. Or at least, give his book – on how to get it right when building healthcare systems – a gentle plug [20].

Acknowledgment: This blog is based on a discussion on Twitter and includes various papers suggested by my followers.

Trish Greenhalgh is Professor of Primary Health Care at Barts and the London School of Medicine and Dentistry, London, UK, and also a general practitioner in north London.

1.         Kaplan RS, Norton DP: The balanced scorecard – measures that drive performance. Harvard Business Review 1992, 70(1):71-79.

2.         Ramiller N, Pentland B: Management implications in information systems research: the untold story. Journal of the Association for Information Systems 2009, 10(6):474-494.

3.         Weick KE: Sensemaking in organizations. Thousand Oaks, CA: Sage; 1995.

4.         Tsoukas H, Vladimirou E: What is organizational knowledge? Journal of Management Studies 2001, 38(7):973-993.

5.         Brown JS, Duguid P: Knowledge and organization: A social practice perspective. Organization Science 2001, 12(2):198-213.

6.         Gabbay J, le May A: Evidence based guidelines or collectively constructed “mindlines?” Ethnographic study of knowledge management in primary care. BMJ 2004, 329(7473):1013.

7.         Van de Ven AH: Central problems in the management of innovation. Management Science 1986, 32(5):590-607.

8.         Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O: Diffusion of innovations in service organisations: systematic literature review and recommendations for future research. Milbank Q 2004, 82(4):581-629.

9.         Feldman MS, Pentland BT: Reconceptualizing organizational routines as a source of flexibility and change. Administrative Science Quarterly 2003, 48:94-118.

10.       Swinglehurst D, Greenhalgh T, Russell J, Myall M: Receptionist input to quality and safety in repeat prescribing in UK general practice: ethnographic case study. BMJ 2011, 343:d6788.

11.       Pfeffer J: Barriers to the advance of organizational science: paradigm development as a dependent variable Academy of Management Review 1993, 18(4):599-620.

12.       Van Maanen J: Style as Theory. Organization Science 1995, 6(1):133-143.

13.       Fulop N, Protopsaltis G, Hutchings A, King A, Allen P, Normand C, Walters R: Process and impact of mergers of NHS trusts: multicentre case study and management cost analysis. BMJ 2002, 325(7358):246.

14.       Currie WL, Guah MW: Conflicting institutional logics: a national programme for IT in the organisational field of healthcare. Journal of Information Technology 2007, 22:235-247.

15.       Greenhalgh T, Russell J, Ashcroft RE, Parsons W: Why National eHealth Programs Need Dead Philosophers: Wittgensteinian Reflections on Policymakers’ Reluctance to Learn from History. Milbank Q 2011, 89(4):533-563.

16.       Hawe P, Shiell A, Riley T: Theorising interventions as events in systems. American journal of community psychology 2009, 43(3-4):267-276.

17.       Lanham HJ, McDaniel RR Jr, Crabtree BF, Miller WL, Stange KC, Tallia AF, Nutting P: How improving practice relationships among clinicians and nonclinicians can improve quality in primary care. Joint Commission Journal on Quality and Patient Safety 2009, 35(9):457-466.

18.       Bate P, Robert G, Bevan H: The next phase of healthcare improvement: what can we learn from social movements? Quality & Safety in Health Care 2004, 13(1):62-66.

19.       Meyerson DE, Scully MA: Tempered radicalism and the politics of ambivalence and change. Organization Science 1995, 6(5):585-600.

20.       Gray JAM: How to build healthcare systems. Oxford: Offox Press; 2011.
