Benefits and Harms of Treatments, Screening, and Tests.

Clinicians' and Patients' Expectations of the Benefits and Harms of Treatments, Screening, and Tests. A Systematic Review

In 2015 and 2017 Hoffmann and Del Mar wrote two systematic reviews (Hoffmann and Del Mar, 2015, 2017): the first reviewed patients' unrealistic expectations of the benefits and harms of various medical interventions, and the second reviewed clinicians' inaccurate expectations. The introduction to both papers discussed the increasing demand for, and cost of, medical care and how this relates to overdiagnosis, a 'more is better' culture, a future funding crisis, and defensive practice. The research question for both papers was similar, but was only expressed clearly in the second paper, as:

‘Do clinicians have accurate expectations of the benefits and harms of medical treatment, tests and screening tests?’

The authors searched the electronic databases MEDLINE, Embase, Cumulative Index to Nursing and Allied Health Literature, and PsycINFO, with no language or study-type restriction. All quantitative primary study designs were eligible, provided the participants were asked to estimate the expected harms and/or benefits of various medical interventions. Qualitative estimates without quantification were excluded. No risk-of-bias assessment was undertaken, and after data extraction no meta-analysis was performed because of the range of outcomes and response options.

In the results sections, 36 papers were eligible for the first review, relating to patients' expectations, and 48 for the clinicians' review.

The results were presented as a narrative and stacked bar charts. The data could not be summed across categories because some papers answered only one question (e.g. overestimate) while others answered all three.

Of the summary results provided in the abstracts, only the significant findings, based on the results of the individual papers, were presented (Table 1).

Table 1. Summary results for both systematic reviews (N/D = no data)

Group/question     Underestimate        Correct estimate     Overestimate
                   (n of studies, %)    (n of studies, %)    (n of studies, %)
Patients
  Benefit          N/D                  N/D                  15/17 (88%)
  Harm             10/15 (67%)          N/D                  N/D
Clinicians
  Benefit          2/22 (9%)            3/28 (11%)           7/22 (32%)
  Harm             20/58 (34%)          N/D                  3/58 (5%)

The conclusion (authors' own words) of the first, patient-focused systematic review was:

‘The majority of participants overestimated intervention benefit and underestimated harm. Clinicians should discuss accurate and balanced information about intervention benefits and harms with patients, providing the opportunity to develop realistic expectations and make informed decisions.’

For the clinicians' systematic review:

‘Clinicians rarely had accurate expectations of benefits or harms, with inaccuracies in both directions. However, clinicians more often underestimated rather than overestimated harms and overestimated rather than underestimated benefits. Inaccurate perceptions about the benefits and harms of interventions are likely to result in suboptimal clinical management choices.’


These were two well-constructed systematic reviews trying to answer a very difficult question, where the primary studies were highly variable and included a mixture of quantitative and qualitative data. I would say it is unusual for such recent systematic reviews not to follow a standardised protocol such as PRISMA or to provide a risk-of-bias/quality assessment. The lack of a protocol made it harder to assess the methodology and results; however, I do not want that to distract from the importance of the reviews.

The presentation of the results as a combination of narrative and complex stacked bar charts lacked clarity. By changing the unit of analysis from the number of papers to the number of participants, I was able to extract the data from the charts and transform it into a format suitable for meta-analysis using the 'metaprop' function within the 'meta' package in R. The heterogeneity of the combined studies was close to 100%, reflecting the high variability, and the four forest plots plus subgroup analyses were quite large, so I have combined the summary estimates (Table 2) into two charts. The first chart describes patients' expectations (Figure 1), and the second, clinicians' expectations (Figure 2).
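For readers unfamiliar with how 'metaprop' pools proportions, the following is a minimal, hand-rolled sketch of the same idea in Python: logit-transform each study proportion, then combine with DerSimonian-Laird random-effects weighting (roughly what 'metaprop' does by default). This is an illustration only, not the actual re-analysis; the study counts in the usage example are hypothetical, not taken from either review.

```python
import math

def pool_proportions(events_totals):
    """DerSimonian-Laird random-effects pooling of proportions on the
    logit scale. Each item is (events, total); assumes 0 < events < total."""
    ys, vs = [], []
    for e, n in events_totals:
        p = e / n
        ys.append(math.log(p / (1 - p)))      # logit of study proportion
        vs.append(1 / e + 1 / (n - e))        # approximate logit variance
    # Fixed-effect weights and Cochran's Q
    w = [1 / v for v in vs]
    y_fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, ys))
    # DerSimonian-Laird between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c)
    # Random-effects pooled estimate with 95% CI, back-transformed
    w_re = [1 / (v + tau2) for v in vs]
    y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    inv_logit = lambda x: 1 / (1 + math.exp(-x))
    return (inv_logit(y_re),
            inv_logit(y_re - 1.96 * se),
            inv_logit(y_re + 1.96 * se))

# Hypothetical example: three studies reporting how many participants
# overestimated a benefit, out of the number asked
mean, lo, hi = pool_proportions([(45, 80), (120, 150), (30, 90)])
print(f"pooled proportion = {mean:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

With highly variable study proportions, as here, most of the weight moves to the between-study variance term, which is why the confidence intervals in Table 2 are wide despite large participant numbers.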

Table 2. Summary estimates (%) with 95% confidence intervals

                             Patients              Clinicians
Outcome           Group      Mean    95% CI        Mean    95% CI
Underestimate     Benefit    23      15 to 32      29      19 to 41
                  Harm       60      50 to 69      42      33 to 51
Correct estimate  Benefit    19      14 to 26      28      21 to 35
                  Harm       23      16 to 30      28      23 to 34
Overestimate      Benefit    56      47 to 64      31      19 to 44
                  Harm       22      15 to 30      20      15 to 26

Figure 1. Patient expectations



Figure 2. Clinicians' expectations


The re-analysis of the data does not change the authors' discussion or conclusions; it just adds clarity. The heterogeneity in the data extraction and meta-analysis should be treated as a feature of the research question rather than a weakness. We can see how dramatically patients can overestimate benefits and underestimate harms, leading to a potential risk of overtreatment (an unfavourable benefit-to-harm ratio), as the two effects combine to increase the chance of the patient making a poor choice. Optimism bias has been well reported and is a normal human reaction in decision making (Weinstein, 2001; Hanoch, Rolison and Freund, 2019). I am sceptical that the principle of shared decision making (NICE, 2011; Ryan and Cunningham, 2014) will override this basic human trait of overestimating the upside of a situation and discounting the downside. The clinicians' results show that the effect persists even after specialist training: harms are underestimated and benefits overestimated. Although the effect is weaker than in the patients' case, a clinician is almost as likely to overestimate or underestimate a benefit or harm as to be correct. The authors state in their discussion section:

‘Shared decision making is a logical mechanism for bringing evidence into consultations, but this requires clinicians to know the best current evidence about the benefits and harms of the interventions being contemplated’

I would argue that for the average clinician outside academia it is very hard to search out and interrogate the best evidence needed to assist the patient in the shared decision process, owing to restricted access to full-text literature. Beyond the problem of access, there is also endemic publication bias (Landewé, 2014) in primary research, which favours positive outcomes and thereby contaminates systematic reviews, clinical guidelines and healthcare policy.

In summary, we need to embrace the concept of shared decision making but, more importantly, acknowledge the presence of optimism bias and harm discounting and their ability to undermine the rational decision-making process.



Hanoch, Y., Rolison, J. and Freund, A. M. (2019) ‘Reaping the Benefits and Avoiding the Risks: Unrealistic Optimism in the Health Domain’, Risk Analysis, 39(4), pp. 792–804. doi: 10.1111/risa.13204.

Hoffmann, T. C. and Del Mar, C. (2015) ‘Patients’ expectations of the benefits and harms of treatments, screening, and tests: a systematic review.’, JAMA internal medicine, 175(2), pp. 274–86. doi: 10.1001/jamainternmed.2014.6016.

Hoffmann, T. C. and Del Mar, C. (2017) ‘Clinicians’ expectations of the benefits and harms of treatments, screening, and tests: A systematic review’, JAMA Internal Medicine, 177(3), pp. 407–419. doi: 10.1001/jamainternmed.2016.8254.

Landewé, R. B. M. (2014) ‘Editorial: How publication bias may harm treatment guidelines’, Arthritis and Rheumatology, 66(10), pp. 2661–2663. doi: 10.1002/art.38783.

NICE (2011) Shared Decision Making Collaborative. Available at:

Ryan, F. and Cunningham, S. (2014) ‘Shared decision making in healthcare’, Faculty Dental Journal, 5(3), pp. 124–127. doi: 10.1308/204268514X14017784505970.

Weinstein, N. D. (2001) ‘Health Risk Appraisal and Optimistic Bias 1.’, in International Encyclopedia of the Social & Behavioral Sciences, pp. 6612–6615.

Reflections on EBM Live 2019 – Oxford

Professor John Ioannidis presenting the keynote lecture

This month I attended my first Evidence-Based Medicine conference, at Oxford University's Saïd Business School. Wow, this was so different from the usual dental congresses I attend around Europe: for a start, instead of 10,000 delegates there were only 300, made up of senior academic staff, researchers, clinicians and patient representatives. The other big difference was the lack of trade stands, corporate sponsorship and paper (the programme was app-based).

Subjects covered over the three days included:

  • Increasing the systematic use of existing evidence
  • Reducing questionable research practices and bias
  • Finding better evidence (TRIP database)
  • Healthcare Value with Sir Muir Gray
  • Enhancing real world practice
  • Making research evidence relevant, replicable and accessible to the public
  • The complete guide to BREAST CANCER – a personal view of cancer when the doctor becomes the patient.
  • Conflicts of interests and the spinning of weak research results
  • Reproducible evidence for Healthcare: Current and Future  – Prof Ioannidis

The take-home message from this excellent conference was that we still have a long way to go in delivering the best healthcare for our patients. This challenge requires a general improvement in healthcare literacy and a basic knowledge of statistics for both the profession and the public.

We don't need more research quantity, but we do need considerably more reproducible, protocol-driven quality. Unregistered and unreported trial findings can't be analysed, potentially hiding benefits and harms to the population.