Turning your MSc dissertation into an academic paper


Students who complete the MSc in Evidence Based Health Care at the University of Oxford often produce high-quality research for their dissertation, which we encourage them to publish in academic journals. Dr Anne-Marie Boylan is the Dissertation Coordinator for the MSc in EBHC. She spoke to Mark Howe, a dentist who recently completed his MSc, about his experiences of writing up his thesis for publication in the Journal of Dentistry.

What challenges did you face in getting your thesis published?

Condensing the dissertation to meet the journal's word count while retaining all the important points was a big challenge. It took a substantial amount of time and quite a lot of editing. The submission process itself also presented plenty of challenges. The formatting the journal required for tables and figures was different from what I had created for my thesis, and they didn't always convert cleanly when uploaded, which took some time to identify and correct. Despite all my efforts to follow the guidelines for authors, the manuscript was returned immediately because of formatting issues and missing entries. But I got there in the end.

What did you think about the reviewers’ feedback?

The initial comments ranged from basic proof-reading corrections to what felt like quite extensive criticism, so you need to be prepared for frustration and rejection. The publication process assumes you work in a close-knit, experienced team with access to people who have published before, which isn't always the case for MSc students.

The reviewers asked for amendments to what I thought were very important aspects of the research. I saw this as an opportunity to argue that these data should not be changed.

How did you feel when your article was accepted?

I felt relief rather than joy, as the profession now had to accept that there were weaknesses in results previously regarded as robust. Getting the dissertation published was, for me, the true endpoint of the MSc in Evidence Based Health Care, as my research was now going into the public domain. I was surprised by how expensive it was to make my paper open access. I had no funding for this, so it's behind a paywall.

What would you say to other students who are preparing their thesis for publication?

Choose your journal carefully.

Be patient – the submission process is more experiential than intuitive. Try to get advice from colleagues who have published papers in your field.

Be prepared to defend your research against the reviewers' comments where necessary, but try not to take those comments personally. Maintain a calm perspective, and consider leaving the manuscript for a few days before working through the corrections.

Mark's paper can be accessed using the following reference: Howe, M.-S., Keys, W. and Richards, D. (2019) 'Long-term (10-year) dental implant survival: A systematic review and sensitivity meta-analysis', Journal of Dentistry, 84, pp. 9–21. doi: 10.1016/j.jdent.2019.03.008.

Link to CEBM website: https://www.cebm.net/2019/05/turning-your-msc-dissertation-into-an-academic-paper/

Four Ways That a Systematic Review Can Over-Optimise an Outcome



As a dental surgeon, I have spent my entire career trying to keep up to date with the latest evidence as surgical techniques have evolved. Over the past 10 years I have started to question the validity of some of this evidence, as I was seeing more complications relating to dental implant treatment than the research would suggest. To explore this hypothesised mismatch between clinical and research outcomes, I chose to undertake an updated systematic review (SR) and sensitivity meta-analysis on the long-term survival of titanium dental implants [1].

Observations from the research

Following completion of my SR, there were four areas in which the previous SRs had potentially over-optimised their conclusions:

  1. Definitions of implant failure
    Problem: Most of the research defined the failure of a dental implant using the most extreme outcome: loss from the oral cavity. In clinical practice it is generally accepted that an implant has failed if it causes pain or is mobile when in use, has lost most of its supporting bone, or presents with uncontrollable infection.
    Solution: By universally adopting these real-world definitions of implant failure, the research would produce results closer to what a patient might consider a failure.
  2. Patients lost to follow-up
    Problem: In all the papers reviewed, the researchers had assumed that any patient unavailable for assessment at 10 years was 'missing at random', that is, their absence had nothing to do with the treatment they received, and therefore the missing data were ignorable and only complete data were analysed. In clinical practice there is anecdotal evidence, and a handful of research papers, showing that patients who do not come back for review may have a failure rate up to ten times higher, for clinical, psychological or financial reasons [2,3].
    Solution: In the real-world clinical environment there is less control over patient monitoring, and it is not plausible to assume either that all the patients are missing at random or that all patients lost to follow-up had complete success or complete failure. Some plausible imputation model is needed to account for the missing data. In my review we set the relative implant failure rate among those lost to follow-up at five times the authors' published result, based on previous lost-to-follow-up studies, and then imputed the number of additional failed implants this would add if all the patients had been followed up [4]. One could argue about using a multiplier of 5, but it is more plausible than ignoring the patients altogether or substituting a probability of 0 or 1 for the missing outcome (Cromwell's Law) [5].
  3. Risk of Bias (RoB) assessment
    Problem: In the initial review, most of the previously published SRs did not employ a risk of bias tool. Where one was used, it was either the Cochrane Collaboration's tool for assessing risk of bias in randomised trials or the Newcastle–Ottawa Scale for comparing non-randomised studies. The problem with both tools is that the included studies had no comparator group to assess, so neither is suitable for assessing risk of bias in these SRs. Using an inappropriate RoB tool risks presenting the evidence in a better light by concentrating on the internal validity of the studies.
    Solution: I used a risk of bias tool specifically designed for prevalence studies, which have no comparator group [6]. This tool places an emphasis on external validity (how closely the group under study represents the national population that may benefit from the treatment).
  4. Presentation of the prediction interval
    Problem: The results of the meta-analyses were only presented as a summary estimate and 95% confidence interval. It must be remembered that this is the mean survival rate across the studies, and the 95% confidence interval represents the precision of that estimate; it does not help us predict the possible outcome of a future study conducted in a similar fashion.
    Solution: It is possible to add a prediction interval (PI) to the summary estimate, which represents the distribution of the true effects and expresses the heterogeneity in the same metric as the original effect size measure [7,8].
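The lost-to-follow-up adjustment described in point 2 can be sketched in a few lines of Python. This is a minimal illustration of the sensitivity-analysis logic, not the review's actual analysis code; the function name and the worked numbers are hypothetical.

```python
def adjusted_failure_rate(followed_up, observed_failures, lost_to_follow_up,
                          multiplier=5):
    """Impute failures among implants lost to follow-up.

    Assumes the lost group fails at `multiplier` times the observed
    rate (capped at 100%), rather than ignoring those patients or
    assigning them a certain outcome of 0 or 1 (Cromwell's Law).
    """
    observed_rate = observed_failures / followed_up
    imputed_rate = min(multiplier * observed_rate, 1.0)
    imputed_failures = lost_to_follow_up * imputed_rate
    total = followed_up + lost_to_follow_up
    return (observed_failures + imputed_failures) / total

# Hypothetical cohort: 200 implants followed up (10 failures, i.e. 5%),
# plus 50 implants lost to follow-up.
rate = adjusted_failure_rate(200, 10, 50)
print(f"Adjusted 10-year failure rate: {rate:.1%}")  # 9.0%, vs 5.0% observed
```

Even this simple sketch shows how sensitive a 'complete case' survival estimate can be to a plausible assumption about the patients who never came back.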


A traditional analysis produced similar 10-year survival estimates to previous systematic reviews. A more realistic sensitivity meta-analysis, accounting for loss-to-follow-up data and calculating prediction intervals, demonstrated a possible doubling of the risk of implant loss in the older age groups.
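The prediction interval mentioned above can be computed from the random-effects pooled estimate, its standard error, the between-study variance τ², and the number of studies, using the t-based formula described by Borenstein et al. and IntHout et al. [7,8]. A minimal sketch in Python; the input values below are hypothetical, not figures from the review:

```python
import math
from scipy import stats

def prediction_interval(mu, se_mu, tau2, k, alpha=0.05):
    """Prediction interval for a random-effects summary estimate.

    mu:     pooled effect estimate
    se_mu:  standard error of the pooled estimate
    tau2:   between-study variance (tau-squared)
    k:      number of studies (the formula needs k >= 3)
    """
    # t critical value on k - 2 degrees of freedom
    t_crit = stats.t.ppf(1 - alpha / 2, df=k - 2)
    # width combines between-study spread and estimation uncertainty
    half_width = t_crit * math.sqrt(tau2 + se_mu ** 2)
    return mu - half_width, mu + half_width

# Hypothetical meta-analysis: pooled effect 0.5 (SE 0.1),
# tau^2 = 0.04, from 10 studies.
lo, hi = prediction_interval(0.5, 0.1, 0.04, 10)
```

With these numbers the prediction interval is far wider than the 95% confidence interval (0.5 ± 1.96 × 0.1 ≈ 0.30 to 0.70): it describes where the true effect in a new, similar study is likely to fall, not merely the precision of the pooled mean.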

Link to CEBM blog: https://www.cebm.net/2019/05/four-ways-that-a-systematic-review-can-over-optimise-an-outcome/

[1] M.-S. Howe, W. Keys, D. Richards, Long-term (10-year) dental implant survival: A systematic review and sensitivity meta-analysis, J. Dent. 84 (2019) 9–21. doi:10.1016/j.jdent.2019.03.008.

[2] A.C.P. Sims, The importance of a high tracking rate in long term medical follow-up studies, Lancet. 302 (1973) 433–435.

[3] E.H. Geng, N. Emenyonu, M.B. Bwana, et al., Sampling-based approach to determining outcomes of patients lost to follow-up in antiretroviral therapy scale-up programs in Africa, JAMA 300 (2008) 506–507. doi:10.1001/jama.300.5.506.

[4] E.A. Akl, M. Briel, J.J. You, et al., Potential impact on estimated treatment effects of information lost to follow-up in randomised controlled trials (LOST-IT): systematic review, Br. Med. J. 344 (2012) e2809. doi:10.1136/bmj.e2809.

[5] D.V. Lindley, Understanding Uncertainty, Wiley, Hoboken, NJ, 2014, pp. 129–130.

[6] D. Hoy, P. Brooks, A. Woolf, et al., Assessing risk of bias in prevalence studies: Modification of an existing tool and evidence of interrater agreement, J. Clin. Epidemiol. 65 (2012) 934–939. doi:10.1016/j.jclinepi.2011.11.014.

[7] M. Borenstein, L.V. Hedges, J.P.T. Higgins, Introduction to Meta-Analysis, Wiley & Sons, Chichester, UK, 2009.

[8] J. IntHout, J.P.A. Ioannidis, M.M. Rovers, Plea for routinely presenting prediction intervals in meta-analysis, BMJ Open 6 (2016) e010247. doi:10.1136/bmjopen-2015-010247.