Quality and Safety in Health Care Journal

Ending nuclear weapons, before they end us

This May, the World Health Assembly (WHA) will vote on re-establishing a mandate for the WHO to address the health consequences of nuclear weapons and war.1 Health professionals and their associations should urge their governments to support such a mandate and to back the new United Nations (UN) comprehensive study on the effects of nuclear war.

The first atomic bomb exploded in the New Mexico desert 80 years ago, in July 1945. Three weeks later, two relatively small (by today’s standards), tactical-size nuclear weapons unleashed a cataclysm of radioactive incineration on Hiroshima and Nagasaki. By the end of 1945, about 213 000 people were dead.2 Tens of thousands more have died from late effects of the bombings.

Last December, Nihon Hidankyo, a movement that brings together atomic bomb survivors, was awarded the Nobel Peace Prize for its ‘efforts to achieve a world free of nuclear weapons...

Why hospital falls prevention remains a global healthcare priority

The article by Cho et al1 in the current issue of BMJ Quality & Safety addresses the persistent and debilitating problem of hospital falls, which remain a challenge worldwide. Despite decades of research on hospital falls,2 considerable effort by health professionals,3 and publication of clinical guidelines on falls prevention,4 5 falls and associated injuries continue to be a major threat to patient safety and quality. The reasons why hospital falls continue to be associated with injuries and increased hospital length of stay are incompletely understood and vary across patients and settings. What is known is that patient falls education early after hospital admission helps to prevent falls.6–8 Staff education on how to prevent hospital falls also helps to reduce the risk.9 Exercise, safe footwear, environmental modifications, use of assistive devices such...

Under-reporting of falls in hospitals: a multisite study in South Korea

Background

Inpatient falls are adverse events that often result in injury, arising from complex interactions between the hospital environment and patient risk factors, and they remain a significant problem in clinical settings.

Objectives

This study aimed to identify (1) practice variations and key issues ranging from hospital fall management protocols to incident detection, and (2) potential approaches to address these challenges.

Design

Retrospective cohort study.

Setting

Four general hospitals in South Korea.

Methods

Qualitative and quantitative data were analysed using the Donabedian quality outcomes model. Data were collected retrospectively during 2015–2023 from four general hospitals on local practice protocols, patient admission and nursing data from electronic records, and incident self-reports. Content analysis of practice protocols and manual chart reviews for hospital fall incidents was conducted at each site. Quantitative analyses of nursing activities and analysis of patient falls prevention interventions were also conducted at each site.

Results

There were variations in fall definitions, risk-assessment tools and inclusion and exclusion criteria among the local fall management protocols. The original and modified versions of the heuristic tools performed poorly to moderately, with areas under the receiver operating characteristic curve of 0.54–0.74 and 0.59–0.80, respectively. Preventive intervention practices varied significantly among the sites, with risk-targeted and tailored interventions delivered to only 1.15%–49.5% of at-risk patients. Fall events were not recorded in self-reporting systems and nursing notes for 29.5%–90.6% and 4.4%–17.1% of patients, respectively.

Conclusion

Challenges in fall prevention included weaknesses in the design and implementation of local fall protocols and low-quality incident self-reporting systems. Systematic and sustainable solutions are needed to help reduce hospital fall rates and injuries.
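The discriminative performance of the risk-assessment tools in this study is summarised by the area under the receiver operating characteristic curve (AUC). As a minimal illustration (the risk scores below are entirely hypothetical, not data from the study), the AUC equals the probability that a randomly chosen patient who fell was given a higher risk score than a randomly chosen patient who did not:

```python
def roc_auc(scores_fallers, scores_non_fallers):
    """ROC AUC as the probability that a randomly chosen faller received a
    higher risk score than a randomly chosen non-faller (ties count half)."""
    pairs = len(scores_fallers) * len(scores_non_fallers)
    wins = sum(
        1.0 if f > n else 0.5 if f == n else 0.0
        for f in scores_fallers
        for n in scores_non_fallers
    )
    return wins / pairs

# Hypothetical risk scores: the tool ranks the faller higher in 5 of 6 pairs
example_auc = roc_auc([0.8, 0.6], [0.7, 0.3, 0.2])  # 5/6 ≈ 0.83
```

An AUC of 0.5 corresponds to chance-level discrimination, which is why tools in the 0.54 to 0.74 range are described as performing poorly to moderately.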

Frequency and preventability of adverse drug events in the outpatient setting

Background

Limited data exist regarding adverse drug events (ADEs) in the outpatient setting. The objective of this study was to determine the incidence, severity, and preventability of ADEs in the outpatient setting and identify potential prevention strategies.

Methods

We conducted an analysis of ADEs identified in a retrospective electronic health record review of outpatient encounters in 2018 at 13 outpatient sites in Massachusetts, covering 13 416 outpatient encounters in 3323 patients. Triggers were identified in the medical record, including medications, consultations, laboratory results and others. If a trigger was detected, a further in-depth review was conducted by nurses and adjudicated by physicians to examine the relevant information in the medical record. Patients were included in the study if they were at least 18 years of age with at least one outpatient encounter with a physician, nurse practitioner or physician assistant in that calendar year. Patients were excluded from the study if the outpatient encounter occurred in outpatient surgery, psychiatry, rehabilitation or paediatrics.

Results

In all, 5% of patients experienced an ADE over the 1-year period. We identified 198 ADEs among 170 patients, who had a mean age of 60 years. Most patients experienced one ADE (87%), 10% experienced two ADEs and 3% experienced three or more. The most frequent drug classes resulting in ADEs were cardiovascular (25%), central nervous system (14%) and anti-infective agents (14%). Severity was ranked as significant in 85% of ADEs and serious in 14%; 1% were life-threatening and none were fatal. Of the ADEs, 22% were classified as preventable and 78% as not preventable. We identified 246 potential prevention strategies, and 23% of ADEs had more than one possible prevention strategy.

Conclusions

Despite efforts to prioritise patient safety, medication-related harms are still frequent. These results underscore the need for further patient safety improvement in the outpatient setting.

Patient and caregiver perspectives on causes and prevention of ambulatory adverse events: multilingual qualitative study

Context

Ambulatory adverse events (AEs) affect up to 25% of the global population and cause over 7 million preventable hospital admissions around the world. Though patients and caregivers are key actors in promoting and monitoring their own ambulatory safety, healthcare teams do not traditionally partner with patients in safety efforts. We sought to identify what patients and caregivers contribute when engaged in ambulatory AE review, focusing on under-resourced care settings.

Methods

We recruited adult patients, caregivers and patient advisors who spoke English, Spanish and/or Cantonese, from primary care clinics affiliated with a public health network in the USA. All had experience taking or managing a high-risk medication (blood thinners, insulin or opioid). We presented two exemplar ambulatory AEs: one involving a warfarin drug-drug interaction, and one involving delayed diagnosis of colon cancer. We conducted semistructured focus groups and interviews to elicit participants’ perceptions of causal factors and potential preventative measures for similar AEs. The study team conducted a mixed inductive-deductive qualitative analysis to derive major themes.

Findings

The sample included 6 English-speaking patients (2 in the focus group, 4 individual interviews), 6 Spanish-speaking patients (individual interviews), 4 Cantonese-speaking patients (2 in the focus group, 2 interviews), and 6 English-speaking patient advisors (focus group). Themes included: (1) Patients and teams have specific safety responsibilities; (2) Proactive communication drives safe ambulatory care; (3) Barriers related to limited resources contribute to ambulatory AEs. Patients and caregivers offered ideas for operational changes that could drive new safety projects.

Conclusions

An ethnically and linguistically diverse group of primary care patients and caregivers defined their agency in ensuring ambulatory safety and offered pragmatic ideas to prevent AEs they did not directly experience. Patients and caregivers in a safety net health system can feasibly participate in AE review to ensure that safety initiatives include their valuable perspectives.

General practitioners retiring or relocating and its association with healthcare use and mortality: a cohort study using Norwegian national data

Background

Continuity in the general practitioner (GP)-patient relationship is associated with better healthcare outcomes. However, few studies have examined the impact of permanent discontinuities on all listed patients when a GP retires or relocates.

Aim

To investigate changes in the Norwegian population’s overall healthcare use and mortality after discontinuity due to Regular GPs retiring or relocating.

Methods

Linking national registers, we compared days with healthcare use and mortality for matched individuals affiliated with Regular GPs who retired or relocated versus those who continued. We included list patients 3 years prior to exposure and followed them for up to 5 years after. We assessed changes over time using a difference-in-differences design with Poisson regression.

Results

From 2011 to 2020, we identified 819 Regular GPs retiring and 228 moving, affiliated with 1 165 295 people. Relative to 3 years before discontinuity, the rate ratio (RR) of daytime GP contacts increased 3% (95% CI 2 to 4) in year 1 after discontinuity, corresponding to 148 (95% CI 54 to 243) additional contacts per 1000 patients. This increase persisted for 5 years. Out-of-hours GP contacts increased the first year, RR 1.04 (95% CI 0.99 to 1.09), corresponding to 16 (95% CI –5 to 37) contacts per 1000 patients. Planned hospital contacts increased 3% (95% CI 2 to 4) in year 1, persisting into year 5. Acute hospital contacts increased 5% (95% CI 3 to 7), primarily in the first year. These 1-year effects corresponded to 51 (95% CI 18 to 83) planned and 13 (95% CI 7 to 18) acute hospital contacts per 1000 patients. Mortality was unchanged up to 5 years after discontinuity.

Conclusion

Regular GPs' retirement and relocation were associated with small to moderate increases in healthcare use among listed patients, while mortality was unaffected.
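The difference-in-differences design described in the Methods compares the change in contact rates among patients whose GP left with the change among matched controls over the same period. A minimal sketch of the rate-ratio arithmetic (the annual contact rates below are hypothetical, not the study's data):

```python
def did_rate_ratio(exp_before, exp_after, ctl_before, ctl_after):
    """Difference-in-differences on the rate-ratio scale: the exposed
    group's before/after change divided by the control group's change."""
    return (exp_after / exp_before) / (ctl_after / ctl_before)

def excess_contacts_per_1000(exp_before, exp_after, ctl_before, ctl_after):
    """Excess contacts per 1000 patients relative to the counterfactual in
    which the exposed group had followed the control group's trend."""
    expected = exp_before * (ctl_after / ctl_before)
    return (exp_after - expected) * 1000

# Hypothetical daytime GP contacts per patient per year
rr = did_rate_ratio(4.8, 5.1, 4.8, 5.0)                 # ≈ 1.02
excess = excess_contacts_per_1000(4.8, 5.1, 4.8, 5.0)   # ≈ 100 per 1000
```

In the study itself this arithmetic is carried out within Poisson regression, which also yields the confidence intervals reported above.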

Development of the Patient-Reported Indicator Surveys (PaRIS) conceptual framework to monitor and improve the performance of primary care for people living with chronic conditions

Background

The Organisation for Economic Co-operation and Development (OECD) Patient-Reported Indicator Surveys (PaRIS) initiative aims to support countries in improving care for people living with chronic conditions by collecting information on how people experience the quality and performance of primary and (generalist) ambulatory care services. This paper presents the development of the conceptual framework that underpins the rationale for and the instrumentation of the PaRIS survey.

Methods

The guidance of an international expert taskforce and the OECD Health Care Quality Indicators framework (2015) provided initial specifications for the framework. Relevant conceptual models and frameworks were then identified from searches in bibliographic databases (Medline, EMBASE and the Health Management Information Consortium). A draft framework was developed through narrative review. The final version was codeveloped with an international Patient Advisory Panel and an international Technical Advisory Community, and through online international workshops with patient representatives.

Results

85 conceptual models and frameworks were identified through searches. The final framework maps relationships between the following domains (and subdomains): patient-reported outcomes (symptoms, functioning, self-reported health status, health-related quality of life); patient-reported experiences of care (access, comprehensiveness, continuity, coordination, patient safety, person centeredness, self-management support, trust, overall perceived quality of care); health and care capabilities; health behaviours (physical activity, diet, tobacco and alcohol consumption), sociodemographic characteristics and self-reported chronic conditions; delivery system characteristics (clinic, main healthcare professional); health system, policy and context.

Discussion

The PaRIS conceptual framework has been developed through a systematic, accountable and inclusive process. It serves as the basis for the development of the indicators and survey instruments as well as for the generation of specific hypotheses to guide the analysis and interpretation of the findings.

A realist review of how, why, for whom and in which contexts quality improvement in healthcare impacts inequalities

Introduction

Quality improvement (QI) is aimed at improving care. Equity is one of the six domains of healthcare quality, as defined by the Institute of Medicine. If this domain is ignored, QI projects have the potential to maintain or even worsen inequalities.

Aims and objectives

We aimed to understand why, how, for whom and in which contexts QI approaches increase, or do not change, health inequalities in healthcare organisations.

Methods

We conducted a realist review by first developing an initial programme theory, then searching MEDLINE, Embase, CINAHL, PsycINFO, Web of Science and Scopus for QI projects that considered health inequalities. Included studies were analysed to generate context-mechanism-outcome configurations (CMOCs) and develop an overall programme theory.

Results

We screened 6259 records. Thirty-six records met our inclusion criteria, the majority of which were from the USA. We developed CMOCs covering four clusters: values and understanding, resources, data, and design. Five of these described circumstances in which QI may increase inequalities and 15 where it may reduce inequalities. We found that QI projects that are values-led and incorporate diverse, patient-led data into design are more likely to address health inequalities. However, when staff and patients cannot engage fully with equity-focused projects, due to practical or technological barriers, QI projects are more likely to worsen inequalities.

Conclusions

The potential for QI projects to positively impact inequalities depends on embedding equity-focused values across organisations, ensuring sufficient and appropriate resources are provided to staff delivering QI, and using diverse disaggregated data alongside considered user involvement to inform and assess the success of QI projects. Policymakers and practitioners should ensure that QI projects are used to address inequalities.

Time to de-implementation of low-value cancer screening practices: a narrative review

The continued use of low-value cancer screening practices represents not only healthcare waste but also a potential cascade of invasive diagnostic procedures and patient anxiety and distress. While prior research has shown it takes an average of 15 years to implement evidence-based practices in cancer control, little is known about how long it takes to de-implement low-value cancer screening practices. We reviewed evidence on six United States Preventive Services Task Force 'Grade D' cancer screening practices: (1) cervical cancer screening in women <21 years and >65 years, (2) prostate cancer screening in men ≥70 years and (3) ovarian, (4) thyroid, (5) testicular and (6) pancreatic cancer screening in asymptomatic adults. We measured the time from a landmark publication supporting the guideline to subsequent de-implementation, defined as a 50% reduction in the use of the practice in routine care. The pace of de-implementation was assessed using nationally representative surveillance systems and peer-reviewed literature from the USA. We found the time to de-implementation of cervical cancer screening was 4 years for women <21 and 16 years for women >65. Prostate screening in men ≥70 has not reached a 50% reduction in use since the 2012 guideline release. We did not identify sufficient evidence to measure the time to de-implementation for ovarian, thyroid, testicular and pancreatic cancer screening in asymptomatic adults. Surveillance of low-value cancer screening is sparse, posing a clear barrier to tracking the de-implementation of these screening practices. Improving the systematic measurement of low-value cancer control practices is imperative for assessing the impact of de-implementation on patient outcomes, healthcare delivery and healthcare costs.

Economic evaluations of quality improvement interventions: towards simpler analyses and more informative publications

With public reporting and value-based payment, healthcare organisations have strong incentives to optimise quality of care, improve patient outcomes and lower costs.1 In response, organisations are implementing diverse and often novel quality improvement (QI) interventions (systematic efforts to improve the structure, process or outcome of care). Many organisations routinely assess the clinical effects and costs of QI interventions to support internal decisions about whether to discontinue, sustain or expand them.

These internal analyses create an opportunity for QI teams to publish their experiences and inform decision-making at peer organisations. Since QI interventions can be labour-intensive and thus costly, published economic evaluations are of great interest to leaders weighing decisions about whether to adopt them and how best to implement them. Published evaluations seek to answer a two-part question about the effectiveness and cost of a specific QI intervention at one healthcare organisation, with the goal of reporting...

Understanding the evidence for artificial intelligence in healthcare

Scientific studies of artificial intelligence (AI) solutions in healthcare have been the subject of intense criticism—both in research publications and in the media.1–3 Early validations of predictive algorithms are criticised for not having meaningful clinical impact, and AI tools that make mistakes or fail to show immediate improvement in health outcomes are heralded as the first snowflakes in the next AI winter (a period of decreased interest in AI research and development). Scientific evidence is the language of trust in healthcare, and peer-reviewed studies evaluating AI solutions are key to fostering adoption. There are over two dozen reporting guidelines for AI in medicine,4 and many other consensus statements and standards that offer recommendations for the publication of research about medical AI.5 Despite such guidance, the average frontline clinician still struggles in interpreting the results of an AI study to...

Workforce well-being is workforce readiness: it is time to advance from describing the problem to solving it

‘We need bold, fundamental change that gets at the roots of the burnout crisis.’ – US Surgeon General Vivek H. Murthy, MD, MBA.

Well-being was brought into clearer focus during the COVID-19 pandemic, during which the prevalence of healthcare worker (HCW) emotional exhaustion increased from 27%1 to 39%.2 Currently, there is no coordinated effort to ensure that HCW well-being interventions meet minimum standards of feasibility, accessibility and methodological rigour. In this issue of BMJ Quality & Safety, Melvin et al assessed perceptions of physician well-being programmes by interviewing physicians and people involved in these programmes.3 As is often the case with any real-world application of science, there are substantial gaps between the programmes as intended and the programmes in practice. The authors conclude that the ‘persistence of poor well-being outcomes suggests that current support initiatives are suboptimal’.

The key is understanding what is suboptimal....

We will take some team resilience, please: Evidence-based recommendations for supporting diagnostic teamwork

In this issue of BMJ Quality & Safety, Black and colleagues present a qualitative study of healthcare teams working to uncover diagnoses in patients experiencing non-specific cancer symptoms.1 The study highlights how critical teams are in supporting or derailing diagnostic pathways. Overall, Black et al1 present unique insights that highlight the challenges clinical teams face when caring for patients with non-specific symptoms.

Unfortunately, we know that diagnostic processes such as those studied by Black et al are frequently unsafe. Diagnostic errors are ‘the single largest source of deaths across all (healthcare) settings’, with cancer-related diagnostic errors estimated at around 11.1%.2 A key challenge to making diagnoses in patients with non-specific symptoms is the presence of uncertainty throughout the diagnostic process.

As Black et al point out,1 uncertainty in the diagnostic process is felt by both patients and clinicians. It...

Large-scale observational study of AI-based patient and surgical material verification system in ophthalmology: real-world evaluation in 37 529 cases

Background

Surgical errors in ophthalmology can have devastating consequences. We developed an artificial intelligence (AI)-based surgical safety system to prevent errors in patient identification, surgical laterality and intraocular lens (IOL) selection. This study aimed to evaluate its effectiveness in real-world ophthalmic surgical settings.

Methods

In this retrospective observational before-and-after implementation study, we analysed 37 529 ophthalmic surgeries (18 767 pre-implementation, 18 762 post-implementation) performed at Tsukazaki Hospital, Japan, between 1 March 2019 and 31 March 2024. The AI system, integrated with the WHO surgical safety checklist, was implemented for patient identification, surgical laterality verification and IOL authentication.

Results

Post-implementation, five medical errors (0.027%) occurred, four of them in non-authenticated cases (where the AI system was not fully implemented or properly used), compared with one (0.0053%) pre-implementation (p=0.125). Of the four non-authenticated errors, two were laterality errors during the initial implementation period and two were IOL implantation errors involving unlearned IOLs (7.3% of cases) due to delayed AI updates. The AI system identified 30 near misses (0.16%) post-implementation versus 9 (0.048%) pre-implementation (p=0.00067); surgical laterality errors/near misses occurred at 0.039% (7/18 762) and IOL recognition errors/near misses at 0.29% (28/9713). The system achieved >99% implementation after 3 months. Authentication performance metrics showed high efficiency: facial recognition (1.13 attempts, 11.8 s), surgical laterality (1.05 attempts, 3.10 s) and IOL recognition (1.15 attempts, 8.57 s). Cost–benefit analysis revealed potential benefits ranging from US$181 946.94 to US$2 769 129.12 in conservative and intermediate scenarios, respectively.

Conclusions

The AI-based surgical safety system significantly increased near miss detection and showed potential economic benefits. However, errors in non-authenticated cases underscore the importance of consistent system use and integration with existing safety protocols. These findings emphasise that while AI can enhance surgical safety, its effectiveness depends on proper implementation and continuous refinement.
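The near-miss finding in this study rests on a simple contrast of proportions between the two periods. A sketch of that arithmetic, using the counts reported in the abstract (the significance test itself is not reproduced here):

```python
def rate(events, n):
    """Event proportion among n surgeries, expressed as a percentage."""
    return 100 * events / n

pre = rate(9, 18767)    # near misses pre-implementation: ~0.048%
post = rate(30, 18762)  # near misses post-implementation: ~0.16%
ratio = post / pre      # ~3.3-fold increase in near-miss detection
```

The roughly threefold rise in detected near misses is the basis for the conclusion that the system increased near-miss detection rather than error frequency.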

Support for hospital doctors workplace well-being in England: the Care Under Pressure 3 realist evaluation

Introduction

The vital role of medical workforce well-being for improving patient experience and population health while assuring safety and reducing costs is recognised internationally. Yet the persistence of poor well-being outcomes suggests that current support initiatives are suboptimal. The aim of this research study was to work with, and learn from, diverse hospital settings to understand how to optimise strategies to improve doctors’ well-being and reduce negative impacts on the workforce and patient care.

Methods

Realist evaluation consistent with the Realist And Meta-narrative Evidence Synthesis: Evolving Standards (RAMESES) II quality standards. Realist interviews (n=124) with doctors, well-being intervention implementers/practitioners and leaders in eight hospital settings (England) were analysed using realist logic.

Results

There were four key findings, underpinned by 21 context-mechanism-outcome configurations: (1) solutions needed to align with problems, to support doctor well-being and avoid harm to doctors; (2) doctors needed to be involved in creating solutions to their well-being problems; (3) doctors often did not know what support was available to help them with well-being problems and (4) there were physical and psychological barriers to accessing well-being support.

Discussion and conclusion

Doctors are mandated to ‘first, do no harm’ to their patients, and the same consideration should be extended to doctors themselves. Since doctors can be harmed by poorly designed or implemented well-being interventions, new approaches need careful planning and evaluation. Our research identified many ineffective or harmful interventions that could be stopped. The findings are likely transferable to other settings and countries, given the realist approach leading to principles and causal explanations.

Doing 'detective work' to find a cancer: how are non-specific symptom pathways for cancer investigation organised, and what are the implications for safety and quality of care? A multisite qualitative approach

Background

Over the past two decades, the UK has actively developed policies to enhance early cancer diagnosis, particularly for individuals with non-specific cancer symptoms. Non-specific symptom (NSS) pathways were piloted and then implemented in 2015 to address delays in referral and diagnosis. The aim of this study was to outline the functions that enable NSS teams to investigate cancer and other diagnoses for patients with NSSs.

Methods

The analysis was derived from a multisite ethnographic study conducted between 2020 and 2023 across four major National Health Service (NHS) trusts. Data collection encompassed observations, patient shadowing, interviews with clinicians and patients (n=54) and gathered documents. We used principles of the functional resonance analysis method to identify the functions of the NSS pathway and analyse their relevance to patient safety.

Results

Our analysis produced 29 distinct functions within NSS pathways, organised into two clusters: pretesting assessment and information gathering, and post-testing interpretation and management. Safety-critical functions encompassed assessing the reason for referral, deciding on a plan of investigation and estimating the remaining cancer risk. We also identified ways that teams build and maintain safety across all functions, for example, by cultivating generalist-specialist expertise within the team and creating continuity through patient navigation. Variation in practice across sites revealed targets for an NSS pathway blueprint that would foster local development and quality improvement.

Conclusions

Our findings suggest that national and local improvement plans could differentiate specific policies to reduce unwarranted variation and support adaptive variation that facilitates the delivery of safe care within the local context. Enhancing multidisciplinary teams with additional consultants and deploying patient navigators with clinical backgrounds could improve safety within NSS pathways. Future research should investigate different models of generalist-specialist team composition.

Quantifying the cost savings and health impacts of improving colonoscopy quality: an economic evaluation

Objective

To estimate and quantify the cost implications and health impacts of improving the performance of English endoscopy services to the optimum quality as defined by postcolonoscopy colorectal cancer (PCCRC) rates.

Design

A semi-Markov state-transition model was constructed, following the logical treatment pathway of individuals who could potentially undergo a diagnostic colonoscopy. The model consisted of three identical arms, each representing a high-, middle- or low-performing trust's endoscopy service, defined by PCCRC rates. A cohort of 40-year-old individuals was simulated in each arm of the model. The model's time horizon ended when the cohort reached 90 years of age, and the total costs and quality-adjusted life-years (QALYs) were calculated for all trusts. Scenario and sensitivity analyses were also conducted.

Results

A 40-year-old individual gains 0.0006 QALYs and savings of £6.75 over the model lifetime by attending a high-performing trust compared with attending a middle-performing trust and gains 0.0012 QALYs and savings of £14.64 compared with attending a low-performing trust. For the population of England aged between 40 and 86, if all low and middle-performing trusts were improved to the level of a high-performing trust, QALY gains of 14 044 and cost savings of £249 311 295 are possible. Higher quality trusts dominated lower quality trusts; any improvement in the PCCRC rate was cost-effective.

Conclusion

Improving the quality of endoscopy services would lead to QALY gains among the population, in addition to cost savings to the healthcare provider. If all middle and low-performing trusts were improved to the level of a high-performing trust, our results estimate that the English National Health Service would save approximately £5 million per year.
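The population figures in the Results follow from scaling the per-person lifetime gains by the number of people affected. A minimal sketch of that scaling (the cohort size below is hypothetical; the per-person figures are those the abstract reports for attending a high- versus a low-performing trust):

```python
def population_impact(qaly_per_person, saving_per_person, n_people):
    """Scale per-person lifetime QALY gains and cost savings to a cohort."""
    return qaly_per_person * n_people, saving_per_person * n_people

# 0.0012 QALYs and £14.64 saved per person; hypothetical cohort of 1 million
qalys, savings = population_impact(0.0012, 14.64, 1_000_000)
```

The study applies the same logic to the actual English population aged 40 to 86, with each person assigned the gain corresponding to their trust's current performance tier.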

Improving weaning and liberation from mechanical ventilation for tracheostomy patients: a quality improvement initiative

For patients in the intensive care unit (ICU), prolonged mechanical ventilation is associated with poor outcomes. A quality improvement (QI) initiative with the aim of reducing median time on the ventilator for tracheostomy patients was undertaken at a tertiary care ICU in Toronto, Canada. A QI team was formed, and using QI methodology, a deep understanding of our local process was achieved. Based on this information and on the latest evidence on weaning, a standard tracheostomy weaning protocol was designed. The protocol was refined through three developmental and two testing plan–do–study–act cycles. This study was a prospective time series showing the effect of the implementation of our intervention on tracheostomy patients' time on the ventilator. The baseline median number of days on the ventilator after tracheostomy insertion was 17. Within 12 months of the introduction of the intervention, a shift in the data was evident, with the median time on the ventilator falling to 10.6 days. Length of stay in the ICU was reduced by 4.3 days. Adherence to the protocol also improved over time. A standard tracheostomy weaning protocol was successfully developed, tested and implemented in a tertiary care ICU. Using strategies such as frequent communication with key stakeholders and incorporating a tracheostomy weaning progress sheet to document and track tracheostomy patients and their outcomes, this QI intervention has become ingrained in the local culture at our centre. This weaning protocol has successfully reduced the median time on the ventilator for tracheostomy patients by over 6 days.

Testing and cancer diagnosis in general practice

Healthcare systems worldwide have for decades sought to prioritise prompt diagnosis of cancer as a means to improve outcomes. The gatekeeping role of general practitioners (GPs) that restricts access to testing and referral,1 along with their relatively lower propensity to use diagnostic tests,2 has been offered as partial explanations for the UK’s consistently poor performance in cancer compared with other high-income countries.3

In this issue of BMJ Quality & Safety, Akter and colleagues examined primary care investigations prior to a cancer diagnosis using data on 53 252 patients and 1868 general practices from the 2018 English National Cancer Diagnostic Audit.4 Grouping tests into four categories (any investigation, blood tests, imaging and endoscopy), the study demonstrated large variation in use of tests in general practice prior to diagnosis with cancer. Recorded characteristics of practices accounted for only a small proportion of this variation,...

Just how many diagnostic errors and harms are out there, really? It depends on how you count

The significant adverse consequences of diagnostic errors are well established.1 2 Across clinical settings and study methods, diagnostic adverse events often lead to serious permanent disability or death and are frequently deemed preventable.3–5 In malpractice claims, diagnostic adverse events consistently account for more total serious harms than any other individual type of medical error,5 6 a finding supported by large, population-based estimates of total serious misdiagnosis-related harms.2 Despite this, they generally go unrecognised, unmeasured and unmonitored, causing the US National Academy of Medicine to label diagnostic errors as ‘a blind spot’ for healthcare delivery systems.1

Diagnostic errors have been described as ‘the bottom of the iceberg’ of patient safety. This analogy is intended to connote both their enormous impact and their unmeasured, hidden nature relative to more visible errors such as...
