Delfini Group: Evidence-based Medicine

A Cool Click for Evidence-based Medicine (EBM) and Evidence-based Practice (EBP) Commentaries & Health Care Quality Improvement Nibblets

The EBM Information Quest: Is it true? Is it useful? Is it usable?™

Validity Detectives: Michael E. Stuart, MD, President & Medical Director; Sheri Ann Strite, Managing Director & Principal



Volume — Use of Evidence:
Applying the Evidence

De-adopting Ineffective or Harmful Clinical Practices

01/05/2015: Network Meta-analyses—More Complex Than Traditional Meta-analyses


Go to DelfiniClick™ for all volumes.

Institute of Medicine CEO Checklist for High-Value Healthcare

In June 2012 the Institute of Medicine (IOM) published a checklist for healthcare CEOs as a way of encouraging further efforts toward simultaneously reducing costs and eliminating waste.[1] EBMers will find the case studies of great interest. Many of the success stories contain two key ingredients: reliable information to improve decision-making and successful implementation. The full report is available via the reference below.

Foundational Elements

  • Governance priority—visible and determined leadership by CEO and board
  • Culture of continuous improvement—commitment to ongoing, real-time learning

Infrastructure Fundamentals

  • Information technology (IT) best practices—automated, reliable information to and from the point of care
  • Evidence protocols—effective, efficient, and consistent care
  • Resource utilization—optimized use of personnel, physical space, and other resources

Care Delivery Priorities

  • Integrated care—right care, right setting, right providers, right teamwork
  • Shared decision making—patient-clinician collaboration on care plans
  • Targeted services—tailored community and clinic interventions for resource-intensive patients

Reliability and Feedback

  • Embedded safeguards—supports and prompts to reduce injury and infection
  • Internal transparency—visible progress in performance, outcomes, and costs


1. Cosgrove D, Fisher M, Gabow P, et al. A CEO Checklist for High-Value Health Care. Discussion paper. Washington, DC: Institute of Medicine; 2012. https://nam.edu/wp-content/uploads/2015/06/CEOHighValueChecklist.pdf (accessed 08/13/2012).


How to Evaluate Medical Science
Letter to the Editor by Sheri Ann Strite & Michael E. Stuart MD, The Atlantic, January/February 2011

We applaud David H. Freedman’s “Lies, Damned Lies, and Medical Science” (November Atlantic), having long been admirers of professor John Ioannidis. We too evaluate medical evidence and train physicians and others in how to analyze studies for reliability and clinical usefulness. However, we believe the problem is larger, and the consequences of applying the results of misleading science more deleterious, than implied.

Low-quality science significantly contributes to lost care opportunities, illness burden, and mortality. For instance, in the 1980s, observational studies “reported” dramatic tumor shrinkage and reduced mortality in women with advanced breast cancer who were treated with high-dose chemotherapy and autologous bone-marrow transplant. But these studies are highly prone to bias; valid randomized controlled trials are required to prove efficacy of therapies. More than 30,000 women underwent these procedures before randomized controlled trials showed greater adverse events and mortality. And we believe less than 10 percent of such trials are reliable.

Individual biases have been shown to greatly distort study results, frequently in favor of the new treatment being studied. Yet few health-care professionals know the importance of bias in studies, or the basics of identifying it, and so are at high risk of being misled. In an informal tally, roughly 70 percent of physicians fail our basic test for critical appraisal, which should be a foundational discipline for all health-care professionals.

Sheri Ann Strite
Michael E. Stuart, M.D.
Delfini Group
Seattle, Wash.

Full Delfini Commentary on "Lies, Damned Lies, and Medical Science," by David H. Freedman, The Atlantic, November 2010

We have long been admirers of Professor John Ioannidis and routinely cite his important PLoS article, “Why Most Published Research Findings are False,” in our training programs.[1] We were excited to see him featured in The Atlantic in David Freedman’s important article, "Lies, Damned Lies, and Medical Science," which you can read here.

One of the main points in the article is that Professor Ioannidis “…charges that as much as 90 percent of the published medical information that doctors rely on is flawed.” We are hopeful that this important piece casts a spotlight on the problem of flawed and potentially misleading medical science that we continue to raise wherever and however we can.

However—and importantly—we believe the problem of creating and using potentially misleading science to be much, much larger than expressed in the article, and we believe the consequences to the public are much, much greater than implied. In our experience, having reviewed hundreds of clinical trials, roughly 90 percent of randomized controlled trials—never mind observational studies!—are not merely flawed, but so flawed that they are not reliable or clinically useful. So the reality is even worse, in our estimation. And the documentation of resulting patient harms is significant and significantly underreported.

Adding to this problem, the vast majority of physicians, clinical pharmacists and other healthcare professionals have no idea how to evaluate a study in even a basic way (informally, roughly 70 percent of doctors fail our simple 3-question critical appraisal pre-test). With even basic skills, we believe they could dodge perhaps 75 percent or more of potentially misleading clinical trial results. Care for patients would be much better and waste much reduced. (Major groups have estimated that somewhere in the neighborhood of 20 to 50 percent of care in the US is inappropriate.[2-6] That translates into at least one hundred billion dollars each year.)

Ironically, it is relatively easy to learn basic critical appraisal skills. Yet schools generally do not provide effective training in how to conduct studies that are likely to yield valid and useful science and—possibly more importantly—in how to critically appraise the medical literature and apply it. A big issue is that many of our researchers come from this untrained pool, not to mention editors and peer-reviewers (and future academicians). When we wrote to the World Association of Medical Editors in 2007 about their lack of critical appraisal criteria, we received the following response: “Thanks for your interest…regarding the issue of inaccuracy and error in the medical literature (and in fact in all scientific literature). You are correct that this is a large problem. No doubt it exists partly because many reviewers, editors, and authors lack skill, training, and understanding in these issues.”[7]

If healthcare professionals are not competent in evaluating primary studies (original studies such as RCTs) they will not be able to determine if study results are reliable. The problem is carried further: when they read secondary studies (systematic reviews and meta-analyses) without critical appraisal skills, they will be unable to determine if the included primary studies are reliable. And they will have the same problem evaluating secondary sources (guidelines and other derivative works such as monographs and textbooks).

The entire culture of medicine and healthcare needs to change to one in which effective evidence-based practice is the norm, with a true understanding of what that really means. This will not be easy because, unfortunately, the magnitude and severity of the problem is not widely appreciated, and the concept of evidence-based practice is frequently misunderstood. It all starts with our schools. This article casts light on the tip of a very big and dangerous iceberg, and we are thankful to Dave Freedman for writing it and to Professor Ioannidis for all his important work, including participating with Dave in this story.


1. Ioannidis JPA. Why Most Published Research Findings Are False. PLoS Med 2005;2(8):e124. PMID: 16060722.

2. Chassin MR, Galvin RW. The urgent need to improve health care quality: Institute of Medicine National Roundtable on Health Care Quality [consensus statement]. JAMA 1998;280:1000-5.

3. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003;348:2635-45.

4. Kerr EA, McGlynn EA, Adams J, Keesey J, Asch SM. Profiling the quality of care in twelve communities: results from the CQI study. Health Aff (Millwood) 2004;23:247-56.

5. Skinner J, Fisher ES, Wennberg JE. The Efficiency of Medicare. NBER Working Paper No. 8395. Cambridge, MA: National Bureau of Economic Research; 2001. Available: papers.ssrn.com/sol3/papers.cfm?abstract_id=277305. Accessed 13 January 2010.

6. Centers for Medicare and Medicaid Services, Office of the Actuary, National Health Statistics Group. National Health Care Expenditures Data, January 2010.

7. Personal correspondence from the World Association of Medical Editors (WAME).


De-adopting Ineffective or Harmful Clinical Practices

Although a fair amount is known about diffusion and adoption of new ideas and practices, less is known about the abandonment of practices that are ineffective or harmful ("undiffusion"). We want to draw your attention to a recent editorial that addresses this issue [Davidoff].

Here are some of the take-home points:

  • People frequently have difficulty discontinuing familiar practices because of "aversion to loss" which appears to be at least partially explained by—
    • Comfort with what is familiar;
    • Embarrassment for having adopted ineffective or inappropriate practices;
    • Reluctance to abandon practices that may have required investment of energy, dollars and time to implement;
    • Economic considerations; and,
    • Inertia.
  • We lack an adequate model for explaining "undiffusion" or fostering the abandonment of ineffective or harmful practices.

Some of the interesting concepts, examples and case studies explored in the editorial include—

  • Tight glycemic control in ICUs and the role of "negative evidence" in the process of undiffusion;
  • Rapid adoption (without sufficient evidence) of noninvasive preoperative screening for coronary disease in patients who are undergoing noncardiac surgery; and,
  • Continued specialty society support for current medical practices despite sufficient evidence to support their undiffusion.

Delfini Comment
This thoughtful editorial nicely illustrates two complex problems: the health care universe quickly adopts "innovative" practices that have been advanced without sufficient evidence to assure net benefit or value, and undiffusion (aka discontinuation, de-adoption, de-implementation, reversal, rejection, withdrawal) is then difficult to achieve.


Davidoff F. On the Undiffusion of Established Practices. JAMA Intern Med. 2015 Mar 16. doi: 10.1001/jamainternmed.2015.0167. [Epub ahead of print] PubMed PMID: 25774743.


Sounding the Alarm (Again) in Oncology

Five years ago, Fojo and Grady sounded the alarm about the value of many of the new oncology drugs [1]. They raised the following issues and challenged oncologists and others to address them:

  • There is a great deal of uncertainty and confusion about what constitutes a benefit in cancer therapy; and,
  • How much should cost factor into these deliberations?

The authors review a number of oncology drug studies reporting increased overall survival (OS) ranging from a median of a few days to a few months, with total new drug costs ranging from $15,000 to more than $90,000. In some cases there is no increase in OS, only in progression-free survival (PFS), which is a weaker outcome measure because it is prone to tumor-assessment bias and is frequently assessed in studies of short duration. Adverse events associated with the new drugs are many and include higher rates of febrile neutropenia, infusion-related reactions, diarrhea, skin toxicity, infections, hypertension and other adverse events.
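To put those figures in context, a crude cost-per-life-year calculation helps; this is our illustrative sketch using the ranges quoted above, not a computation performed by the authors:

```python
def cost_per_life_year(drug_cost_usd, os_gain_months):
    """Crude cost per life-year gained: total drug cost divided by the
    survival gain converted to years. Ignores discounting, quality of
    life, and adverse events."""
    return drug_cost_usd / (os_gain_months / 12.0)

# Illustrative inputs: a $90,000 course of therapy yielding a 1.2-month
# median OS gain works out to roughly $900,000 per life-year gained.
cpl = cost_per_life_year(90_000, 1.2)
```

Swapping in the other ends of the quoted cost and survival ranges shows how widely the implied cost per life-year can vary across these drugs.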

Fojo and Grady point out that—

"Many Americans would likely not regard a 1.2-month survival advantage as 'significant' progress, the much revered P value notwithstanding. But would an individual patient agree? Although we lack the answer to this question, we would suggest that the death of a mother of four at age 37 years would be no less painful were it to occur at age 37 years and 1 month, nor would the passing of a 67-year-old who planned to travel after retiring be any less difficult for the spouse were it to have occurred 1 month later."

In a recent article [2] (thanks to Dr. Richard Lehman for drawing our attention to this article in his wonderful BMJ blog) Fojo and colleagues again point out that—

  • Cancer is the number one cause of mortality worldwide, and cancer cases are projected to rise by 75% over the next 2 decades.
  • Of the 71 therapies for solid tumors that received FDA approval from 2002 to 2014, only 30 (42%) met the American Society of Clinical Oncology Cancer Research Committee’s “low hurdle” criteria for clinically meaningful improvement. Further, the authors tallied results from all the studies and reported very modest collective median gains of 2.5 months for PFS and 2.1 months for OS. Numerous surveys have indicated that patients expect much more.
  • Expensive therapies are stifling progress by (1) encouraging enormous expenditures of time, money, and resources on marginal therapeutic indications; and, (2) promoting a me-too mentality that is stifling innovation and creativity.

The last bullet needs a little explaining. The authors provide a number of examples of “safe bets” and argue that revenue from such safe and profitable therapies rather than true need has been a driving force for new oncology drugs. The problem is compounded by regulations—e.g., rules which require Medicare to reimburse patients for any drug used in an “anti-cancer chemotherapeutic regimen"—regardless of its incremental benefit over other drugs—as long as the use is “for a medically accepted indication” (commonly interpreted as “approved by the FDA”). This provides guaranteed revenues for me-too drugs irrespective of their marginal benefits. The authors also point out that when prices for drugs of proven efficacy fall below a certain threshold, suppliers often stop producing the drug, causing severe shortages.

What can be done? The authors acknowledge several times in their commentary that the spiraling cost of cancer therapies has no single villain; academia, professional societies, scientific journals, practicing oncologists, regulators, patient advocacy groups and the biopharmaceutical industry—all bear some responsibility. [We would add to this list physicians, P&T committees and any others who are engaged in treatment decisions for patients. Patients are not on this list (yet) because they are unlikely to really know the evidence.] This is like many other situations when many are responsible—often the end result is that "no one" takes responsibility. Fojo et al. close by making several suggestions, among which are—

  1. Academicians must avoid participating in the development of marginal therapies;
  2. Professional societies and scientific journals must raise their standards and not spotlight marginal outcomes;
  3. All of us must also insist on transparency and the sharing of all published data in a timely and enforceable manner;
  4. Actual gains of benefit must be emphasized—not hazard ratios or other measures that force readers to work hard to determine actual outcomes and benefits and risks;
  5. We need cooperative groups with adequate resources to provide leadership to ensure that trials are designed to deliver meaningful outcomes;
  6. We must find a way to avoid paying premium prices for marginal benefits; and,
  7. We must find a way [federal support?] to secure altruistic investment capital.

Delfini Comment
While the authors do not make a suggestion for specific responsibilities or actions on the part of the FDA, they do make a recommendation that an independent entity might create uniform measures of benefits for each FDA-approved drug—e.g., quality-adjusted life-years. We think the FDA could go a long way in improving this situation.

And so, as pointed out by Fojo et al., only small gains have been made in OS over the past 12 years, while the costs of oncology drugs have skyrocketed. To make matters even worse than portrayed by Fojo et al., many of the oncology drug studies we see have major threats to validity (e.g., selection bias, lack of blinding and other performance biases, attrition and assessment bias), raising the question, "Does the approximate 2-month gain in median OS represent an overestimate?" Since bias tends to favor the new intervention in clinical trials, the PFS and OS reported in many recent oncology trials may be exaggerated or even absent, or harms may outweigh benefits. On the other hand, if a study is valid, a median is the midpoint in a range of results, and an individual patient may achieve better results than the median indicates; some patients may therefore choose to accept a new therapy. The important thing is that patients are given information on benefits and harms in a way that allows them a reasonable understanding of all the issues so they can make the choices that are right for them.
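As a toy illustration of the median point (hypothetical numbers, not data from any trial), two sets of survival times can have medians only 2 months apart while a few treated patients live far longer:

```python
# Hypothetical survival times in months for two arms of an imaginary trial.
control = [3, 5, 6, 8, 10, 12, 14, 18, 24, 30]
treated = [3, 5, 7, 9, 12, 14, 16, 20, 48, 60]

def median(xs):
    """Middle value of a sorted list (mean of the two middle values
    when the list length is even)."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

gain = median(treated) - median(control)  # 13 - 11 = 2 months at the median
tail = max(treated) - max(control)        # 30 extra months for the longest survivor
```

The 2-month median difference understates what the two patients in the right tail experienced; this is the sense in which a valid trial's modest median gain can still leave room for an individual patient to do much better.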

Resources & References


  1. The URL for Dr. Lehman's Blog is—
  2. The URL for his original blog entry about this article is—


  1. Fojo T, Grady C. How much is life worth: cetuximab, non-small cell lung cancer, and the $440 billion question. J Natl Cancer Inst. 2009 Aug 5;101(15):1044-8. Epub 2009 Jun 29. PMID: 19564563
  2. Fojo T, Mailankody S, Lo A. Unintended Consequences of Expensive Cancer Therapeutics-The Pursuit of Marginal Indications and a Me-Too Mentality That Stifles Innovation and Creativity: The John Conley Lecture. JAMA Otolaryngol Head Neck Surg. 2014 Jul 28. doi: 10.1001/jamaoto.2014.1570. [Epub ahead of print] PubMed PMID: 25068501.


Involving Patients in their Care Decisions and JAMA Editorial: The New Cholesterol and Blood Pressure Guidelines: Perspective on the Path Forward

Krumholz HM. The New Cholesterol and Blood Pressure Guidelines: Perspective on the Path Forward. JAMA. 2014 Mar 29. doi: 10.1001/jama.2014.2634. [Epub ahead of print] PubMed PMID: 24682222.


Here is an excellent editorial that highlights the importance of patient decision-making.  We thank the wonderful Dr. Richard Lehman, MA, BM, BCh, Oxford, & Blogger, BMJ Journal Watch, for bringing this to our attention.  We have often observed that evidence can be a neutralizing force. This editorial highlights for us that this means involving the patient in a meaningful way and finding ways to support decisions based on patients' personal requirements. These personal "patient requirements" include health care needs and wants and a recognition of individual circumstances, values and preferences.

To achieve this, we believe that patients should receive the same information as clinicians: what alternatives are available, a quantified assessment of the potential benefits and harms of each, the strength of evidence for each, and the potential consequences of various choices, including things like vitality and cost.

Decisions may differ between patients, and physicians may make incorrect assumptions about what matters most to patients; the literature contains many examples, such as the citations below.

O'Connor A. Using patient decision aids to promote evidence-based decision making. ACP J Club. 2001 Jul-Aug;135(1):A11-2. PubMed PMID: 11471526.

O'Connor AM, Wennberg JE, Legare F, Llewellyn-Thomas HA, Moulton BW, Sepucha KR, et al. Toward the 'tipping point': decision aids and informed patient choice. Health Aff (Millwood) 2007;26(3):716-25.

Rothwell PM. External validity of randomised controlled trials: "to whom do the results of this trial apply?". Lancet. 2005 Jan 1-7;365(9453):82-93. PubMed PMID: 15639683.

Stacey D, Bennett CL, Barry MJ, Col NF, Eden KB, Holmes-Rovner M, Llewellyn-Thomas H, Lyddiatt A, Légaré F, Thomson R. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2011 Oct 5;(10):CD001431. Review. PubMed PMID: 21975733.

Wennberg JE, O'Connor AM, Collins ED, Weinstein JN. Extending the P4P agenda, part 1: how Medicare can improve patient decision making and reduce unnecessary care. Health Aff (Millwood). 2007 Nov-Dec;26(6):1564-74. PubMed PMID: 17978377.


Gastro-esophageal Reflux Disease (GERD), The Purple Pill and Plenty of Pennies

We thought we would pass on some information about PPIs, having been asked by a medical director friend of ours what we thought about Nexium®—aka the controversy concerning the purple pill (esomeprazole) versus Prilosec (omeprazole) when considering comparative efficacy and cost. Our friend mentioned that he was concerned because Nexium costs about $5.60 per pill and Prilosec costs about 60 cents per pill, and some of his physicians believe that Nexium is more effective for GERD (gastro-esophageal reflux disease) than Prilosec.

Background: In the 1980s, proton pump inhibitors (PPIs) were quickly adopted for reducing symptoms associated with gastric hyperacidity, e.g., heartburn and reflux symptoms. Prilosec® (omeprazole) was to go off patent in 2001, and generic omeprazole would become available at a lower cost.

In 2000, a trial, “Esomeprazole improves healing and symptom resolution as compared to omeprazole in reflux oesophagitis: a randomized controlled trial [1],” concluded that, “Esomeprazole was more effective than omeprazole in healing and symptom resolution in GERD patients with reflux oesophagitis, and had a tolerability profile comparable to that of omeprazole.” In this trial, 1960 patients with endoscopy-confirmed reflux esophagitis were randomized to once-daily esomeprazole 40 mg (n=654) or 20 mg (n=656), or omeprazole 20 mg (n=650) for up to 8 weeks.

What We Found: This study had a few flaws. For example, details of randomization were not reported and baseline characteristics differed with more patients with mild (versus more severe) esophagitis in the esomeprazole 40 mg group than in the omeprazole 20 mg group. Four of the 9 authors were from Astra Zeneca, and the study was supported by a grant from Astra Zeneca. (We do not reject studies because of industry involvement, but we do pay attention to this.)

Three subsequent trials using the same doses of the two drugs were also conducted, and two of the three trials reported statistically significant differences in symptom relief (favoring esomeprazole) with a pooled estimate of effect of an 8%, 95% CI (3% to 13%) difference in the proportion of subjects achieving resolution of GERD symptoms.
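For readers who want to see where a figure like "8%, 95% CI (3% to 13%)" comes from, here is a minimal sketch using the standard Wald formula for a difference in proportions; the arm sizes and event counts are hypothetical, chosen only to land near the pooled estimate quoted above:

```python
import math

def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Difference in proportions (arm A minus arm B) with a Wald 95% CI."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rd = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return rd, rd - z * se, rd + z * se

# Hypothetical arms: 400/500 responders on esomeprazole vs 360/500 on
# omeprazole gives an 8% difference with a CI of roughly 3% to 13%.
rd, ci_low, ci_high = risk_difference_ci(400, 500, 360, 500)
```

A real pooled estimate would combine several trials (e.g., by inverse-variance weighting), but each trial's contribution is computed essentially this way.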

This and other information found in a very helpful recent OHSU review of PPIs paints a more complete picture of the comparative efficacy of omeprazole, esomeprazole, and other PPIs [2]:

  • Among 16 head-to-head trials of PPIs, trials comparing the two drugs at comparable doses did not find differences in symptom relief or healing of esophagitis:
    • Pooled rate for resolution of symptoms at 4 weeks in 7 trials with esomeprazole 40mg was 73%, 95% CI (65% to 82%).
    • Pooled rate for resolution of symptoms at 4 weeks in 2 trials with omeprazole 40 mg was 76%, 95% CI (65% to 87%).
    • Pooled rate for esophagitis healing in 8 trials with esomeprazole 40mg was 78%, 95% CI (73% to 83%). At 8 weeks the rate was 90%, 95% CI (88% to 92%).
    • Pooled rate for esophagitis healing in 2 trials with omeprazole 40 mg was 68%, 95% CI (59% to 78%). At 8 weeks, the rate (1 trial) was 87%, 95% CI (76% to 99%).

The question, then: is esomeprazole really clinically superior to omeprazole in reducing symptoms in patients with GERD (gastro-esophageal reflux disease) or in healing rates in esophagitis?

Our conclusion: There is insufficient evidence to conclude that there are meaningful clinical differences between omeprazole (Prilosec) and esomeprazole (Nexium) when given in comparable doses for symptom relief or healing of heartburn or esophagitis. Resolution of symptoms and healing rates are very high with either drug—in the neighborhood of 75 percent. The cost differences, however, are significant.

1. Kahrilas PJ, Falk GW, Johnson DA, Schmitt C, Collins DW, Whipple J, D'Amico D, Hamelin B, Joelsson B. Esomeprazole improves healing and symptom resolution as compared with omeprazole in reflux oesophagitis patients: a randomized controlled trial. The Esomeprazole Study Investigators. Aliment Pharmacol Ther. 2000 Oct;14(10):1249-58. PMID: 11012468

2. The most useful systematic review we could find comes from the Oregon Health & Science University (OHSU) Center for Evidence-based Policy: (http://derp.ohsu.edu/about/final-document-display.cfm). Accessed 7/13/10.


Guidelines & Effectiveness of Implementation

Tremendous effort goes into the development of high-quality clinical guidelines and effective disease management programs. But there is much uncertainty about how to effectively implement the final product: the evidence-based clinical improvements. In this systematic review, Weingarten et al. report that clinicians do change their practice in response to provider education efforts, among other strategies. Click to learn more about improvements in care directed at clinicians and patients.

Link to the Abstract (Full Text Available)

Weingarten S, et al. Interventions used in disease management programmes for patients with chronic illness—which ones work? Meta-analysis of published reports. BMJ 2002;325:925 (26 October).



Oregon Preferred Drug List: Conference Report

"Adding Value to the Cost Equation with Prescription Drugs: The Oregon Experience" — Tuesday, February 25th, Seattle Washington

Mike was on a panel, including many notables. Keynote address was by John Kitzhaber, MD, Former Governor, State of Oregon. The conference was a great success. Read about the program.

To access, copy the following into an internet search engine:
http://seattletimes.nwsource.com/cgi-bin/PrintStory.pl?document_id=134641620&zsection_id=268448406&slug=prescription26m&date=20030226


Successful Evidence-based QI Project: Diabetes Management at Dreyer Medical Clinic
Example provided by Rami Rihani, PharmD, Director of Pharmacy

Delfini Introduction
Measuring clinical improvements is complex. One of the most important, frequently misunderstood issues is that cause and effect relationships can only be drawn with reasonable certainty from valid experiments (RCTs). However, if we have valid evidence from RCTs that an intervention leads to improved clinical outcomes, it is then reasonable to use process measures to evaluate the success of our evidence-based clinical improvement project.

Generally, we advise people not to measure health status outcomes, but instead to perform a process measurement that evaluates how successfully the intervention was applied. In other words, we advise people to measure the success of implementation of the clinical improvement. For example, if we are trying to ensure patients get a beta-blocker post-MI, we would recommend looking to see whether prescriptions increased for hospitalized MI patients, not measuring whether patient survival improved. This is because observational data, such as information extracted from databases, can be highly prone to confounding. If health status outcomes are measured, then we advise ensuring that all those utilizing the data understand that conclusions drawn from observational data can be misleading. In the above example, if patient survival decreased, there could be many explanations.

However, if a health status outcome is measured, and if the before/after change is dramatic, it is reasonable to hypothesize that our project has been successful. For example…

Many diabetics have difficulty achieving an HbA1c <7.0. Frequently diabetics are told their HbA1cs are too high, but active medication change is not aggressively pursued.

Evidence-based QI Project: A quality improvement group at Dreyer Medical Clinic developed a disease management initiative using PharmDs to actively titrate dosages of insulin and other drugs based on the Intermountain Health Care (IHC) diabetes management protocol. The process is as follows:

  • Primary care physician (PCP) refers patient to the diabetes management program;
  • PharmD aggressively titrates medication based on IHC protocol; and,
  • PharmD monitors for safety and efficacy of medication interventions in collaboration with the PCP.


Outcome (n=1049) | Prior to Enrollment | Most Recent Follow-up
% at HbA1c < 7% | (value not shown) | (value not shown)
% at LDL < 100 | (value not shown) | (value not shown)

Delfini Commentary
There was a significant improvement in the percent of patients achieving goal HbA1c and LDL associated with this project.

It is reasonable to believe that the clinical improvement project was successful. Using outcomes data from the UK Prospective Diabetes Study 35 (1), the QI team estimates that since inception, the disease management initiative resulted in the prevention of —

  • four diabetes related deaths and
  • nine microvascular events (defined as renal failure, death from renal failure, retinal photocoagulation, or vitreous hemorrhage)
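The arithmetic behind an estimate like this can be sketched as follows. UKPDS 35 reported that each 1% absolute reduction in HbA1c was associated with roughly a 21% reduction in diabetes-related deaths. The baseline risk, HbA1c drop, and time horizon below are our illustrative assumptions, not the Dreyer program's actual figures, and the linear scaling is a simplification of the observational association:

```python
def events_prevented(n_patients, annual_baseline_risk, rrr_per_1pct_hba1c,
                     hba1c_drop_pct, years):
    """Expected events averted = expected events at baseline risk over the
    period, times the relative risk reduction attributed to the HbA1c drop."""
    expected_events = n_patients * annual_baseline_risk * years
    relative_reduction = rrr_per_1pct_hba1c * hba1c_drop_pct
    return expected_events * relative_reduction

# Assumed inputs (illustrative only): 1% annual baseline risk of
# diabetes-related death, a 1-point HbA1c drop sustained for 2 years,
# and UKPDS 35's ~21% association per 1% HbA1c.
deaths_averted = events_prevented(1049, 0.01, 0.21, 1.0, 2)  # about 4.4
```

Because the inputs come from observational data, such figures are hypotheses about program impact, not proof, which is the caution raised in the introduction above.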

1. Stratton IM, Adler AI, et al. Association of glycaemia with macrovascular and microvascular complications of type 2 diabetes (UKPDS 35): prospective observational study. BMJ 2000;321:405-12.


External Validity: Case of the Carotid Stent

The lead article in the Oct. 7, 2004 issue of The New England Journal of Medicine (Yadav JS, Wholey MH, Kuntz RE, et al. Protected Carotid-Artery Stenting versus Endarterectomy in High-Risk Patients. N Engl J Med 2004;351:1493-501 — PMID: 15470212), known as the SAPPHIRE trial, is an RCT comparing carotid stenting using an embolic protection device to endarterectomy. Patients had symptomatic carotid stenosis of at least 50 percent, or asymptomatic stenosis of at least 80 percent, plus at least one high-risk factor such as recurrent stenosis after endarterectomy, age >80, cardiac disease, or other significant medical problems. The trial appears to have adequate randomization and concealment of allocation, similar baseline characteristics of subjects, adequate blinding, and intention-to-treat analysis. More than 20% of the patients in both treatment groups had recurrent stenosis after endarterectomy.

The primary end point was a composite of death, stroke, or MI within 30 days after the intervention, or death or ipsilateral stroke between 31 days and 1 year. The study appeared to be a sufficiently powered non-inferiority study. Study team members included both a surgeon and an interventionist who could prevent randomization of enrolled subjects if, in their judgment, the procedures could not be performed safely in these high-risk patients. 747 patients were enrolled, and 334 underwent randomization. The primary endpoint occurred in 20 patients in the stenting group (12.2%) and in 32 in the endarterectomy group (20.1%), yielding a P value of 0.004 for non-inferiority. At one year, revascularization was repeated in 0.6% of the stenting patients and 4.3% of the endarterectomy patients (P=0.04). The authors conclude that, in patients with severe carotid-artery stenosis and coexisting conditions, carotid stenting with an embolic protection device is not inferior to carotid endarterectomy.
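The headline numbers above can be sanity-checked with a little arithmetic. A minimal sketch in Python, assuming the per-arm denominators can simply be back-calculated from the reported event counts and percentages (the abstract does not state them directly, so these are approximations):

```python
# Back-calculate approximate per-arm denominators from the reported event
# counts and percentages, then compute the absolute risk reduction (ARR)
# and number needed to treat (NNT). Denominators are inferred, not reported.
events_stent, pct_stent = 20, 12.2   # stenting arm: 20 events, 12.2%
events_surg, pct_surg = 32, 20.1     # endarterectomy arm: 32 events, 20.1%

n_stent = round(events_stent / (pct_stent / 100))  # ≈ 164 patients
n_surg = round(events_surg / (pct_surg / 100))     # ≈ 159 patients

arr = events_surg / n_surg - events_stent / n_stent  # ≈ 0.079
nnt = 1 / arr                                        # ≈ 12.6, i.e., about 13
```

On these inferred figures, roughly 13 patients would need to receive stenting instead of endarterectomy to prevent one primary-endpoint event — but, as the commentary below notes, selection issues limit how far such numbers generalize.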

What are the biggest threats to validity in this study (based on the above information)?

Our Answer
There is a huge problem with selection bias and external validity in this study. More than 20% of the patients in both treatment groups had recurrent stenosis after endarterectomy, which may bias the results toward stenting. In addition, 55% of enrolled patients (413 of 747) were excluded by the physician team, raising further concerns about selection bias and about how results from such highly selected patients generalize.


Update on Decision Support for Clinicians and Patients

We have written extensively about decision support and provide many examples of decision support materials on our website. An easy way to round them up is to go to our website search window http://www.delfini.org/index_SiteGoogleSearch.htm and type in the terms “decision support.”

A nice systematic review of the topic funded by AHRQ—Clinical Decision Support Systems (CDSSs)—has recently been published in the Annals of Internal Medicine.[1] The aim of the review was to evaluate the effect of CDSSs on clinical outcomes, health care processes, workload and efficiency, patient satisfaction, cost, and provider use and implementation. CDSSs include alerts, reminders, order sets, drug-dose information, care summary dashboards that provide performance feedback on quality indicators, and information and other aids designed to improve clinical decision-making.

Findings: 148 randomized controlled trials were included in the review. A total of 128 (86%) assessed health care process measures, 29 (20%) assessed clinical outcomes, and 22 (15%) measured costs. Both commercially and locally developed CDSSs improved health care process measures related to performing preventive services (n=25; odds ratio [OR] 1.42, 95% CI [1.27 to 1.58]), ordering clinical studies (n=20; OR 1.72, 95% CI [1.47 to 2.00]), and prescribing therapies (n=46; OR 1.57, 95% CI [1.35 to 1.82]). There was heterogeneity in interventions, populations, settings, and outcomes, as would be expected. The authors conclude that commercially and locally developed CDSSs are effective at improving health care process measures across diverse settings, but evidence for clinical, economic, workload, and efficiency outcomes remains sparse.
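When appraising pooled odds ratios like those above, it can be useful to back-calculate the standard error of the log odds ratio from the reported 95% CI — a standard appraisal technique. A minimal sketch, using the preventive-services estimate (OR 1.42, 95% CI 1.27 to 1.58) from the review:

```python
import math

def log_or_se(lower, upper, z=1.959964):
    """Back-calculate the standard error of a log odds ratio
    from the bounds of its 95% confidence interval."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

# Preventive-services estimate from the review: OR 1.42 (95% CI 1.27 to 1.58)
se = log_or_se(1.27, 1.58)           # ≈ 0.056 on the log scale

# Sanity check: for a symmetric log-scale CI, the point estimate should
# sit near the geometric mean of the interval bounds
geo_mean = math.sqrt(1.27 * 1.58)    # ≈ 1.42, matching the reported OR
```

This kind of back-calculation lets a reader verify internal consistency of reported results and feed estimates into their own meta-analytic checks.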

Delfini Comment: Although this review focused on decision support systems, the entire realm of decision support for end users is of great importance to all health care decision-makers. Without good decision support, we will all make suboptimal decisions. This area is huge, and it is worth spending time understanding how to move evidence from a synthesis into decision support. Interested readers might want to look at some examples of wonderful decision support materials created at the Mayo Clinic. The URL is—



1.  Bright TJ, Wong A, Dhurjati R, Bristow E, Bastian L, Coeytaux RR, Samsa G, Hasselblad V, Williams JW, Musty MD, Wing L, Kendrick AS, Sanders GD, Lobach D. Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012 Jul 3;157(1):29-43. PubMed PMID: 22751758.


Divulging Information to Patients With Poor Prognoses

We have seen several instances in which our colleagues’ families were given very little prognostic information by their physicians in situations where important decisions involving benefits versus harms, quality of life, and other end-of-life choices had to be made. In these cases, when a clinician in the family presented the evidence and prognostic information, decisions were altered.

We were happy to see a review of this topic by Mack and Smith in a recent issue of the Journal of Clinical Oncology.[1] In a nutshell the authors point out that—

  • Evidence consistently shows that healthcare professionals are hesitant to divulge prognostic information due to several underlying misconceptions. Examples of misconceptions—
    • Prognostic information will make patients depressed
    • It will take away hope
    • We can’t be sure of the patient’s prognosis anyway
    • Discussions about prognosis are uncomfortable
  • Many patients are denied discussion about code status, advance medical directives, or even hospice until there are no more treatments to give and little time is left for the patient
  • Many patients lose important time with their families and spend more time in the hospital and in intensive care units than they would have if prognostic information had been provided and different decisions had been made.

Patients and families want prognostic information, which they need to make decisions that are right for them. This, together with the lack of evidence that discussing prognosis causes depression, shortens life, or takes away hope, and the huge problem of unnecessary interventions at the end of life, creates a strong argument for honest communication about poor prognoses.


1. Mack JW, Smith TJ. Reasons why physicians do not have discussions about poor prognosis, why it matters, and what can be improved. J Clin Oncol. 2012 Aug 1;30(22):2715-7. Epub 2012 Jul 2. PubMed PMID: 22753911.


Network Meta-analyses—More Complex Than Traditional Meta-analyses

Meta-analyses are important tools for synthesizing evidence from relevant studies. One limitation of traditional meta-analyses is that they can compare only 2 treatments at a time in what is often termed pairwise or direct comparisons. An extension of traditional meta-analysis is the "network meta-analysis" which has been increasingly used—especially with the rise of the comparative effectiveness movement—as a method of assessing the comparative effects of more than two alternative interventions for the same condition that have not been studied in head-to-head trials.

A network meta-analysis synthesizes direct and indirect evidence over an entire network of interventions, allowing comparisons between treatments that have not been compared head-to-head in clinical trials but that share a common comparator.

For example, one clinical trial reports that, for a given condition, intervention A results in better outcomes than intervention B. Another trial reports that intervention B is better than intervention C. A network meta-analysis can then estimate, based on indirect evidence, that intervention A results in better outcomes than intervention C.

Network meta-analyses, also known as “multiple-treatments meta-analyses” or “mixed-treatment comparisons meta-analyses” include both direct and indirect evidence. When both direct and indirect comparisons are used to estimate treatment effects, the comparison is referred to as a "mixed comparison." The indirect evidence in network meta-analyses is derived from statistical inference which requires many assumptions and modeling. Therefore, critical appraisal of network meta-analyses is more complex than appraisal of traditional meta-analyses.
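The A-versus-C logic above can be made concrete. A minimal sketch of an adjusted indirect comparison (the Bucher method), using made-up odds ratios for illustration rather than data from any cited trial:

```python
import math

# Sketch of the Bucher adjusted indirect comparison. The odds ratios and
# standard errors below are hypothetical, chosen only to illustrate the method.
def indirect_comparison(log_or_ab, se_ab, log_or_bc, se_bc):
    """Estimate A vs C via common comparator B on the log odds ratio scale:
    log(OR_AC) = log(OR_AB) + log(OR_BC), with the variances summing."""
    log_or_ac = log_or_ab + log_or_bc
    se_ac = math.sqrt(se_ab**2 + se_bc**2)
    return log_or_ac, se_ac

# Hypothetical trials: A vs B gives OR 0.80 (SE 0.10); B vs C gives OR 0.90 (SE 0.15)
log_or_ac, se_ac = indirect_comparison(math.log(0.80), 0.10,
                                       math.log(0.90), 0.15)
or_ac = math.exp(log_or_ac)  # ≈ 0.72
```

Note the statistical penalty: the indirect estimate's standard error (≈0.18 here) is larger than that of either contributing trial, one reason indirect evidence warrants extra caution.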

In all meta-analyses, clinical and methodological differences between studies are likely to be present. Investigators should include only valid trials, and they should provide sufficient detail so that readers can assess the quality of the meta-analysis. These details include important variables such as PICOTS (population, intervention, comparator, outcomes, timing, and study setting) and heterogeneity in any important study performance items or other contextual issues such as important biases, unique care experiences, adherence rates, etc. In addition, the effect sizes in direct comparisons should be compared to the effect sizes in indirect comparisons, since indirect comparisons require statistical adjustments. Inconsistency between the direct and indirect comparisons may be due to chance, bias, or heterogeneity. Remember, in direct comparisons the data come from the same trial; indirect comparisons utilize data from separate randomized controlled trials, which may vary in both clinical and methodological details.
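The consistency check described above — comparing the direct and indirect estimates of the same comparison — can be sketched as a simple z-test on the log scale. The numbers here are illustrative assumptions, not taken from any real review:

```python
import math

def inconsistency_z(log_or_direct, se_direct, log_or_indirect, se_indirect):
    """z-statistic for the difference between direct and indirect estimates
    of the same comparison; |z| > 1.96 suggests statistical inconsistency."""
    diff = log_or_direct - log_or_indirect
    return diff / math.sqrt(se_direct**2 + se_indirect**2)

# Illustrative values: direct OR 0.90 (SE 0.12) vs indirect OR 0.72 (SE 0.18)
z = inconsistency_z(math.log(0.90), 0.12, math.log(0.72), 0.18)  # ≈ 1.03
```

In this hypothetical case |z| is well below 1.96, so there is no statistical signal of inconsistency — though, as noted above, absence of a signal does not rule out bias or heterogeneity as explanations for any observed difference.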

Estimates of effect in a direct comparison trial may be lower than estimates of effect derived from indirect comparisons. Therefore, evidence from direct comparisons should be weighted more heavily than evidence from indirect comparisons in network meta-analyses. The combination of direct and indirect evidence in mixed-treatment comparisons may be more likely to result in distorted estimates of effect size if there is inconsistency between the effect sizes of direct and indirect comparisons.

Network meta-analyses usually rank different treatments according to the probability of being the best treatment. Readers should be aware that these rankings may be misleading: the differences between treatments may be quite small, and the rankings may be inaccurate if the quality of the meta-analysis is not high.

Delfini Comment
Network meta-analyses do provide more information about the relative effectiveness of interventions. At this time, we remain a bit cautious about the quality of many network meta-analyses because of the need for statistical adjustments. It should be emphasized that, as of this writing, methodological research has not established a preferred method for conducting network meta-analyses, assessing their validity, or assigning them an evidence grade.

Li T, Puhan MA, Vedula SS, Singh S, Dickersin K; Ad Hoc Network Meta-analysis Methods Meeting Working Group. Network meta-analysis-highly attractive but more methodological research is needed. BMC Med. 2011 Jun 27;9:79. doi: 10.1186/1741-7015-9-79. PubMed PMID: 21707969.

Salanti G, Del Giovane C, Chaimani A, Caldwell DM, Higgins JP. Evaluating the quality of evidence from a network meta-analysis. PLoS One. 2014 Jul 3;9(7):e99682. doi: 10.1371/journal.pone.0099682. eCollection 2014. PubMed PMID: 24992266.




© 2002-2020 Delfini Group, LLC. All Rights Reserved Worldwide.
Use of this website implies your agreement to our Notices.

EBM Solutions for Evidence-based Health Care Quality