Throughout our training as doctors, and now into our specialist careers, we have been taught about evidence-based medicine and levels of evidence. This article is about levels of evidence and how they have evolved over time.
Evolution of Levels of Evidence
Fletcher and Sackett were probably the first to formally generate 'levels of evidence' when they were working for the Canadian Task Force on the Periodic Health Examination in 1979. Their article can be accessed here with free pdf access: http://www.ncbi.nlm.nih.gov/pubmed/115569.
They generated "levels of evidence" for ranking the validity of evidence about the value of preventive manoeuvres, and then tied them to "grades of recommendations" in the report.
Over time, this concept evolved, driven largely by Sackett's studies on the use of anti-thrombotic agents. Here are two further papers (again available in free pdf format from the journal site).
Sackett 1989 http://www.ncbi.nlm.nih.gov/pubmed/2914516
Sackett 1992 http://www.ncbi.nlm.nih.gov/pubmed/1395818
Note the gradual incorporation of issues like power and confidence intervals in clinical trial results.
This commentary by Sackett on how he incorporated 'evidence' into his career makes great reading:
As the concept of levels of evidence became popular and began appearing in textbooks and recommendations, a need evolved for such 'levels' in areas of medicine not directly concerned with therapeutics/prevention, e.g. diagnostics and economic analyses.
The Centre for Evidence Based Medicine in Oxford has been expanding this concept, and you can find the current version here:
Here is a table with the levels for therapeutic studies (copied from the above site):
Levels of Evidence
| Level | Description |
|---|---|
| 1a | Systematic Review (SR) (or meta-analysis) (with homogeneity) of Randomized Controlled Trials (RCTs) |
| 1b | Individual RCT (with narrow confidence interval) |
| 1c | All or none |
| 2a | SR (with homogeneity) of cohort studies |
| 2b | Individual cohort study (including low-quality RCT; e.g., <80% follow-up) |
| 2c | "Outcomes" research; ecological studies |
| 3a | SR (with homogeneity) of case-control studies |
| 3b | Individual case-control study |
| 4 | Case series (and poor-quality cohort and case-control studies) |
| 5 | Expert opinion without explicit critical appraisal, or based on physiology, bench research or "first principles" |
- What's homogeneity/heterogeneity? In a systematic review, when there are worrisome variations in the direction and degree of results between individual studies, this is called heterogeneity. Studies without significant variations are 'homogeneous'.
- When there is either a single result with a wide confidence interval, or a systematic review with troublesome heterogeneity, a minus sign "-" is added to the level. The grade of recommendation then becomes D.
- All or none studies: when all patients died before the Rx became available, but some now survive on it; or when some patients died before the Rx became available, but none now die on it.
Grades of Recommendations
| Grade | Basis |
|---|---|
| A | Consistent level 1 studies |
| B | Consistent level 2 or 3 studies, or extrapolations from level 1 studies |
| C | Level 4 studies, or extrapolations from level 2 or 3 studies |
| D | Level 5 evidence, or troublingly inconsistent or inconclusive studies of any level |
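To make the mapping concrete, here is a minimal sketch (in Python, purely illustrative, not an official CEBM tool) of how a level string translates into a grade of recommendation. It follows the tables above, including the rule that a trailing "-" (wide confidence interval or troublesome heterogeneity) downgrades to D; it simplifies by ignoring the "consistency across studies" and "extrapolation" qualifiers, which require judgement rather than a lookup.

```python
def grade_of_recommendation(level: str) -> str:
    """Map a CEBM evidence level (e.g. '1a', '2b', '3a-', '5') to a grade A-D.

    Illustrative only: the real CEBM scheme also weighs consistency
    across studies and extrapolation, which this sketch ignores.
    """
    level = level.strip().lower()
    if level.endswith("-"):      # wide CI or troublesome heterogeneity
        return "D"
    tier = level[0]              # the numeric tier: '1' to '5'
    if tier == "1":              # SR of RCTs, individual RCT, all-or-none
        return "A"
    if tier in ("2", "3"):       # cohort and case-control designs
        return "B"
    if tier == "4":              # case series, poor-quality studies
        return "C"
    return "D"                   # level 5: expert opinion
```

For example, `grade_of_recommendation("1a")` returns `"A"`, while `grade_of_recommendation("2b-")` returns `"D"` because of the minus-sign downgrade.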
The PDQ from the NCI uses a different form of Levels of Evidence
Many of you use the PDQ (Physician Data Query) - the NCI's online evidence-based treatment summaries for adult and childhood cancers.
The PDQ grades the levels of evidence differently - they base it on:
- Strength of Study Design, and
- Strength of Endpoints
Look at this page that explains their method:
The NCCN levels of consensus
The NCCN guidelines are also very widely used. Since these are a set of consensus guidelines based on an expert panel, they use their own way to grade recommendations. You can find a summary here:
They have 4 grades of recommendations based on quality of evidence and level of consensus:
| Level | Quality of Evidence | Level of consensus |
|---|---|---|
| 1 | High-level evidence | Uniform consensus |
| 2A | Lower-level evidence | Uniform consensus |
| 2B | Lower-level evidence | Non-uniform consensus (no major disagreement) |
| 3 | Any level of evidence | Major disagreement |
The Triad of Evidence Based Practice
It is important to remember, however, that evidence-based practice is not just blindly following published evidence - it has to be combined with two other important elements: clinical expertise and the patient's wishes and expectations.