There are five steps for applying EBM Principles:
1. Construct a well-built clinical question and classify it into one category (therapy, diagnosis, etiology or prognosis);
2. Find evidence in health care literature;
3. Critically appraise for validity and usefulness;
4. Integrate evidence with patient factors;
5. Evaluate the whole process.
JAMA publishes the "Users' Guides to the Medical Literature" series, useful for learning about evidence-based practice. Similarly, the BMJ publishes the "How to Read a Paper" series.
The Evidence-Based Practice process starts with a clinical scenario that needs the best answer. First, break down the clinical case with a PICO analysis:
Patient/Problem: What is the primary problem, disease, or co-existing conditions? On what group do you want information? How would you describe a group of patients similar to the one in question? Sometimes the age or sex of the patient may be relevant and should be included.
Intervention: What medical event do you want to study the effect of? Which main intervention are you considering: prescribing a drug, ordering a test, ordering surgery?
Comparison: Compared to what? Better or worse than no intervention at all, or than another intervention? What is the main alternative to compare with the intervention? Are you trying to decide between two drugs, a drug and a placebo, or two diagnostic tests? Sometimes there is no comparison.
Outcome: What is the effect of the intervention? What do you hope to accomplish, measure, improve, or affect with this intervention? What are you trying to do for the patient: relieve or eliminate the symptoms, reduce side effects, reduce cost?
Then ask a well-built, structured clinical question that is directly relevant to the problem at hand and phrased to facilitate searching for a precise answer.
Then determine the category of the clinical question; there are four main ones:
Therapy: answers questions about which treatment to administer and what the outcome of different treatment options might be. For most therapy questions, the best evidence is a randomized controlled trial, and a double-blind trial is better still.
Diagnosis: answers questions about the degree to which a test is reliable and clinically useful, by comparing the result of a diagnostic test with that of a reference test regarded as the "gold standard".
Etiology: answers questions about the relationship between a disease and a possible cause. Example: finding out whether a diet rich in saturated fats increases the risk of heart disease, and if so, by how much.
Prognosis: answers questions about a patient's future health, life span, and quality of life if a particular treatment option is chosen. Example: finding out how quality of life would change for a patient who undergoes surgery for prostate cancer.
The next step is to determine the best study design needed to answer that particular clinical question.
Levels of evidence: the strength of the evidence depends on the research method used:
1. Strong evidence from at least one systematic review of multiple well-designed randomized controlled trials (RCTs);
2. Strong evidence from at least one properly designed RCT of appropriate size;
3. Evidence from well-designed trials without randomization, or from case-control studies;
4. Evidence from well-designed non-experimental studies from more than one center or research group;
5. Opinions of respected authorities, based on clinical evidence, descriptive studies, or reports of expert committees.
In the evidence pyramid, meta-analyses sit at the top, followed by systematic reviews, then randomized controlled trials, and so on. The base of the pyramid contains the largest number of studies but provides the weakest evidence.
If you cannot find evidence at the highest level, move down to the next one. Remember, there may be no good evidence to support a clinical judgment.
After retrieving the research literature, critically appraise the evidence for its validity and usefulness. This requires knowledge of basic statistics and familiarity with the terminology of EBM (e.g., positive predictive value, likelihood ratio, number needed to treat (NNT)). The appraisal depends on the category of the clinical question at hand, according to the following criteria:
Therapy: When evaluating a therapy question ask yourself:
Was the study randomized and double blind to prevent bias?
Was follow-up greater than 80%?
Were the groups similar at the start of the trial?
Were all enrolled patients accounted for at the conclusion of the study?
Was the study valid? Did the authors answer the question?
Do the results present an unbiased estimate of the treatment effect?
How large is the treatment effect?
Will the results help my patient?
Were the study patients similar to your patient?
Are the benefits worth the harm and cost?
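To make the question "How large is the treatment effect?" concrete, the standard effect measures can be computed from the event rates in the two trial arms. A minimal sketch in Python, using invented event rates rather than data from any real study:

```python
# Hypothetical therapy-appraisal arithmetic: absolute risk reduction (ARR),
# relative risk reduction (RRR), and number needed to treat (NNT).
# Event rates below are invented for illustration only.

def therapy_effect(control_event_rate: float, treatment_event_rate: float):
    """Return (ARR, RRR, NNT) from the adverse-event rates of two trial arms."""
    arr = control_event_rate - treatment_event_rate   # absolute risk reduction
    rrr = arr / control_event_rate                    # relative risk reduction
    nnt = 1 / arr                                     # number needed to treat
    return arr, rrr, nnt

# Example: 20% of control patients and 15% of treated patients have the event.
arr, rrr, nnt = therapy_effect(0.20, 0.15)
print(f"ARR = {arr:.2f}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
# ARR = 0.05, RRR = 25%, NNT = 20
```

An NNT of 20 means roughly twenty patients must be treated to prevent one additional adverse event, which feeds directly into the benefit-versus-harm-and-cost question above.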
Diagnosis: Diagnostic tests are evaluated to determine which are more accurate, faster, less expensive, or less invasive than existing tests. A good diagnostic test gives positive results when the disease is present and negative results when the patient does not have the disease. In contrast to therapy trials, all participants in a study of a new diagnostic test receive the test, and the results are compared with those of the "gold standard" test. To evaluate a diagnosis question ask yourself:
Did the authors do a blind comparison with a gold standard?
Did patients in the study undergo both the diagnostic test and the gold standard?
Did the paper describe the method for doing the test?
Were the patients tested similar to your patient?
Are the results of the test useful?
Did the patient sample include an appropriate spectrum of patients similar to those found in general practice?
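Appraising a diagnostic study rests on a 2×2 table comparing the new test against the gold standard. A minimal sketch of the standard accuracy measures, with invented counts for illustration:

```python
# Hypothetical 2x2 diagnostic table (counts invented for illustration):
#                 disease present   disease absent
# test positive        tp = 90           fp = 30
# test negative        fn = 10           tn = 170

tp, fp, fn, tn = 90, 30, 10, 170

sensitivity = tp / (tp + fn)               # P(test positive | disease present)
specificity = tn / (tn + fp)               # P(test negative | disease absent)
ppv = tp / (tp + fp)                       # positive predictive value
lr_plus = sensitivity / (1 - specificity)  # positive likelihood ratio

print(f"sensitivity = {sensitivity:.2f}")  # 0.90
print(f"specificity = {specificity:.2f}")  # 0.85
print(f"PPV         = {ppv:.2f}")          # 0.75
print(f"LR+         = {lr_plus:.1f}")      # 6.0
```

Note that sensitivity and specificity describe the test itself, while the positive predictive value also depends on how common the disease is in the sample, which is why the spectrum-of-patients question above matters.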
Etiology: To evaluate an etiology question ask yourself:
Were the exposure and the outcome measured in the same way in both groups (exposed and non-exposed patients)?
Was the comparison group similar to the outcome group in all aspects except for the variable in question?
Was follow up sufficiently long and complete?
Prognosis: typically relies on cohort studies that follow how the disease progresses. To evaluate a prognosis question ask yourself:
Was the patient sample selected to reflect a well-defined point in the course of disease?
Was the follow-up adequate and complete (>80%)?
Were objective and unbiased outcome criteria used?
For more information, check the CASP Appraisal Checklists.
Randomized Controlled Trials (RCT): (answers therapy, prevention questions)
Randomization avoids selection bias. Participants are divided into two groups: the treatment group receives the treatment under investigation, the control group receives a placebo, and both groups are followed up.
Cohort Study: (answers prognosis, etiology, prevention questions)
Defined populations are followed over time in an attempt to identify distinguishing subgroup characteristics. Researchers identify and compare two groups over a period of time; one group has a particular condition or receives a particular treatment, and the other does not. At the end of the specified time, the researchers compare the two groups to see how they fared.
Case Control Study: (answers prognosis, etiology, prevention questions)
Studies that identify patients who already have the outcome of interest and control patients without that outcome, and look back to see if they had exposure of interest or not.
Case Series / Case Reports: (answers prognosis, etiology, prevention questions)
Consist either of collections of reports on the treatment of individual patients, or of reports on a single patient.
How to Apply the results of a study to individual patients: once you determine that the study methodology is valid, examine whether the results apply to your patient, using your clinical expertise. Depending on the category, the following questions have to be answered.
Is my patient so different from those in the study group that the results cannot be applied?
According to the study results how much could my patient benefit from the treatment?
Are the treatment and its consequences consistent with my patient's values and beliefs?
Is the test affordable, accurate and available locally?
Can I estimate the pretest probability of the disease in question?
Will the posttest probability affect my management?
Can the study results be extrapolated to my patient?
What is my patient's risk for adverse effects?
Can my patient's preferences & expectations be met by an alternative therapy?
Is my patient similar to the patients in the study group?
Will the evidence alter my choice of treatment?
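The pretest/posttest questions above can be made concrete: a test's likelihood ratio converts a pretest probability into a posttest probability by passing through odds. A minimal sketch with invented numbers:

```python
# Converting pretest probability to posttest probability via the likelihood
# ratio. The probabilities and LR below are invented for illustration.

def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Apply a likelihood ratio to a pretest probability using odds."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Example: pretest probability 25%, positive test with LR+ = 6.
p = posttest_probability(0.25, 6)
print(f"posttest probability = {p:.2f}")  # 0.67
```

If the jump from 25% to about 67% would change your management (e.g., start treatment or order a confirmatory test), the test is worth doing; if not, the result will not affect your decision.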