Evidence-based practice has been gaining popularity since its introduction in 1992. While it may seem obvious that occupational practices should base their knowledge on scientific evidence, the idea has been controversial within the scientific community (Trinder & Reynolds, 2006). Since then, it has spread to various disciplines such as education, management, allied health, and law.
A significant part of evidence-based practice is the levels of evidence, also called the hierarchy of evidence. Generally, it applies to any type of research and evaluates the strength of scientific results. While various disciplines have developed their own levels of evidence, the most developed come from medicine and allied health (Hugel, 2013).
This article briefly introduces evidence-based practice and how it is applied to specific disciplines. Furthermore, it shows various examples of levels of evidence in research and how they rank or evaluate scientific research. At the end of the article, the reader should have a clear idea of what evidence-based practice is, how levels of evidence fit in, and the relevant critiques associated with it.
Evidence-based practice (EBP) is the idea of occupational disciplines being based on scientific evidence (Trinder & Reynolds, 2006). It encourages and, in some cases, forces scientists and other professionals to pay more attention to evidence when making crucial decisions.
EBP aims to replace outdated or unsound practices with more effective, evidence-based ones. It shifts decision-making away from intuition, tradition, and unsystematic experience toward well-established and well-researched scientific studies.
Evidence-based practice has been gaining popularity since its introduction back in 1992. It has spread to various fields such as management, law, medicine, education, public policy, and more. There is also an effort to apply EBP in scientific research itself, which is called metascience.
Some examples of the application of EBP in various disciplines are as follows:
Evidence-based research, also known as metascience, is the utilization of scientific methodology to study science, which aims to increase the quality and efficiency of the research process (Ioannidis, 2020). As metascience concerns itself with all fields of research, it is also referred to as “a bird’s eye view of science.”
Metascience is made up of five major areas of study:
- Methods: how research is designed and performed
- Reporting: how research results are communicated
- Reproducibility: how research is verified and replicated
- Evaluation: how research is assessed and peer-reviewed
- Incentives: how research is rewarded
Medicine and allied health have adopted evidence-based practice, aptly named evidence-based medicine (EBM), an approach to practice that optimizes decision-making by utilizing evidence from well-conducted and well-designed research. It balances three components, namely research-based evidence, patient values and preferences, and clinical expertise (Haughom, 2015). With this approach, medical practitioners can improve the quality of healthcare, improve patient satisfaction, and potentially cut down the cost of treatments.
The field already has some degree of empirical support in itself. However, EBM goes further by classifying levels of research. It ranks evidence by epistemological strength: the strongest types (systematic reviews, meta-analyses, and similar studies) produce strong recommendations, while weaker types (such as case-control studies) yield only weak recommendations.
Evidence-based medicine is applied to various parts of the discipline, from education to the administration of health institutions (Eddy, 1990). It advocates that decisions and policies should be based on evidence as much as possible rather than on the beliefs of experts, practitioners, or administrators. For example, it ensures that a doctor's opinion, which may be limited by biases or knowledge gaps, is supplemented by information from the current literature in order to provide the best recommendation.
Evidence-based education (EBE) is the utilization of well-designed and well-researched studies to identify which education methods work best and produce the best results. It combines evidence-based learning and evidence-based teaching. While the reception of EBE is generally positive, some critics point out that much educational research is of poor quality and that the approach limits the scope of relevant research (Biesta, 2007). Other studies are often difficult or impossible to replicate.
Some examples of evidence-based learning techniques are as follows:
The entire practice revolves around evidence, its various types, and its validity. Evidence is the result or product of scientific research that enables decision-making. It can be divided into two main categories:
- Filtered information: evidence that has already been appraised and synthesized, such as systematic reviews
- Unfiltered information: primary, unappraised evidence, such as individual studies
Evidence-based practice involves levels of evidence that help practitioners determine the "strength" or value of the evidence. The hierarchy of evidence depends on the discipline itself and how each field develops its standardized process of evidence evaluation. A good example is the evidence pyramid used by medical practitioners and researchers.
Medical experts and researchers rank evidence according to quality, resulting in the evidence pyramid (Queensland University of Technology, 2019).
The top three levels of evidence in medical research are composed of filtered information. These include the following, starting from the source with the highest quality:
- Systematic reviews and meta-analyses
- Critically appraised topics (evidence syntheses)
- Critically appraised individual articles (article synopses)
The next three lower levels of evidence consist of unfiltered information, again starting from the highest quality:
- Randomized controlled trials
- Cohort studies
- Case-controlled studies, case series, and case reports
Background information, or expert opinion, is not considered evidence. However, it is the foundation level of the pyramid, supporting the levels above it. It is also used to provide context when interpreting the evidence.
The hierarchy above is just one example of the application of the hierarchy of evidence. Each discipline and its corresponding sub-fields can have their own process of evaluating evidence. For example, experts have proposed more than 80 different hierarchies for assessing and evaluating medical evidence (Siegfried, 2017).
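As a toy illustration of how such a pyramid orders evidence, the hierarchy can be modeled as a simple ranking. This is only a sketch: the numeric levels and the exact study-type labels are illustrative assumptions, not part of any standard.

```python
# Toy model of an evidence pyramid: lower number = higher-quality evidence.
# The specific levels and labels here are illustrative assumptions.
EVIDENCE_PYRAMID = {
    "systematic review": 1,            # filtered information (top of pyramid)
    "critically appraised topic": 2,
    "critically appraised article": 3,
    "randomized controlled trial": 4,  # unfiltered information
    "cohort study": 5,
    "case report": 6,
    "expert opinion": 7,               # background information, not evidence
}

def rank_sources(sources):
    """Sort a mixed list of study types from strongest to weakest evidence."""
    return sorted(sources, key=lambda s: EVIDENCE_PYRAMID[s])

print(rank_sources(["case report", "systematic review", "cohort study"]))
```

A practitioner triaging a stack of papers could use such a ranking to read the strongest available evidence first.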
As mentioned above, levels of evidence in research are a method of ranking the relative strength or validity of results from scientific studies. Over the years, there have been multiple proposals for assessing levels of evidence in research. These include:
The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) process was proposed in 2001 by guideline developers, methodologists, clinicians, public health scientists, and other interested members. It measures the strength of recommendation (or confidence in estimated effect) and certainty in the evidence of a certain scientific study or research.
It is endorsed by various international health organizations, such as the World Health Organization, the Canadian Task Force for Preventive Health Care, and the U.K. National Institute for Health and Care Excellence (NICE), among others (McMaster University & Evidence Prime Inc.). While it has its roots in medicine and allied health disciplines, it can also be applied to research dealing with other topics.
GRADE assigns one of four certainty ratings to a body of evidence:
- High: very confident that the true effect is close to the estimated effect
- Moderate: moderately confident in the estimated effect
- Low: limited confidence in the estimated effect
- Very low: little confidence in the estimated effect
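The mechanics of a GRADE-style rating, where evidence starts at a certainty level by study design and is then moved down or up the scale, can be sketched in simplified form. This is a hedged illustration only: real GRADE assessments involve structured judgments (risk of bias, inconsistency, imprecision, and so on), not a simple counter, and the function name and parameters here are assumptions.

```python
# Simplified sketch of a GRADE-style certainty rating (illustrative only).
RATINGS = ["very low", "low", "moderate", "high"]

def grade_certainty(study_design, downgrades=0, upgrades=0):
    """Start randomized trials at 'high' and observational studies at 'low',
    then shift along the scale for limitations (downgrades) or strengths
    (upgrades), clamping to the ends of the scale."""
    start = 3 if study_design == "rct" else 1  # index into RATINGS
    level = max(0, min(3, start - downgrades + upgrades))
    return RATINGS[level]

# A randomized trial with one serious limitation, e.g. risk of bias:
print(grade_certainty("rct", downgrades=1))
```

In real GRADE guidance the starting point and each adjustment are explicit, documented judgments; the sketch only conveys the "start, then downgrade or upgrade" structure.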
G.H. Guyatt and D.L. Sackett proposed the first version of the hierarchy of primary studies back in 1995 (Guyatt & Sackett, 1995). T. Greenhalgh further modified the ranking, resulting in the following (from highest to lowest in value):
1. Systematic reviews and meta-analyses
2. Randomized controlled trials with definitive results
3. Randomized controlled trials with non-definitive results
4. Cohort studies
5. Case-control studies
6. Cross-sectional surveys
7. Case reports
B. Saunders and his colleagues also proposed a protocol that assigns research results and reports to six categories. The assignment is based on the theoretical background, research design, general acceptance, and evidence of possible harm (Saunders, Berliner, & Hanson, 2003). Much like other evidence hierarchies, it is rooted in allied health.
The research outcome should also have a descriptive publication, such as a manual or a similar document.
While many disciplines have adopted and accepted evidence-based practice and evidence hierarchies, criticism has grown in recent years. Much of it highlights the shortcomings of EBP in medicine and allied health, where it is most practiced. For instance, a 2016 survey found that 70% of researchers were unable to reproduce the experiments of their peers (Baker, 2016). The same survey revealed that 52% of respondents agree that there is indeed a significant reproducibility crisis in research.
Source: Nature (2016)
Many critical works have been published in the literature in the past decade or so. However, upon a survey of these works (Solomon, 2011), they usually fall into one of the following:
Furthermore, many practitioners and administrators point out that EBM has limitations when it comes to informing the care of individual patients. The hierarchy of evidence also does not fully consider research on the efficacy and safety of medical interventions. In addition, studies designed under EBM guidelines often fail to define key terms, to consider the validity of non-randomized controlled trials, and to acknowledge the limitations of their study designs (Gugiu et al., 2012).
Others have specifically criticized the hierarchy of levels of evidence, such as Borgerson, who wrote that these rankings are not absolute and that the ordering is not necessarily epistemically justified. Borgerson also noted that researchers should take a closer look at social mechanisms for managing pervasive biases (Borgerson, 2009).
A. La Caze added that basic science, although found on the lower tiers of EBM, actually plays a significant role in specifying and contextualizing experiments. These lower ranks of evidence also help interpret and analyze the resulting data (La Caze, 2010).
While many of the protocols in evidence-based practice are discipline-specific, EBP is a great introduction to how processes and protocols are followed in research. In its most basic form, evidence-based practice shows the value of well-supported evidence, especially in scientific studies.
Examining the evidence, whether in EBP or not, is one of the cornerstones of science. Having a standardized method of evaluating evidence, such as a hierarchy, streamlines the entire research workflow. At the very least, it provides researchers with a tool to determine the value of evidence, its sources, and its relevance to the study.
It may seem obvious that evidence and proper sources should be considered in every scientific study, EBP and levels of evidence are controversial as well. Experts and practitioners have published critiques that examine the applicability and validity of the protocols in their respective disciplines. However, just like any scientific process, it is undergoing improvements through continuous evaluation.