On Beliefs vs. Evidence

Barbell Medicine
August 20, 2021
Reading Time: 24 minutes

    We all have beliefs about the world. Some we may have spent significant time establishing – for example, by pursuing a degree in a particular field of study – while others we passively adopted through years of sociocultural indoctrination. Sometimes we hold beliefs without being entirely sure where they came from or what evidence we used to establish them. Enter Westworld: “Have you ever questioned the nature of your reality?”

    Key Points

    1. Although we all hold various beliefs about the world, we should reflect on the evidence we use to substantiate those beliefs.
    2. It’s important to recognize a hierarchy of evidence while also weighing the quality of evidence being utilized for decision making.
    3. Humans are prone to cognitive biases that lead to fallacious reasoning. Although we cannot always prevent such flaws in our thinking, being more aware of these issues may mitigate their prevalence. Additionally, science as a method remains one of our best tools for moving closer to “truth” while minimizing flawed reasoning.
    4. Evidence Based Practice (EBP) is a movement that began in the 90s in an effort to overcome experience based medicine, appeals to tradition, and relying solely on perceived bio-plausibility. Although EBP is not without flaws, the premise remains one of our best avenues for helping patients based on current available evidence for informed, collaborative decision making.

    Introduction

    About 17 years ago, I was convinced there was something “wrong” with my left shoulder necessitating surgical treatment. Around the same time, I had bought into the Muscular Development style of training, and read about bodybuilder splits, meal timing, protein consumption windows of opportunity, exercises framed as good vs. bad / injurious vs. non-injurious, how to be like the pros, etc.

    About 11 years ago I was looking to better understand developmental coordination disorders in children using fMRI/MRI to peer into the brain to identify problems that may be fixed. You could also catch me ranting at the time about how an exercise isn’t “bad”, but it’s merely how YOU perform the exercise that is “bad”, promoting perfect ways to breathe, squat, deadlift – or move, in general. Other prevalent beliefs at the time involved seeking to create symmetry to mitigate pain (Janda), or how we are under-trained in the transverse plane. 

    In 2012, I would have tried to convince you that CrossFit was THE way to exercise, and “Paleo” was THE way to eat for your goals — although oddly in 2008 I would have contradicted myself regarding CrossFit while teaching an undergrad class for strength and conditioning. I would have also likely convinced you that you needed joint manipulations, scraping, kinesiology tape, foam rolling, stretching, etc. to “get yourself out of pain”, as these are also interventions I sought for myself to deal with the experience of pain or to “improve performance”.

    In 2015, I would have been convinced that pain was an “output of the brain”. Today, I tend to follow two quotes for these discussions, the first from Richard Feynman, American physicist: “The first principle is that you must not fool yourself and you are the easiest person to fool.” The second is from Carl Sagan, an astronomer amongst many other things: “Extraordinary claims require extraordinary evidence.” This trip down memory lane isn’t isolated to just my beliefs. I am certain we can all reflect on various beliefs we’ve held in life that we now cringe thinking about. I’m also willing to bet that in ten years, if I re-read this piece or others I’ve written where I thought I knew more or was more enlightened … I’ll cringe then too.

    More to the point of this month’s article: we all hold beliefs about the world that influence our daily lives and decision making, as well as the lives of those around us. The question becomes: what evidence are we using to substantiate these beliefs?

    Epistemology is the study of knowledge and justified belief. In essence, we are seeking to answer the question “How do we know what we think we know?” We previously recorded a podcast on epistemic responsibility that can be found HERE.

    I’m also a big fan of seeing epistemic conversations in action via Street Epistemology with Anthony Magnabosco. In his YouTube series, Anthony speaks with folks out and about at universities/colleges or on local walking trails and simply asks them to state a belief or claim they hold to be true; then, together, they explore the evidence. Anthony spends time asking open-ended questions to try to identify the evidence a person uses to substantiate their belief. He has become, in my opinion, a Jedi master at exploring beliefs in a non-confrontational manner, without becoming overly emotional. Although he is open to discussing anything, often these discussions involve political or religious topics, as these tend to be areas with strongly held beliefs.

    Unsurprisingly, many people have adopted beliefs from tradition (sociocultural underpinnings), aren’t sure of the origins of their beliefs, or have blindly accepted the source or evidence for the belief as “Truth”. This can be an uncomfortable process for someone not used to questioning their beliefs and then having to provide answers to substantiate why they believe what they believe. I could turn this into a rant about schools needing to teach us how to think, but I’ll save that discussion for another time. This does bring up an important question the reader may be asking: what is “Truth”?

    I don’t presume to have an answer to this question, and I don’t pretend to think we will ever know “Truth”. However, I do believe we have various methods to try and move ourselves closer to “Truth” to make better, more informed decisions in life – one of which is science. I like Paul Offit’s definition of science:

    “Stripped to its essence, science is simply a method to understand the natural world–it’s an antidote to superstition.”

    Before we discuss methodology for moving us closer to “Truth”, we need to have a discussion about evidence. In Anthony’s videos, he often comes to realize that a person’s evidence is word of mouth and hasn’t really been examined for validity. This leads us to the question – what is evidence? We will frame this discussion through the lens of healthcare and clinical practice.

    Evidence

    In the 90s and early 2000s a group of individuals formed what was known as the Evidence Based Medicine Working Group, designed to champion a new paradigm in healthcare based on a “hierarchy of evidence” for collaborative decision making with patients. The premise was to move beyond solely utilizing experience, tradition, or perceived bio-plausibility to rationalize healthcare practices and instead incorporate research evidence to assist decision making when possible.

    One of the first papers, released in the Journal of the American Medical Association in 1992, was titled Evidence-based medicine: A new approach to teaching the practice of medicine. In this article the working group outlines what EBM is and compares the approach to the old healthcare model. They also discuss ways to integrate this new approach into healthcare, specifically via a residency program. From there, a series of articles was released, dubbed The Users’ Guides to the Medical Literature (the first of the series can be found HERE). In 2000, the working group proposed a broad definition for evidence:

    “Any empirical observation about the apparent relationship between events constitutes potential evidence.” Guyatt 2000

    “Potential” is the key adjective in this definition. Our clinical observations certainly can be utilized as evidence for decision making; however, relying solely on our observations has risks and flaws that we will discuss later. This particular definition invokes the idea of a hierarchy of evidence. We are all likely familiar with Figure 1 below as the usual evidence-based pyramid.

    Figure 1: Evidence Based Practice Hierarchy of Evidence from 1991 – 2004.

    A common counter to evidence based practice is that often there’s no evidence for or against “X” (often labeled an argument from ignorance or absence of evidence). However, as the evidence based medicine working group points out:

    “The hierarchy makes it clear that any statement to the effect that there is no evidence addressing the effect of a particular treatment is a non-sequitur [logically doesn’t follow]. The evidence may be extremely weak – the unsystematic observation of a single clinician, or generalization from only indirectly related physiologic studies – but there is always evidence.” Guyatt 2000

    However, the idea that there is always evidence has risks of its own. It can lead to believing “Do whatever you want and anything works,” whereas in research, investigators are often examining the continuum of efficacy to effectiveness. Briefly, efficacy is defined as “…performance of an intervention under ideal and controlled circumstances” and effectiveness equates to “…performance under ‘real-world’ conditions.” Singal 2014 However, as Singal et al discuss, we are likely not able to conduct a “pure” efficacy or “pure” effectiveness trial. With that said, hopefully well conducted studies can control for as many confounders and secondary effects (e.g., placebo/nocebo contextual or meaning effects) as possible to distinguish outcomes/effects attributable to the intervention itself.

    Much of this discussion will be dependent on context and the immediate threat to an individual or population, as we are seeing with COVID-19. I’m going to refrain from speaking too much on the current pandemic, as this would detract from our current discussion, but as global society searches for ways to either treat those with the infection or minimize risk of disease, we’ve been inundated with mixed information and claims about treatments to start ahead of clinical studies (e.g., hydroxychloroquine). The usual trope is “What’s the harm?” while overlooking adequate weighting of risks vs. benefits.

    Why do we need evidence? 

    There are two major reasons to focus on the need for evidence:

    1. Move us closer to “Truth” – in our context, what we should be doing as clinicians, coaches, and individuals regarding a particular recommendation or intervention.
    2. Minimize the anchoring effect – The anchoring effect occurs when we hinge ourselves to a single construct, idea, or piece of evidence and make future decisions based on the original information, even if new or contradictory information arises along the way. Said differently, all future decision making is based on the original anchor as the context rather than exploring other pieces of evidence to reframe and update our beliefs. One way of thinking about this would be related to my doctorate education which was heavily anchored to the construct of vertebral subluxations. As a clinician, this would be a very easy anchor to utilize as a lens for viewing patient complaints – searching for the vertebral subluxation to correct and thus achieve an end goal (e.g., pain relief or, as the old adage goes, “lack of dis-ease”).

    Why research evidence?

    Above we said that almost any observation may serve as potential evidence regarding the relationship between two events. There are underlying philosophical concepts here, but I will try to keep this discussion pragmatic. 

    We use our observations and information gained from our senses on a daily basis. An easy example would be waking in the morning to the sounds of water droplets hitting the roof of your house and thinking – it must be raining outside. You then walk to the window by your bed, peer out to find a grey sky, water droplets falling, and pooling water on the ground – your observations further confirm your suspicion that it is indeed raining outside. You then use the sensory information you gained (i.e., empirical data) to make future decisions. Perhaps you forgo walking or cycling to work today, and instead drive. Maybe you wear a rain jacket and bring an umbrella. 

    Where this can get interesting is if you are suspicious of rain, don’t have direct observable sensory information it is currently raining, but you check your weather app on your iPhone and find a 60% chance of rain. How does this affect your decision making? When weighting risks versus benefits of your decision making for this situation, the risk is restricted (for the most part) to the individual level – for example, if you decide to forgo the rain jacket or umbrella and subsequently get wet (obviously others may suffer if you then become upset with this and are bad tempered the remainder of the day). However, what happens if we transpose this to the clinical setting where decisions affect more than just you? 
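    As a loose illustration of how probabilistic information like that 60% forecast can be weighed against consequences, here is a minimal sketch in Python. The probability comes from the example above; the inconvenience “costs” are made-up numbers purely to show the arithmetic, not a claim about how anyone should value getting wet.

    ```python
    # A minimal sketch of weighing a probabilistic forecast against costs.
    # The 60% rain probability comes from the example above; the "cost" numbers
    # are hypothetical and exist only to illustrate the arithmetic.

    P_RAIN = 0.60  # forecast probability of rain

    COST_CARRY_UMBRELLA = 1.0  # mild hassle of carrying an umbrella all day
    COST_GET_SOAKED = 10.0     # getting wet (and being bad-tempered the rest of the day)

    def expected_cost(carry_umbrella: bool) -> float:
        """Expected inconvenience of a choice, averaged over rain / no rain."""
        if carry_umbrella:
            return COST_CARRY_UMBRELLA       # paid whether or not it rains
        return P_RAIN * COST_GET_SOAKED      # only paid if it actually rains

    for choice in (True, False):
        print(f"carry umbrella = {choice}: expected cost = {expected_cost(choice):.1f}")

    # With these numbers, 0.6 * 10 = 6.0 > 1.0, so carrying the umbrella "wins";
    # change the costs or the probability and the decision can flip.
    ```

    The point isn’t the arithmetic itself, but that the same structure – probability of an outcome multiplied by its consequences – is what we are implicitly doing whenever we weigh risks against benefits, clinically or otherwise.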

    A patient may present with acute onset low back pain; what do you do next? Obtain radiologic imaging? What are the risks versus benefits of imaging? What’s the probability you will find something on imaging, and how does that “something” fit into the context of the patient’s case? Suddenly, our decision making affects more than ourselves (yes, ideally this is shared decision making). Layer in our framework for clinical practice (e.g., biomedical vs. biopsychosocial vs. some other construct), and perhaps we decide we must peer beneath the surface with imaging to find spinal alterations we’ve dubbed spinal “degenerative changes”. What’s the harm in such narratives, and is this information more harmful or beneficial for the individual’s management and prognosis?

    Suppose we decide that the individual simply has “inflammation” and needs an anti-inflammatory, or perhaps we believe a vertebra is misaligned and needs to be “put back in place”. What supportive or contradictory evidence do we have for these interventions, and has the clinician reviewed this evidence? Herein lies the purpose of Evidence Based Practice. 

    Unfortunately, as highly evolved as we are as humans, we come with our own set of cognitive biases that influence our beliefs and daily decision making. It is extremely unlikely we will rid ourselves of these biases, BUT if we are aware of them perhaps we can minimize their effect on our lives. A major rationale for research evidence is that it helps stifle these individual cognitive biases with systematic, controlled, and hopefully replicated research studies. I freely admit that there are systematic issues throughout the conduct of science via research. The major flaw is that humans are the ones conducting science, and as we will find, our observations and thinking aren’t always the most reliable.

    However, to the best of my knowledge, this remains our primary method for learning about and understanding the world around us — and in the clinical context, helping make more informed decisions in collaboration with patients. There are a few cognitive biases and logical fallacies worth mentioning here, in the hope that discussing them will make us more aware of them. As a caveat, these aren’t meant to be used against others as finger-pointing in discussions to say “Ha, I got you – that is an ‘X’ fallacy – CHECKMATE”, as is often seen on the internet.

    Logical Fallacies

    This next section is a conglomeration of various cognitive biases and logical fallacies. There are A LOT of cognitive biases (188 recorded – see HERE) and various fallacies and thoughts on fallacies (which can be read about HERE). We will cover just a few pertinent items related to clinical practice.

    Confirmation Bias – seeking out information that confirms your prior held beliefs without consideration of the validity of the information. I typically call this the “Google effect”. If, as a chiropractor, I believe joint manipulations are effective due to my schooling and want to search for information about them, I can find many websites confirming my prior held belief that they are effective and should be used as an intervention. Further complicating the issue is that we can find published research claiming the effectiveness of joint manipulations – we will discuss the quality of evidence later.

    Cognitive Dissonance & Motivated Reasoning – think of cognitive dissonance as the discomfort that comes from simultaneously holding two beliefs that are in opposition to one another. Back to the Google example: what happens when we find a Google link that appears to contradict our prior held belief? There’s a good chance this leads to motivated reasoning to sustain inconsistent beliefs about a particular topic, or rationalization of the prior belief in spite of the new evidence. For example: many of us are aware of the benefits of physical activity, and yet we rationalize why we do not engage in regular physical activity (lack of time, overworked, no opportunities, etc.). Overall, motivated reasoning can be thought of as a coping mechanism for cognitive dissonance – or, as the opening image to this piece alludes, “I want to believe.”

    Post Hoc Ergo Propter Hoc Reasoning – also known as a causation fallacy; it translates to “after this, therefore because of this”. This is a fallacy of temporality, meaning that when one event (B) occurs after another event (A), we erroneously assign causation between the two (“A caused B”). Some context to this fallacy will help with understanding.

    I originally learned about this fallacy in an undergraduate criminology class. The example presented was seeking to better understand why murder rates spike during the summer. While searching for related variables, the investigators noticed that ice cream sales also spike during the summer. Ipso facto, ice cream causes murders. This may be silly, but it drives home the point about this type of fallacious thinking when it comes to establishing causality. Some other examples can be found HERE.
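    To make the ice cream example concrete, here is a small simulation (all numbers invented) showing how a lurking third variable – summer temperature – can generate a strong correlation between two quantities that have no causal link, and how the association largely disappears once the confounder is accounted for.

    ```python
    # Toy simulation of a spurious correlation driven by a confounder (temperature).
    # Every number here is made up for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n_months = 120

    temperature = rng.normal(loc=20, scale=8, size=n_months)              # confounder
    ice_cream_sales = 50 + 3.0 * temperature + rng.normal(0, 10, n_months)
    violent_incidents = 10 + 0.5 * temperature + rng.normal(0, 4, n_months)

    # The raw correlation looks impressive even though sales never appear in the
    # data-generating process for incidents.
    r = np.corrcoef(ice_cream_sales, violent_incidents)[0, 1]
    print(f"correlation(sales, incidents) = {r:.2f}")

    def residuals(y, x):
        """Remove the linear effect of x from y (a crude way to 'control' for x)."""
        slope, intercept = np.polyfit(x, y, 1)
        return y - (slope * x + intercept)

    # Controlling for temperature, the association largely vanishes.
    r_partial = np.corrcoef(residuals(ice_cream_sales, temperature),
                            residuals(violent_incidents, temperature))[0, 1]
    print(f"partial correlation controlling for temperature = {r_partial:.2f}")
    ```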

    For the clinical context, we can think about the spurious correlations we draw from interventions. Often we tend to think we have a greater effect on the patient and situation than is actually the case. Enter the primary argument for utilizing quality studies designed appropriately to answer the clinical question at hand. As previously mentioned, hopefully these studies can reasonably control for potential confounders or secondary factors influencing intervention effectiveness (examples being placebo and nocebo effects related to expectations, conditioning, and the Hawthorne effect – to name a few; see Hartman 2009 & Testa 2016).

    Research may be rationalized based on clinical (aka collective) equipoise, “…defined as the genuine uncertainty within the scientific and medical community as to which of two interventions is clinically superior.” Master 2014 The primary goal is the advancement of scientific knowledge about the efficacy of a particular intervention so it can be generalized into clinical practice with individual patients. Although the concept of clinical equipoise isn’t without complaints, especially related to RCTs, there also exists another type known as individual equipoise. Miller et al discuss individual equipoise in a research context as “….physician-investigators must be indifferent to the therapeutic value of the experimental and control treatments evaluated in the trial.” Miller 2003

    In research, investigators need to ensure personal equipoise among experimenters consulting with or delivering interventions to participants in order to minimize biased outcomes. However, in clinical practice this is unlikely, as Cook and Sheets outline with the topic of manual therapy: “Given that ‘clinician experience’ is a form of ‘evidence’, preconceived personal preferences associated with an intervention, whether wrong or right, are likely always present. Consequently, it is arguable whether an environment can exist in a complete state of equipoise, especially when manual therapy is the treatment intervention.” Cook 2011 Controlling for clinician preferences is likely quite difficult in a real-world clinical setting. Our bias as clinicians will show through and influence decision making as well as intervention effects. This is more reason to follow what the totality of research evidence shows about a particular intervention, in order to minimize the effect of our underlying biases. With that said, there is a case for enhancing placebo-like contextual effects in an ethical manner (see Testa above, as well as Peerdeman 2016 & Peerdeman 2017).

    Appeal to antiquity – an assumption that the age of an idea implies greater validity or “truth”. We see this claim often with interventions such as cupping or acupuncture/dry needling and other Traditional Chinese Medicine interventions.

    Appeal to novelty – in opposition to the appeal to antiquity above, the appeal to novelty assumes the newest, “latest and greatest” idea or intervention must be better than older approaches, and therefore has greater efficacy. Often, this is where we hear clinicians claiming that their practice is ahead of the research base, when in reality the research is more often debunking clinicians’ claims while taking about 17 years to be adopted in practice. Recent interventions that come to mind include platelet-rich plasma injections, stem cell injections, or — more relatable to my own experience — fMRI being “the answer” to peering beneath the surface of humans.

    Argumentum Ad Populum & Appeal to Authority – these two tend to go hand in hand in today’s world of social media. Appeal to authority implies that a perceived position of authority necessarily espouses “Truth” – for example, someone with the title of “Doctor”. Another example might be someone with an arbitrary number of followers, let’s say 100,000 on Instagram, or perhaps a celebrity with a lot of social “clout”. These latter examples represent appeals to popularity – because an idea or person has become popular, that popularity is taken to imply efficacy or “Truth”. We also tend to see the phenomenon of promiscuous expertise occur alongside popularity, where popularity or authority emboldens people to venture outside their areas of expertise.

    Red Herring Fallacy – a red herring is something misleading and/or a distraction from the relevant question that often leads to false conclusions. In the world of healthcare, this can be akin to incidentalomas, a term that originated from incidentally found, asymptomatic tumors. The term can be broadly applied to many findings from radiologic imaging or other investigative tools that are identifiable in asymptomatic populations, have low correlation to symptom development, and self-resolve or do not progress to cause any detrimental effects. This means management and prognosis ultimately do not need to change. Unfortunately, finding an incidentaloma on imaging can lead to what has been termed “VOMIT”: Victims of Modern Imaging Technology. These imaging findings cause unnecessary worry for clinician and patient, leading to increased downstream testing (healthcare utilization) and thus increased health costs – with real potential for detrimental effects on the patient. Hayward 2003

    Much of this discussion comes down to attempting to answer the clinical question: how can I best help this person? Answering this question is directed by the lens through which we view the patient (e.g., biomedical vs. biopsychosocial). Often we are led astray by red herrings because we’ve not examined what research evidence shows about the relevance of a particular finding in the context of the patient’s case. We can frame this discussion in the context of low back pain; examples include spinal “degenerative disc disease” (i.e., normative age-related changes/adaptations), lumbar disc herniations, Modic changes, lumbar spinal stenosis, spondylolysis, spondylolisthesis, Schmorl nodes, scoliosis — the list goes on — as we approach a complex phenomenon such as pain from an overly biomedical, reductionist view. Yes, some or all of these findings may be present, but how much they actually matter for the person’s current situation and future prognosis is a different discussion. This further validates the need for quality research evidence. In this context, to identify a supposed problem we need long-term prospective observational studies to establish risk factors and correlation to symptoms before labeling something a problem needing intervention – leading to our last discussion point in this section.

    False Dichotomy Fallacy (false dilemma) – for our context, this is proposing two categories as the only options (either/or). For our low back pain discussion, this would mean framing imaging findings as problems or not problems, injuries vs. not injuries, abnormalities vs. normalities. Instead of examining findings this way, we would likely be better served by framing the discussion along a spectrum. However, the inherent issue here is the individual human. Where one individual falls on the spectrum of findings, and when those findings cross into necessitating alternative management (e.g., “conservative” vs. surgical), is a bit muddier in many non-traumatic situations. Instead we are relegated to addressing the individual by applying current evidence to their situation, generalizing external evidentiary findings as best as possible. This likely explains why we can have the same findings in an asymptomatic population versus a symptomatic one (I realize the irony of creating dichotomous categories of asymptomatic vs. symptomatic; this is actually an ongoing research question in contexts like low back pain – identifying when someone is truly “symptom-free” or is having an initial case of acute low back pain). Ultimately, this leaves us trying to identify when something necessitates a particular intervention, or perhaps we can think of this as a continuum of interventions (education → activity modification → medications → surgeries; these don’t have to be mutually exclusive and can occur simultaneously).

     Let’s discuss this through the lens of vertebral subluxations in the chiropractic world. In order to assert that vertebral subluxations (VS) exist and necessitate a particular intervention, we first need to use science as a method to assess such a belief. 

    • Step 1 – We need evidence demonstrating VS exist, with a prevalence rate identified in both asymptomatic and symptomatic populations. Base rates of a finding help us identify its correlation to symptoms. Saying everyone has a VS nullifies it as a problem; instead it becomes a normal part of being human.
    • Step 2 – Once evidence has been identified demonstrating VS exist, we need evidence demonstrating there’s a risk to having them – that they confer non-negligible harm to those in whom they are identified and, if not addressed, may affect prognosis, quality of life, function, etc.
    • Step 3 – We need valid assessment tools for identifying VS (reliable, accurate, and replicated from clinician to clinician).
    • Step 4 – Upon identifying the existence of VS (theoretically) and demonstrating that they confer a risk to the individuals presenting with the finding (theoretically), we now need to identify interventions that address VS in a meaningful manner (prognosis, quality of life, function, life span, etc.). Here we need to have a conversation about efficacy and about weighing risks vs. benefits of the proposed intervention compared to not intervening. Meaning: have the interventions been demonstrated to have meaningful effects beyond placebo/contextual meaning effects while addressing VS, without bringing more risk than leaving VS alone, while also considering patient context and preferences?
    • Step 5 – We then must identify clinicians adequately trained and prepared to help individuals with VS in the healthcare system.

    If the reader has heard of Bayesian inference or decision-making, this process may sound familiar. Since the push for evidence based practice, there’s also been advocacy for the uptake of Bayesian statistics, which can be read more about HERE.
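    As a rough sketch of the Bayesian logic behind Steps 1–3 – base rates, risk, and valid assessment tools – here is a small example of updating a pre-test probability with a test result. The sensitivity, specificity, and base-rate values are hypothetical placeholders, not figures for any real test or condition.

    ```python
    # Minimal sketch of Bayesian updating for a clinical finding.
    # Sensitivity, specificity, and the pre-test probability are made-up values.

    def post_test_probability(pre_test_p: float,
                              sensitivity: float,
                              specificity: float,
                              test_positive: bool = True) -> float:
        """Update a pre-test (base-rate) probability given a test result (Bayes' rule)."""
        if test_positive:
            true_pos = sensitivity * pre_test_p
            false_pos = (1 - specificity) * (1 - pre_test_p)
            return true_pos / (true_pos + false_pos)
        false_neg = (1 - sensitivity) * pre_test_p
        true_neg = specificity * (1 - pre_test_p)
        return false_neg / (false_neg + true_neg)

    # If a "finding" is nearly as common in asymptomatic people as in symptomatic
    # ones (i.e., the assessment has poor specificity), a positive result barely
    # moves the needle: a 10% pre-test probability rises only to about 12-13%.
    print(post_test_probability(pre_test_p=0.10, sensitivity=0.90, specificity=0.30))
    ```

    This is why Step 1’s base rates matter so much: if a finding is ubiquitous in people without symptoms, identifying it tells us very little about whether it is the problem.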

    Overall, these are a few examples of the biases and fallacious reasoning that science as a method attempts to overcome. The methodology is beneficial for healthcare in furthering our understanding of pathology and interventions. However, we must recognize the shortcomings of science as a method – specifically, in the healthcare context, over-weighting information derived from calculative, physical objectification of the body when trying to understand the human experience. Although this isn’t the point of this overall article, I would be remiss not to state that we must strive to find balance between using science as a method for healthcare and recognizing the types of questions science is bound to and the information we can garner, being cautious of slipping into scientism. There are strong arguments for expanding beyond just calculative thinking and layering in meditative/reflective thinking (see Afflicted by Nicole Piemonte, PhD). Our final section will discuss evidence based practice further.

    What is Evidence-Based Practice?

    Now that we’ve discussed defining evidence and the need for research evidence to overcome cognitive biases and fallacious reasoning, let’s discuss evidence based practice a bit more. As stated previously, the EBP movement was born out of a need to move beyond experience-based practice (“like … my opinion, man”) or supposed bio-plausibility alone, and toward integration of research evidence for shared clinical decision making.

    Most are likely familiar with the “three-legged stool” or “3-pronged” version of EBP, and if you’ve ever been involved in a discussion on this topic, a common argument is for equal weighting of each leg. Interestingly, the most well-known member of the EBP group, David Sackett, wrote a paper with others in 1996 titled Evidence based medicine: what it is and what it isn’t, in which they define EBP as:

    “Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.”

    A common counter to EBP is the idea of “cookbook” medicine, and being married to a pastry chef has taught me that we often need a recipe to follow. However, given the fact that clinical practice operates at the individual level, we are then forced to generalize the research evidence on a given topic to the individual’s case. Therein lies the role of clinical expertise: combining current research evidence on a topic with our clinical experience as it relates to an individual’s case, i.e. EBP. 

    Sackett and colleagues again: “External clinical evidence can inform, but can never replace, individual clinical expertise, and it is this expertise that decides whether the external evidence applies to the individual patient at all and, if so, how it should be integrated into a clinical decision.” 

    The interesting aspect of this discussion is the utter lack of any mention of the equal weighting we often hear about in this conversation. Instead of a 3-legged stool, I prefer the Pillars of Creation (captured by the Hubble telescope in 1995 and rephotographed in 2014). The image depicts 3 unequal pillars that I would assign, left to right: current best research evidence, clinical expertise, and patient preferences. Interestingly, the image captures a collection of gas and dust attempting to form new stars while nearby stars emit light that threatens to erode the pillars – we can think of this light as new evidence threatening our current beliefs about a topic.

    “BuT In mY ExPeRiEncE”

    This is another common argument from a perceived position of authority, and as stated earlier, experience has a place but (typically) shouldn’t stand alone as the basis for decision making. Even in scenarios where we have no available research evidence specific to the issue, we can likely extrapolate and generalize from other related contexts (see current recommendations from the WHO/CDC regarding minimizing the spread of COVID-19).

    Where does experience tend to get us? Choudhry et al published an article in 2005 titled Systematic Review: The Relationship between Clinical Experience and Quality of Health Care and found: “Relationship between clinical experience and performance suggests that physicians who have been in practice for more years and older physicians possess less factual knowledge, are less likely to adhere to appropriate standards of care, and may also have poorer patient outcomes.” 

    In other words, it would appear we do worse when relying solely on the flawed observations and reasoning we have accumulated over time. We also can’t forget how often healthcare providers overestimate the benefits of our care and underestimate our harms. Hoffmann 2017 This is also likely why we see patients with similar perceptions — overestimating the benefits of treatments and underestimating harms Hoffmann 2015 — given that in many scenarios we (clinicians) are setting patient expectations and influencing their preferences (there are obviously other influential variables such as the internet, family members, and social networks).

    If we zoom out, we can see how easily a therapeutic illusion can occur, defined by Thomas KB as “…an unjustified enthusiasm for treatment on the part of both doctors and patients, which is a proposed contributor to the inappropriate use of interventions.” Thomas KB 1978 A perfect storm of overdiagnosis and overtreatment can occur due to our own confirmation bias + heuristics + therapeutic illusion. A recent editorial outlines quite well how this occurs in clinical practice – Musculoskeletal Healthcare: Have We Over-Egged the Pudding?. Returning to the pastry world (I verified this with my wife, Erica), over-egging a pudding spoils it by using too many eggs – or, in our case, medical overuse (doing too much). This editorial discusses 4 aspects of overdiagnosis. The issue is complex and multifactorial. Often we are indoctrinated in school to look for supposed issues (see the above discussion on VS), even where we lack evidence that a true problem exists. We then base consultations on assessments designed to look for flaws that are merely normative variants/adaptations (posture, ROM, strength/weakness, alignment, lumbar “degeneration”, rotator cuff tears, FAI, meniscus tears, etc.). These erroneous searches (acting as Sherlock Holmes) can lead to false narratives, false beliefs, and erroneous expectations about unnecessary interventions, culminating in maladaptive conditioned behaviors.

    Zooming out further, much of healthcare in this realm is based on reducing pain and validating medical necessity on any report of pain. Recall the narratives described above used to validate interventions treating specific tissue issues. The example of low back pain applies, with many clinicians claiming that all low back pain has a cause and that expert assessment can find it – despite evidence that doesn’t support such easy thinking as it relates to pain, and despite the many downstream consequences such an approach causes. This leads to my last point: a lot of this relates to the idea that we are better in clinical practice than we actually are, underestimating our harms and overestimating our benefits. “I’m ahead of the evidence! Just come and watch my success in clinical practice.” No, you aren’t — and it’s ok, I’m not either — and if we could stop and slow ourselves down to read the evidence, we’d see this.

    We must be willing to tolerate uncertainty in clinical practice. In the words of Voltaire, “Uncertainty is an uncomfortable position, but certainty is an absurd one.” According to Simpkin et al, “Too often we focus on transforming a patient’s grey-scale narrative into a black-and-white diagnosis that can be neatly categorized and labeled. The unintended consequence — an obsession with finding the right answer, at the risk of oversimplifying the richly iterative and evolutionary nature of clinical reasoning — is the very antithesis of humanistic individualized patient-centered care.” The authors go on to state, “We can speak about ‘hypotheses’ rather than ‘diagnoses,’ thereby changing the expectations of both patients and physicians and facilitating a shift in culture.” I’m extremely skeptical the transition from diagnostic labels to hypotheses will occur anytime soon, but hopefully this will at a minimum stifle our certainty.

    How have we been doing with EBP since the 1990s? 

    It depends on whom you ask. I tend to remain optimistic that the movement is having a beneficial effect on clinical practice, but uptake is still limited. We will likely continue to see an evolutionary process for EBP as we gain new understanding of evidence and how best to integrate it into clinical practice. Djulbegovic and Guyatt released an article in 2017, Progress in evidence-based medicine: a quarter century on. Recall our prior evidence hierarchy chart above, with expert experience as the base and RCTs as the pinnacle in biomedicine. In the article the authors outline 3 major epistemological updates to EBP:

    1. Not all evidence is created equal
    2. Pursuit of “truth” is best accomplished by evaluating the totality of evidence on a topic
    3. Clinical decision making requires consideration of patients’ values and preferences

    Overall, a major takeaway is learning to weigh evidence based on the quality of its methodology. The authors proposed an update to the concept of a hierarchy of evidence (see above), in which we consider the methodology of studies: study design, control of and risk for biases, etc. The authors specifically discuss the GRADE criteria (Grading of Recommendations Assessment, Development, and Evaluation), but there are MANY similar assessment tools and criteria.
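    As a toy illustration of that “not all evidence is created equal” logic – a deliberate simplification for this article, not the actual GRADE instrument or its full criteria – the general pattern can be sketched as: start a certainty rating from the study design, then adjust it downward for quality concerns (e.g., risk of bias, imprecision) or upward for factors such as very large, consistent effects.

    ```python
    # Toy sketch of a GRADE-style certainty rating: start from study design,
    # then downgrade or upgrade. A simplification for illustration only.

    LEVELS = ["very low", "low", "moderate", "high"]

    def rate_certainty(design: str, downgrades: int = 0, upgrades: int = 0) -> str:
        """Return a rough certainty level for a body of evidence."""
        start = 3 if design == "rct" else 1   # randomized trials start "high"; observational starts "low"
        idx = max(0, min(len(LEVELS) - 1, start - downgrades + upgrades))
        return LEVELS[idx]

    # A body of randomized trials with serious risk of bias and imprecision:
    print(rate_certainty("rct", downgrades=2))           # -> "low"
    # Observational evidence showing a very large, consistent effect:
    print(rate_certainty("observational", upgrades=1))   # -> "moderate"
    ```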

    Furthermore, EBP isn’t without critiques. One critic in particular comes to mind: John Ioannidis, a Stanford professor who has spoken out quite vocally about the hijacking of EBP. To his credit, he shines a spotlight on several concerning factors regarding how science is conducted by humans, which inherently brings cognitive biases, fallacious/motivated reasoning, and competing interests into play. However, this doesn’t mean we should scrap the approach; rather, it calls for continued efforts to evolve EBP for the betterment of society.

    Take-Home Message

    In conclusion, we all have beliefs about the world; the question becomes: what level of evidence are we using to substantiate those beliefs? In the context of guiding others’ lives and decision making (a situation we are often in during clinical practice), we should use the current best research evidence when possible to inform shared decision making. Having a general understanding of scientific methodology and statistical analysis goes a long way toward assessing evidence and its relevance to patient cases. Much of this is similar to improving toward goals in the gym – dedication to the process, completing sets and reps, and practice. Said differently, preemptively carving out time daily or weekly to read research relevant to your field can go a long way toward improving at assessing evidence. Like most things in life, there’s no magic bullet here. Finally, and of relevance in today’s social media world, we should be cautious as champions of EBP about believing that claims of being “evidence-based” somehow elevate us above others. The phrase has almost become an oxymoron at this point. Perhaps, on some level, we should follow Epictetus’s advice: “Don’t explain your philosophy. Embody it.”

    References:

    1. Offit, Paul A. Bad Advice: Or Why Celebrities, Politicians, and Activists Aren’t Your Best Source of Health Information. Columbia University Press, 2018.
    2. Guyatt G. Evidence-Based Medicine JAMA. 1992; 268(17):2420-.
    3. Oxman AD. Users’ Guides to the Medical Literature JAMA. 1993; 270(17):2093-.
    4. Guyatt GH, Haynes RB, Jaeschke RZ, et al. Users’ Guides to the Medical Literature JAMA. 2000; 284(10):1290-.
    5. Singal AG, Higgins PDR, Waljee AK. A Primer on Effectiveness and Efficacy Trials Clinical and Translational Gastroenterology. 2014; 5(1):e45-.
    6. Hartman SE. Why do ineffective treatments seem helpful? A brief review Chiropr Man Therap. 2009; 17(1).
    7. Testa M, Rossettini G. Enhance placebo, avoid nocebo: How contextual factors affect physiotherapy outcomes Manual Therapy. 2016; 24:65-74.
    8. Master, Zubin & Smith, Elise. (2014). Ethical Practice of Research Involving Humans. 10.1016/B978-0-12-801238-3.00178-1. 
    9. Miller FG, Brody H. A critique of clinical equipoise. Therapeutic misconception in the ethics of clinical trials. Hastings Cent Rep. 2003; 33(3):19-28. 
    10. Cook C, Sheets C. Clinical equipoise and personal equipoise: two necessary ingredients for reducing bias in manual therapy trials. Journal of Manual & Manipulative Therapy. 2011; 19(1):55-57.
    11. Peerdeman KJ, van Laarhoven AI, Keij SM, et al. Relieving patientsʼ pain with expectation interventions PAIN. 2016; 157(6):1179-1191.
    12. Peerdeman K, van Laarhoven A, Bartels D, Peters M, Evers A. Placebo-like analgesia via response imagery Eur J Pain. 2017; 21(8):1366-1377.
    13. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research J R Soc Med. 2011; 104(12):510-520.
    14. Botvinik-Nezer, R., Holzmeister, F., Camerer, C.F. et al. Variability in the analysis of a single neuroimaging dataset by many teams. Nature (2020). https://doi.org/10.1038/s41586-020-2314-9
    15. Chojniak R. Incidentalomas: managing risks. Radiol Bras. 2015;48(4):IX‐X. doi:10.1590/0100-3984.2015.48.4e3
    16. Hayward R. VOMIT (victims of modern imaging technology)–an acronym for our times BMJ. 2003; 326(7401):1273-1273.
    17. Ashby D, Smith AFM. Evidence-based medicine as Bayesian decision-making Statist. Med.. 2000; 19(23):3291-3305.
    18. Malloy D, Martin R, Hadjistavropoulos T, et al. Discourse on medicine: meditative and calculative approaches to ethics from an international perspective Philos Ethics Humanit Med. 2014; 9(1):18-.
    19. Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t BMJ. 1996; 312(7023):71-72.
    20. Choudhry NK, Fletcher RH, Soumerai SB. Systematic Review: The Relationship between Clinical Experience and Quality of Health Care Ann Intern Med. 2005; 142(4):260-.
    21. Hoffmann TC, Del Mar C. Clinicians’ Expectations of the Benefits and Harms of Treatments, Screening, and Tests JAMA Intern Med. 2017; 177(3):407-.
    22. Hoffmann TC, Del Mar C. Patients’ Expectations of the Benefits and Harms of Treatments, Screening, and Tests JAMA Intern Med. 2015; 175(2):274-.
    23. Thomas KB. The consultation and the therapeutic illusion. BMJ. 1978; 1(6123):1327-1328.
    24. Maher CG, O’Keeffe M, Buchbinder R, Harris IA. Musculoskeletal healthcare: Have we over‐egged the pudding? Int J Rheum Dis. 2019; 22(11):1957-1960.
    25. Simpkin AL, Schwartzstein RM. Tolerating Uncertainty — The Next Medical Revolution? N Engl J Med. 2016; 375(18):1713-1715.
    26. Djulbegovic B, Guyatt GH. Progress in evidence-based medicine: a quarter century on The Lancet. 2017; 390(10092):415-423.
    27. Ioannidis JP. Evidence-based medicine has been hijacked: a report to David Sackett Journal of Clinical Epidemiology. 2016; 73:82-86.