When Logic Fails: Part 1

Austin Baraki
August 14, 2018
Reading Time: 6 minutes

    Take-home points:

    1. In the context of complex systems, logical analysis can be useful in generating hypotheses, but not for drawing conclusions with a high degree of confidence, because there are almost always confounding factors that have not been discovered or accounted for.
    2. We must therefore always be cautious with the claims and conclusions we draw from our own observational experience. This is why scientists and evidence-based practitioners carefully qualify their statements in order to avoid overstepping, and clearly identify conjecture that lacks supporting evidence. In contrast, we must be deeply suspicious of anyone making very confident claims in the absence of controlled evidence.
    3. We must continually seek out evidence — even uncomfortable evidence — that controls for our human tendencies and biases, in order to better inform our clinical practice and interactions with clients and patients.

    I wrote an article titled Aches & Pains wherein I contrasted the traditional biomedical approach to pain with the modern biopsychosocial approach, which accounts for the complex interactions between biological, psychological, and social factors in the human pain experience. I discussed examples of how these issues influence the practice of clinicians including chiropractors, physicians, physical therapists, and massage therapists. In particular, I described how practitioners often operate within their own “silo”, tending to identify and anchor on issues they are trained to look for and treat, frequently failing to recognize and appreciate the broader, more complex picture.

    Today I’d like to expand those ideas a bit.

    Clinicians and coaches all have their own biases, regardless of specialty, sport, or training approach. We are inherently biased towards our own practices and approaches — otherwise, why would we recommend them over others? We all like to believe we’ve come to our present thinking through experience and careful, objective observation and reasoning, and — unless specifically trained otherwise — we naturally tend to rely on this over seemingly more abstract scientific data.

    These tendencies are driven by the erroneous belief that our senses allow us to perceive the world directly, as it is, rather than observing a subjective interpretation of reality as constructed by our brains (known as naive realism). We tend to believe that our observations are “objective”, that our perception of the world has perfect fidelity, and that those who disagree with us are simply failing to observe as carefully as we are, or are otherwise biased or uninformed. Numerous lines of experimental evidence in neuroscience and cognitive psychology over the past century have shown this concept to be inaccurate.

    In a similar vein, we are also highly prone to confirmation bias, in which we tend to search for and interpret information in a way that confirms our pre-existing beliefs. This is particularly prevalent in the context of emotional topics or other deeply entrenched beliefs, many of which we learn from a particular social group we identify with.

    Furthermore, we tend to discount information that conflicts with our beliefs, and become even more polarized or entrenched in the face of contradictory evidence — in fact, we’d often rather be wrong if it keeps us in favor with our social group (see: tribalism).

    Science provides the best tool we have to control for these sorts of factors and gradually approach “Truth” (which deserves an entire discussion of its own). Of course, we humans are the ones performing science. It is therefore also fallible unless extraordinary efforts in study design (e.g., proper sampling, randomization, and blinding) are taken to maintain rigor and to control for our own tendencies. But even when high-quality data are obtained, their interpretation presents further problems given our biases, particularly when the results conflict with our previously held beliefs.

    Our brains value consistency. Encountering new information that contradicts our previously held beliefs results in psychological discomfort known as cognitive dissonance. Overcoming this discomfort and actually changing our beliefs is a fairly difficult and complex thing to do, so our immediate reaction is to be far more skeptical of claims we dislike compared to those we (or our social group) agree with.

    Interestingly, when we do change our minds about something, our brains often “delete” the prior beliefs, and we might convince ourselves that the updated belief is what we have always thought (see: Belief Change Blindness). This narrative makes us feel better about ourselves for being perfectly rational, consistent thinkers over time — despite the fact that none of us actually are. If you doubt this, go back and read some of your writings or social media postings from high school or college and see how consistent you’ve been since then.

    Practice Reversal

    Medical history is littered with examples of dramatic reversals in practice. The effectiveness of a particular practice may have initially seemed perfectly obvious based on simple logic and an understanding of basic physiology, which was then “confirmed” by clinical observation.

    For example, cardiologists have historically recommended coronary artery stent placement to open up the blood vessels of patients with heart disease and stable chest pain symptoms. Blood flow to the heart is good, and the patient comes back to the clinic after stenting saying they feel great, so this makes perfect sense … right?

    Lumbar fusions were (and often still are) performed on patients with chronic low back pain to eliminate the “pain generator” and replace it with a non-innervated, rigid segment. The patient would return for their 4-week post-op appointment feeling less pain. We’ve seen it with our own eyes — so it clearly works, just like we thought … right?

    Once controlled data indicated a lack of benefit for these and many other interventions compared to placebo interventions (known as “sham” procedures), practitioners were left in an uncomfortable position and had to wrestle with the resulting psychological discomfort. Many of these situations result from post hoc fallacies, inappropriately assuming the sequence of intervention and observed effect to represent direct causation. Clinicians are also known to overestimate the potential for benefit from their treatments, and underestimate the downsides [1]. There are hundreds of other similar practices that have been refuted over the years [2].
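
    To see how a post hoc fallacy can arise, consider a minimal simulation, sketched below in Python with entirely made-up numbers purely for illustration. Patients tend to seek care when their symptoms flare, and then drift back toward their usual baseline regardless of what is done to them. An uncontrolled observer therefore sees improvement after every procedure, while a sham-controlled comparison isolates the specific effect, which in this toy model is zero by construction.

        import random

        random.seed(42)

        SPECIFIC_EFFECT = 0.0  # the procedure is inert by construction; raise this to model a real effect

        def simulate_patient(treated: bool) -> tuple:
            """Return (pain_before, pain_after) for one patient on a 0-10 scale."""
            usual_pain = random.gauss(5.0, 1.0)   # the patient's long-run average pain
            flare = abs(random.gauss(2.5, 1.0))   # they present for care during a flare
            pain_before = usual_pain + flare
            # After the visit, pain regresses toward the usual baseline (natural
            # history + regression to the mean), minus any specific treatment effect.
            pain_after = usual_pain + random.gauss(0.0, 1.0) - (SPECIFIC_EFFECT if treated else 0.0)
            return pain_before, pain_after

        def mean(xs):
            return sum(xs) / len(xs)

        n = 500
        real = [simulate_patient(treated=True) for _ in range(n)]
        sham = [simulate_patient(treated=False) for _ in range(n)]

        real_improvement = mean([before - after for before, after in real])
        sham_improvement = mean([before - after for before, after in sham])

        print(f"Improvement after real procedure: {real_improvement:.2f} points")
        print(f"Improvement after sham procedure: {sham_improvement:.2f} points")
        print(f"Specific effect (real - sham):    {real_improvement - sham_improvement:.2f} points")

    Both groups improve by roughly two and a half points, and the real-minus-sham difference, the only number that actually speaks to the procedure itself, hovers near zero. A practitioner who only ever sees their own treated patients has no way to distinguish this pattern from genuine efficacy.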

    The typical pattern involves practitioners who perform these procedures as a substantial portion of their practice digging their heels in, describing how they have SEEN IT WORK (while ignoring or explaining away cases where it didn’t work), and planning to continue the practice based on their “objective experience”, regardless of what the data show [3,4]. The data simply must be wrong, somehow. These practitioners – particularly older ones – frequently discount researchers as inexperienced non-clinicians, or search for the slightest flaw in the data to justify a wholesale rejection of the findings. Perhaps it should not be surprising, then, that a 2005 systematic review of physician practice and outcomes found that [5]:

    “[The] relationship between clinical experience and performance suggests that physicians who have been in practice for more years and older physicians possess less factual knowledge, are less likely to adhere to appropriate standards of care, and may also have poorer patient outcomes.”

    Now, just imagine the gut reaction of an older, experienced clinician to these findings.

    Of course, science is far from perfect in practice – unscrupulous researchers, conflicts of interest, industry influence, and predatory journals are major problems, among many others. Critical appraisal of scientific research is essential to weed out “bad data”. However, these practitioners’ motivated skepticism presents a very interesting contrast to their own willingness to adopt a particular practice in the first place despite equally flawed, worse, or even absent data. Often, their initial enthusiasm for performing a procedure is based on logical reasoning (or what I call “physiologic hypothesizing”) alone.

    The Role of Logical Analysis

    These examples show some of the problems associated with human observational experience in a clinical context. The analysis applied to justify such interventions is almost always based on an incomplete understanding of the complexity of human physiology and neurobiology. For example, traditional chiropractic theories of spinal vertebral subluxation and manipulation at one time appeared superficially plausible — but have now been soundly refuted in favor of a more nuanced explanation for what we observe, based on a modern understanding of pain and placebo neurophysiology in the clinical setting.

    There are always unaccounted-for factors when you don’t know what you don’t know. In the context of complex systems, logical reasoning can be hypothesis-generating. However, these unknown factors severely limit the utility of logical reasoning alone when attempting to draw accurate conclusions with a high degree of confidence.

    This is where data help us fill in the gaps — if we are willing to accept that our uncontrolled real-world observations are not as objective as we think. Fortunately, we have studied vertebral motion (or the lack thereof) with spinal manipulation, tested clinicians’ palpatory skills and inter-rater reliability, performed sham-controlled trials of spinal manipulation, and researched the role of contextual factors and therapeutic alliance to better explain the mechanisms of the effects observed in practice. Unfortunately, many practitioners refuse to acknowledge this evidence and still practice under the ongoing illusion of traditional subluxation theory today. We must be willing to change our minds despite what we think we’ve “seen with our own eyes”.
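
    As a concrete example of what testing inter-rater reliability looks like, suppose two clinicians independently palpate the same twenty patients and record whether they detect a “dysfunctional” segment. Agreement is then summarized with a chance-corrected statistic such as Cohen’s kappa. The sketch below (Python, using hypothetical ratings rather than data from any real study) shows how two raters can agree on 70% of patients yet demonstrate essentially no agreement beyond chance, because both “find something” in most patients.

        def cohens_kappa(rater_a, rater_b):
            """Cohen's kappa for two raters giving binary findings (1 = 'dysfunction found')."""
            n = len(rater_a)
            observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            # Chance agreement: probability both say 1 plus probability both say 0,
            # assuming the raters answer independently at their own base rates.
            p_a1 = sum(rater_a) / n
            p_b1 = sum(rater_b) / n
            expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
            return (observed - expected) / (1 - expected)

        # Hypothetical palpation findings on 20 patients (not real data)
        rater_a = [1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0]
        rater_b = [1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0]

        print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # ~0.06 despite 70% raw agreement

    Raw agreement of 70% sounds respectable, but because both raters report a finding in 80% of patients, nearly all of that agreement is expected by chance alone, and kappa lands near zero. This low chance-corrected agreement is exactly what the reliability studies mentioned above are designed to detect.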

    We coaches similarly have our own biases based on our experience and observations from working with many people. Being human, we tend to see what we (and our social in-group) want to see, and reject or rationalize what we (and our social in-group) don’t like, just like the clinicians described above.

    And to be clear, these errors in thinking do not make coaches or clinicians bad people. We tend to judge ourselves based on our intentions, while judging others based on their actions. Most coaches, clinicians, and other assorted practitioners generally have good intentions and genuinely want to help their clients and patients. And it’s hard to blame them when they witness the apparent effects of their interventions and ascribe the most logical, unidirectional cause-and-effect explanation – especially when unaware of the underlying complexity involved.

    In part 2, we’ll move on from the clinic and examine the practice of coaching under the same lens.


    Austin Baraki is a physician, powerlifter, and coach with best lifts of a 620 lb squat, 420 lb bench press, and a 675 lb deadlift. He specialized in Internal Medicine and will be taking his boards soon. He enjoys trolling Jordan as much as possible on Instagram.

    Thanks to Michael Ray, MS, DC, Derek Miles, PT, DPT, and Thomas Campitelli for their assistance in editing this article.

    References


    1. https://www.ncbi.nlm.nih.gov/pubmed/28097303
    2. https://www.mayoclinicproceedings.org/article/S0025-6196(13)00405-9/abstract
    3. https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2195117
    4. https://implementationscience.biomedcentral.com/articles/10.1186/1748-5908-9-1
    5. http://annals.org/aim/fullarticle/718215/systematic-review-relationship-between-clinical-experience-quality-health-care