In Part I of this series, we discussed the role of resistance training in the management of common injuries in endurance athletes, as well as its role in injury risk reduction. In Part II, we will move on to examine the effects of resistance training on endurance performance.
The Interference Effect
A common concern among endurance athletes is that resistance training will hamper their performance — not unlike lifters’ concerns about conditioning/endurance activities affecting their strength. This is frequently described as the “interference effect”, though this label is partly a product of how the situation is framed. Eddens 2018 For example, there is also evidence for a “concurrent effect” of training, whereby a different training stimulus can have additive or synergistic effects on overall athletic performance. Skovgaard 2014, Mulastis 2018 Based on the available evidence, endurance training actually appears to have more of a limiting effect on the development of strength, hypertrophy, and power than resistance training does on endurance. Wilson 2012 However, the details of this interference effect depend on factors such as total training volume, frequency, timing, and level of adaptation, and are outside the scope of this article. The takeaway is that we have ample evidence showing that appropriately dosed resistance training has a positive effect on speed, running economy, VO2max, and other variables.
By this point we have hopefully established the utility of resistance training for endurance athletes. The obvious question becomes: how should it be included in an overall training program? First, we need to define the types of strength that we can develop. Beattie et al. describe three types based on the concept of the force-velocity continuum: Beattie 2014
- Maximal Strength – high load/low velocity, to develop maximal force production
- Explosive Strength (i.e., power) – medium to high load, medium to high velocity, to develop rate of force development (RFD)
- Reactive Strength – low load/high velocity, to emphasize the stretch-shortening cycle and tendon stiffness
While explosive and reactive strength can have a place in programming for endurance athletes, we can likely maximize our return on time investment by prioritizing the development of maximal strength in early phases of training. And while it is common to hear the saying “Just Load It!”, there is far more nuance than this statement suggests. For example, different training paradigms are warranted depending on whether an individual is training for health or for performance. As overall endurance training volume increases over time, more attention to the resistance training prescription is needed so as not to exceed an athlete’s capacity for adaptation. A “top-down” approach to endurance training is likely best paired with a simple approach to resistance training that emphasizes the basics, since we are not aiming to develop a specialized strength athlete.
The “top-down” approach to programming is what is typically thought of when discussing long-term periodization. Here, training is broken down from a “macrocycle” or “grand plan”, which is composed of a series of “mesocycles” or month-to-month plans, each of which contains a series of “microcycles” or week-to-week plans. Lorenz 2015 This allows the individual steps toward the endurance athlete’s goals to be planned out. DeWeese 2015 In a typical endurance training program, this is organized into a series of “blocks” including a “volume block”, a “speed block”, and a “taper”, although the naming may vary depending on the preferences of the coach. What is more important is that specific metrics are used to measure performance outcomes and to apply an appropriate stimulus for adaptation, while being mindful of the athlete’s recovery.
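The macrocycle/mesocycle/microcycle hierarchy described above can be sketched as a simple nested data structure. This is a minimal illustration only; the block names, durations, and the `Macrocycle`/`Mesocycle`/`Microcycle` types are assumptions for the example, not a prescription from any particular periodization model.

```python
# A minimal sketch of the "top-down" periodization hierarchy: a macrocycle
# (grand plan) composed of mesocycles (month-to-month blocks), each composed
# of weekly microcycles. All names and durations are illustrative.
from dataclasses import dataclass, field


@dataclass
class Microcycle:
    week: int
    focus: str  # e.g., "volume", "speed", "taper"


@dataclass
class Mesocycle:
    name: str
    microcycles: list = field(default_factory=list)


@dataclass
class Macrocycle:
    goal: str
    mesocycles: list = field(default_factory=list)


# Example: a 9-week plan built from three blocks.
season = Macrocycle(
    goal="spring marathon",
    mesocycles=[
        Mesocycle("volume block", [Microcycle(w, "volume") for w in range(1, 5)]),
        Mesocycle("speed block", [Microcycle(w, "speed") for w in range(5, 9)]),
        Mesocycle("taper", [Microcycle(9, "taper")]),
    ],
)
```

Structuring the plan this way simply makes the "plan the steps, then fill in the weeks" logic explicit; any spreadsheet or training log can serve the same purpose.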
A “top-down” approach to programming utilizing periodization is not without flaws, and there is not a large body of evidence supporting its use. Kiely 2018 What is likely beneficial is that it calls for a time in training that could be considered an “off-season” after a meet or race, where volume and intensity are low and the athlete is recovering. This allows for a perfect time to introduce resistance training into an endurance athlete’s programming.
Alternatively, if an athlete has been injured and unable to run, this period of reduced running volume is another opportune time to introduce resistance training. As the athlete begins their return to running progressions and mileage increases, the overall volume and variability of resistance training should decrease, serving primarily as a supplement to running. This is a common approach in rehabilitation contexts; however, we frequently find that upon discharge from rehabilitation, most athletes do not continue with resistance training in spite of its overall protective effect against future injury.
Designing a Program
So we now have a framework of when to introduce resistance training. The how should be based on the principles of overload, variability, specificity, and reversibility. Each of these principles can fit into an approach to the design of a simple program that will be effective for most endurance athletes. Recognizing that absolute strength outcomes on particular lifts are not the primary target here, resistance training approaches for endurance athletes should be designed around basic principles, with a focus on maximizing return on time investment with the lowest amount of risk. This will allow the athlete to derive substantial benefit from resistance training, while also allowing them to maximize their endurance training volumes. Alda 2019
Each athlete has a baseline level of fitness that must be stimulated beyond current abilities in order to improve.
Most endurance athletes’ strength training baseline is effectively zero, so any initial stimulus is likely to generate some amount of short-term adaptation, even at relatively low doses.
In addition to a “baseline” level of fitness, every athlete has an upper limit to the “dose” of training stimulus they can tolerate at any given time, which we will term the “maximum capacity“.
These two landmarks give us a window of opportunity within which to begin applying load safely and effectively for inducing adaptation. However, this window shifts upwards as adaptation occurs, and generating further adaptation requires progressive increases in workloads.
These concepts have been studied using the framework of “acute” versus “chronic” training loads, which compares the current (or acute) dose of training stimulus to the baseline level of fitness achieved through chronic training loads. Gabbett 2016 More specific to runners, there is evidence that a greater than 30% increase in weekly mileage over their average may place them at increased risk of injury, although there is conflicting evidence here as well. Damsted 2018 While it is tempting to isolate a single variable (like workload) to explain injury, we must avoid the temptation of reductionism and recognize that the mechanisms of sports injury are complex and multifactorial. Wiese-Bjornstal 2010, Windt 2017
It is important to recognize that there is an enormous inter-individual variability in how well a given athlete may tolerate changes in training load, which may make such absolute ratios and thresholds less useful on an individual level. While research into these concepts is ongoing, we have compelling evidence that maintaining moderate-to-high chronic training loads and avoiding high spikes in acute training loads is a reasonable approach to reduce the risk of injury (and comports with basic principles of good programming). Johnston 2019
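The acute-versus-chronic comparison above is simple enough to compute from a training log. Below is a minimal sketch: the 7-day acute and 28-day chronic windows follow common convention in this literature (e.g., Gabbett 2016), but the function name and window lengths are illustrative choices, and, per the caveats above, any threshold on the resulting ratio should be interpreted loosely at the individual level.

```python
# A minimal sketch of an acute:chronic workload ratio (ACWR) calculation
# from a list of daily training loads (e.g., miles run, or arbitrary load
# units). Window lengths are the conventional 7-day acute / 28-day chronic.
def acute_chronic_ratio(daily_loads, acute_days=7, chronic_days=28):
    """Compare the most recent average daily load against the rolling baseline."""
    if len(daily_loads) < chronic_days:
        raise ValueError("need at least one full chronic window of data")
    acute = sum(daily_loads[-acute_days:]) / acute_days
    chronic = sum(daily_loads[-chronic_days:]) / chronic_days
    return acute / chronic


# A steady 10 units/day yields a ratio of 1.0; spiking the last week
# to 13 units/day pushes the ratio above 1.0 (a "spike" in acute load).
steady = acute_chronic_ratio([10.0] * 28)
spiked = acute_chronic_ratio([10.0] * 21 + [13.0] * 7)
```

Note that this kind of ratio flattens a great deal of context (sleep, life stress, training history) into one number, which is exactly why the text above cautions against treating any single threshold as decisive.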
Overload is necessary to induce adaptation, but it often comes with Delayed Onset Muscle Soreness (DOMS), which deters many endurance athletes from continuing with training. There is also an initial decrease in performance, due to the fatigue generated while the positive adaptations take place. This can be seen in Figure 2, adapted from Clarke and Skiba, where the stimulus causes an initial decrease and then a rapid rise in adaptation. Clarke 2013
Figure 2. Effect of resistance training on performance over time. Initially the negative training effect (NTE) outweighs the positive training effect (PTE). Over time, and so long as proper training continues, the positive training effect will continue to outweigh the negative.
From a coaching or rehabilitation specialist’s perspective, it is imperative to set the expectation that there may be an initial decline in performance, and that this will diminish with time with consistent work. These expectations should include a discussion on the role of fatigue and fatigue management as well.
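The "initial dip, then improvement" pattern in Figure 2 can be illustrated with a Banister-style impulse-response model of the kind discussed by Clarke and Skiba: each training impulse contributes a slowly decaying positive "fitness" term and a faster-decaying but initially larger "fatigue" term. The parameter values below (gains k1, k2 and time constants tau1, tau2) are illustrative defaults for the sketch, not fitted values from any study.

```python
# A minimal sketch of a Banister-style fitness-fatigue model. Performance is
# baseline + fitness - fatigue, where fatigue is larger at first (k2 > k1)
# but decays faster (tau2 < tau1). All parameter values are illustrative.
import math


def predicted_performance(loads, p0=100.0, k1=1.0, k2=2.0, tau1=42.0, tau2=7.0):
    """Predicted performance on the day after the last logged session.

    loads: list of daily training impulses for days 1..n.
    """
    t = len(loads) + 1  # evaluate the day after the final session
    fitness = sum(w * math.exp(-(t - day) / tau1)
                  for day, w in enumerate(loads, start=1))
    fatigue = sum(w * math.exp(-(t - day) / tau2)
                  for day, w in enumerate(loads, start=1))
    return p0 + k1 * fitness - k2 * fatigue


# The day after a single hard session, fatigue dominates and performance
# dips below baseline; a month later, fatigue has largely dissipated while
# some fitness remains, so performance sits above baseline.
day_after = predicted_performance([10.0])
month_later = predicted_performance([10.0] + [0.0] * 29)
```

This mirrors the negative training effect (NTE) initially outweighing the positive training effect (PTE) in Figure 2, which is the expectation worth setting with athletes up front.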
Fatigue is the decrement in performance attributable to all of the stress an endurance athlete is exposed to, including physical, emotional, and environmental stress. While added stress often carries a negative connotation, it is an essential part of inducing adaptation as long as the athlete can tolerate and recover from it. And while “overload” is typically thought of in terms of volume, mileage/tonnage, intensity, and frequency, outside stressors such as an increased academic load, poor sleep hygiene, poor nutrition, or stress at home can also have negative effects on training and adaptation.
Current dogma is that an athlete gets better at running exclusively by running — and indeed, this has a large body of supporting peer-reviewed and experience-based evidence. Ronnestad 2018 The caveat is that in order to be a runner, one must first be an athlete. This means the person should possess some basic adaptations in strength, endurance, and power.
Many endurance athletes believe that resistance training will ultimately slow them down. However, not only has the evidence failed to show a significant interference effect in this context, it has even suggested a concurrent training effect that benefits endurance athletes. Ronnestad 2018 (If anything, endurance training seems to have more detrimental effects on strength development, although this is similarly dose- and context-dependent. Wilson 2012)
Of course, tying together the evidence discussed above for strength, power, and endurance: endurance-focused athletes do not need to train toward the center of the diagram above, but targeting the green area would likely help create a more durable athlete.
There is a case to be made for adding variation to programming, especially when developing new skills. A cursory internet search for resistance training for endurance athletes yields many programs with more than 10 exercises to be performed multiple times per week. This approach often assumes a linear relationship between a strength “deficit” and a special exercise needed to address said deficit. For example, a “weak” hip abductor (typically defined using some arbitrary standard) does not necessarily need to be addressed through isolated hip abduction exercise. Performing fewer, multi-joint exercises allows multiple deficits to be addressed using a more global and economical approach.
Figure 3. Variability in exercises based on intensity of load that can be applied
Decreasing the variability of exercises allows the endurance athlete to train at a lower overall volume, defined here as sets and repetitions of exercise. For example, 3 sets of 15 repetitions would put the total volume for that exercise at 45 working reps. In general, more volume at a given intensity produces more fatigue than less volume at the same intensity. Just as an athlete has to learn the technique for each new exercise, they also have to learn what resistance makes each exercise appropriately challenging. Exposing athletes to varying intensities of the same exercise offers a means for progress without unnecessarily increasing the total overall volume of training through numerous, less economical isolation-type exercises.
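The volume bookkeeping in the paragraph above is simple arithmetic, sketched here for concreteness. The function names are illustrative; "tonnage" (sets × reps × load) is the common extension of rep counting when loads differ between exercises.

```python
# A minimal sketch of the volume accounting described in the text.
def working_reps(sets, reps):
    """Total working reps for an exercise: sets x reps."""
    return sets * reps


def tonnage(sets, reps, load_kg):
    """Volume load ("tonnage"): sets x reps x load lifted."""
    return sets * reps * load_kg


# The example from the text: 3 sets of 15 reps is 45 working reps.
example_volume = working_reps(3, 15)
```

Tracking either number over time gives a crude but useful handle on how much total work a resistance training prescription is adding on top of an athlete's running volume.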
If a training stimulus is removed, adaptation to the stimulus decreases as well. This is why resistance training needs to be a foundational and ongoing part of an overall training program for the endurance athlete. Gains made through a short bout in rehab or offseason will not carry over into the season unless work is continually done to maintain these adaptations. In short, one “bolus” of training is insufficient to elicit a long-term, sustained change.
Figure 6. Graphical representation of loss of GainzZzTM after the removal of resistance training from an endurance training platform.
The best-designed, most theoretically “optimal” program ever created can be ineffective if it is not performed consistently over the long term. One isolated bout of resistance training, or even six weeks of resistance training, does not make an athlete resistance-trained. This makes adherence one of the most important variables in a program. At Barbell Medicine we obviously have a predisposition towards using barbells in training; however, if an athlete does not have access to them, or is not willing to use them, the program must be altered to meet their needs. Whether this involves kettlebells, machines, or 5-gallon buckets filled with water — it is still quantifiable load, and the shape of the implement is secondary.
In the next section of this series, we’ll look at how to practically implement resistance training for the endurance athlete.
Special thanks to the following individuals for their contributions: Austin Baraki, MD; Michael Ray, MS, DC; Josh Barabas, PT, MPT, OCS, CSCS; Brittany Barrie, PT, DPT; Christy Morgan, PT, DPT, SCS; Samuel Lyons, MS; and David Lewis.