Myofascial self release and how to ruin a good study

POSTSCRIPT: The manuscript of this paper has now been changed and a number of my comments are redundant. Please see the comments below.

It makes me really sad to see a well conducted and thoughtfully designed study undermined because the authors did the wrong analysis, and that analysis got through the peer review process. The whole point of peer review is to stop that happening.

Given that there is a lot of interest in myofascial self release, it was good to see its effects being researched. This study just appeared:

The immediate effect of bilateral self myofascial release on the plantar surface of the feet on hamstring and lumbar spine flexibility: A pilot randomised controlled trial
The Journal of Bodywork and Movement Therapies; Article in Press
Rob Grieve, Faye Goodwin, Mostapha Alfaki, Amey-Jay Bourton, Caitlin Jeffries, Harriet Scott
Self myofascial release (SMR) via a tennis ball to the plantar aspect of the foot is widely used and advocated to increase flexibility and range of movement further along the posterior muscles of a proposed “anatomy train”. To date there is no evidence to support the effect of bilateral SMR on the plantar aspect of the feet to increase hamstring and lumbar spine flexibility.
The primary aim was to investigate the immediate effect of a single application of SMR on the plantar aspect of the foot, on hamstring and lumbar spine flexibility. The secondary aim was to evaluate the method and propose improvements in future research.
A pilot single blind randomised controlled trial.
Twenty four healthy volunteers (8 men, 16 women; mean age 28 years ± 11.13).
Participants underwent screening to exclude hypermobility and were randomly allocated to an intervention (SMR) or control group (no therapy). Baseline and post intervention flexibility was assessed by a sit-and-reach test (SRT). Descriptive statistics for baseline and post intervention SRT and an independent t-test comparing differences in SRT change scores were conducted.
A statistically significant (p=0.02), greater increase of SRT change scores in the SMR intervention compared to the control group was found with a large effect size (d= 1.05).
An immediate clinical benefit of SMR on the flexibility of the hamstrings and lumbar spine was indicated and suggestions for methodological improvements may inform future research.

I really do not know why they called this a ‘pilot’ study, as it was simply a randomized controlled trial; nothing ‘pilot’ about it, but that is beside the point.

This study was well designed and conducted. The participants were randomized properly; they appear to have been blinded to the purpose of the study; the measurements were done by a researcher blinded to the intervention; everything was properly reported. It ticked all the boxes.

BUT, they were let down by the analysis. This is what they did:

There was a mean difference in the baseline SRT score for the control (21.58 cm)
compared to the intervention group (17.92cm), although this was not found to be statistically significant (p=0.43). The mean SRT change scores in the SMR intervention group increased by 2.42 cm from pre- to post-measurement, compared to 0.83 cm in the control group. An independent samples t-test showed a statistically significant (p=0.02), greater increase of SRT in the SMR intervention compared to the control group. The results showed a large pre-post effect size (d= 1.05), Cohen’s d.

They compared the baseline number with the outcome number within each group. The results were statistically significant in the intervention group and not in the control group, so they concluded the intervention worked and reported a large effect size, presumably for the pre-post effect within the treatment group. This is called a within-groups comparison.

That is not how you analyse the data in a clinical trial! The whole point of having a control group is that you compare the outcome in the intervention group with the outcome in the control group; i.e., they should have done a between-groups statistical test, comparing the 22.42 (±10.37) in the control group to the 20.33 (±11.37) in the intervention group. As there was some improvement in the control group (even though it was not statistically significant in the within-groups analysis they did), there may or may not be a significant difference between the outcomes of the two groups, and I am not convinced that there would be. Even if there were, the effect size would be very much smaller than what they reported.
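To make the distinction concrete, here is a minimal sketch in Python of the two approaches. The numbers are simulated to roughly resemble the reported group means and change scores; they are illustrative only, not the study's raw data, and the group size and standard deviations are my assumptions.

```python
# Illustrative sketch (simulated data, not the study's raw data): the flawed
# within-groups paired t-tests versus the correct between-groups independent
# t-test on change scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 12  # hypothetical per-group size

# Hypothetical pre/post sit-and-reach scores (cm) for each group
pre_intervention = rng.normal(18, 10, n)
post_intervention = pre_intervention + rng.normal(2.4, 2.0, n)
pre_control = rng.normal(22, 10, n)
post_control = pre_control + rng.normal(0.8, 2.0, n)

# Within-groups analysis (the flawed approach): a paired t-test inside
# each group, comparing baseline with outcome
t_wi, p_within_int = stats.ttest_rel(post_intervention, pre_intervention)
t_wc, p_within_ctl = stats.ttest_rel(post_control, pre_control)

# Between-groups analysis (the correct approach): an independent t-test
# comparing the change scores of the two groups
change_int = post_intervention - pre_intervention
change_ctl = post_control - pre_control
t_between, p_between = stats.ttest_ind(change_int, change_ctl)

# Cohen's d for the between-groups difference in change scores,
# using the pooled standard deviation
pooled_sd = np.sqrt((change_int.var(ddof=1) + change_ctl.var(ddof=1)) / 2)
d = (change_int.mean() - change_ctl.mean()) / pooled_sd

print(f"within-groups: intervention p={p_within_int:.3f}, "
      f"control p={p_within_ctl:.3f}")
print(f"between-groups: p={p_between:.3f}, Cohen's d={d:.2f}")
```

The point of the sketch is that the two within-groups p-values can easily straddle 0.05 (one "significant", one not) even when the between-groups test, the one that actually answers the trial's question, shows no reliable difference, and the between-groups effect size is typically much smaller than the within-group pre-post effect.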

I have reviewed other papers let down by doing a similar within groups analysis (here, here, here and here). It is disappointing when this happens.

As always, I go where the evidence takes me until convinced otherwise, and the results of this study may or may not be statistically significant, but we will not know without the right analysis.


5 Responses to Myofascial self release and how to ruin a good study

  1. Michigan Biomech December 19, 2014 at 6:22 pm #

The journal concerned having an impact factor of 0 might explain why it got through the peer review process.

  2. Leon Chaitow December 29, 2014 at 7:32 pm #

    As Editor-in-Chief of JBMT my response to Michigan Biomech is that he/she should seek out reality before casting aspersions – it’s a lot more scientific.
Details of JBMT's impact, while we wait for Thomson Reuters to issue their version, can be found here:

    OUR current Source-Normalized Impact per Paper (SNIP) is 1.186

    NOTE: The paper in question is still in proof form and opinion is being sought as to the accuracy or otherwise of the assertions made by Craig Payne above. If these are found to be correct (not yet established) modifications will be made before publication

  3. Rob Grieve January 8, 2015 at 8:39 pm #

    Hi Craig,

    Pleased to see you thought our study was of good quality. As in all research there are limitations. As a pilot study one of the aims was to review and report these in order to inform future research in the area. To state in the heading of your review that we ruined a good study is not fully correct. You stated; “They compared the baseline number with the outcome number within each group”. No, we compared the between group change scores using an independent t test between the intervention and control group.
    I am aware there are acknowledged validity issues with comparing change scores.

    As the paper was in proof form, it has enabled us to make modifications to the statistical analysis before final publication. As you say this was a well designed and thought out study in an area that is becoming very relevant to the lower limb and foot.

    Kind regards,
    Rob Grieve, PhD

  4. Leon Chaitow January 24, 2015 at 9:23 am #

Craig, the link you provided to the ResearchGate site where JBMT's impact factor is reported as 'zero' is misleading, inasmuch as they provide the Thomson Reuters figures and as of now JBMT is not reported there…therefore the 'zero' is not a reported figure, it is the absence of one.

    As you know there are other measures such as those offered by Elsevier – as per the link I provided – and do so again:

JBMT applied to Thomson Reuters 18 months ago. They announce annually, at mid-year, who they will include in their listing, and never disclose reasons for not accepting specific publications.
    When/if we are listed I expect our impact factor to be much the same as that reported by Elsevier’s SNIP figure.
