In yesterday's post on the quest for the ideal heel drop in a running shoe, I used a study to illustrate a point about the subject-specific responses of two subjects in that study. A comment was made about this being cherry picking. That is not what cherry picking is, and rather than continue to respond in the comments, I thought I would write a post to expand on the concept for wider consideration.
Let's say there was a lab-based study done on a particular intervention (it could be a foot orthotic or running shoe design feature; a running form change; a stretching exercise; or a strengthening activity), and to keep it simple, let's say the study measured the impact of that intervention on one parameter. Let's say the parameter measured was related to load in a particular tissue. This means that if the intervention can be shown to reduce load in that particular tissue, then that intervention might have the potential to be a good treatment for those who have a problem with that tissue.
If this hypothetical study found that there was a systematic reduction in the loads with the intervention, great. The recommendation is going to be that this intervention could be considered for those who need the load reduced in that particular tissue. (I won't get into the issue here about how you can't reduce the load in one tissue without increasing it in another; I wrote about that here).
However, if the hypothetical study did not find a systematic reduction in the loads with the intervention, then the traditional recommendation is going to be that this intervention should probably not be used for reducing load in that tissue. Traditionally this is probably where the story ends, but more attention is starting to be given to the subject-specific responses to interventions, even when the systematic response is a non-response.
If you look at the data in most of the studies that find no systematic differences, some subjects did get a response in one direction and some subjects did get a response in the other direction, so the mean response was "no response" (ie no mean or systematic differences). So if we pretend to look at the data in our hypothetical study, we might find that some subjects did get a reduction in load in the tissue with the intervention, some subjects did get an increase in load, and some subjects did not change; hence the mean response of no systematic differences. This means that the traditional recommendation from this sort of study might mean that some people who could be helped by the intervention are going to miss out (which is a bad thing), but also that those who could be hurt by the intervention are going to miss out (which is a good thing).
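The arithmetic behind a "no mean response" hiding real individual responses can be sketched with a few invented numbers (purely illustrative; these are not data from any actual study):

```python
# Hypothetical load changes with an intervention, one value per subject.
# Negative = load reduced (helped), positive = load increased (potentially hurt).
# All numbers are made up to illustrate the averaging effect.
responders_down = [-10, -8, -12, -9, -11]   # subjects whose load went down
responders_up = [10, 9, 11, 8, 12]          # subjects whose load went up
non_responders = [0, 0, 0, 0, 0]            # subjects with no change

all_subjects = responders_down + responders_up + non_responders
mean_change = sum(all_subjects) / len(all_subjects)

print(f"Mean change across all subjects: {mean_change:.1f}")   # 0.0
print(f"Subjects with load reduced:  {sum(c < 0 for c in all_subjects)}")  # 5
print(f"Subjects with load increased: {sum(c > 0 for c in all_subjects)}")  # 5
```

The group mean comes out at zero ("no response"), even though two-thirds of the subjects responded quite strongly, just in opposite directions.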
What is needed is an understanding of why, even though there were no systematic or mean differences, some subjects went one way and some went the other. What are the indicators or "clinical tests" that might be able to predict the direction of the response?
Looking at, understanding, and explaining the subject-specific responses is going to go a long way toward answering why one size does not fit all. If we can determine the indicators or "clinical tests" that predict the direction of the response, we can give better advice on which intervention to use to achieve what we think needs to be achieved.
A good, if overly simplified, example to raise here is the question of whether running shoes control pronation. If you look at the data in the studies that show they don't, some individuals have less pronation and some individuals have more pronation, but the mean response was no change in pronation with the running shoe. So is the conclusion that running shoes do not control pronation valid?
Does that make sense?
Last updated by Craig Payne.