Sensory integration research: Who is it for?

The March/April issue of AJOT has two articles on sensory integration that are worth discussing.

The first is Verification and clarification of patterns of sensory integrative dysfunction (Mailloux, Mulligan, Smith Roley, et al.). This article is another factor analysis study that has to be considered in the context of a number of other studies, including Ayres's (1989) original cluster and factor analyses that went into SIPT standardization, Mulligan's 1998 and 2000 cluster and factor analyses, and the critically appraised topic written by Davies and Tucker (2010).

I'm not sure how many street level practitioners read cluster and factor analysis studies, but I don't think that most people put them at the top of their reading list. I think this is because we don't spend a lot of time educating practitioners on these methods and what they mean. I personally find these statistical models interesting, but I also understand that they have a serious fundamental flaw: they are based on heuristic models of interpretation. In other words, in the case of the SIPT, we are trying to label conditions based on a defined set of variables that supposedly 'make up' a construct called 'sensory integration' or perhaps 'praxis.'
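For readers who have never waded into one of these studies, here is a minimal sketch of what an exploratory factor analysis actually does, written in Python with entirely made-up numbers (nothing below comes from the SIPT or from any of the studies named above). Scores on six hypothetical tests are generated from two hidden abilities, and the analysis tries to recover that grouping from the scores alone:

```python
# Illustrative only: synthetic data, not SIPT data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_children = 200
latent = rng.normal(size=(n_children, 2))  # two unobserved abilities

# Each hypothetical test draws mostly on one ability, plus noise.
loadings = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.0],   # tests 1-3
                     [0.1, 0.9], [0.0, 0.8], [0.2, 0.7]])  # tests 4-6
scores = latent @ loadings.T + rng.normal(scale=0.4, size=(n_children, 6))

fa = FactorAnalysis(n_components=2).fit(scores)
print(np.round(fa.components_, 2))  # estimated loadings: which tests 'hang together'
```

The math stops at the loadings. Deciding that a recovered factor 'is' praxis or 'is' sensory integration happens entirely outside the model, and that interpretive leap is exactly the heuristic step I am worried about.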

The truth is that we are using those 17 tests as a point of convenience even though we have a lot of data telling us that some of those tests have reliability problems. On top of that, we have also expanded our thinking into more dynamic systems models, and to be honest I have no idea how you apply factor analysis inside a world of non-linear dynamics. I guess I know enough to know that I don't have the math background for this kind of thinking.

Maybe it isn't a math problem as much as it is a philosophical problem, and that brings us around to the heuristics problem. I just can't help thinking that we are drawing contrived conclusions that might not really reflect the full data set. If you go through and read all the factor and cluster analyses that have been done, and the interpretations of those studies, you will see that factors and clusters have been identified, then clarified and redefined, and in this most recent study we have come full circle to claiming consistency with the original conclusions of Ayres.

If there are any street level people reading this stuff, they are probably wondering:

1. So which is most 'true': the Ayres data set, the Mulligan data set, the Davies/Tucker interpretation, or now the Mailloux/Mulligan et al. data set?
2. In the 20+ years of variability in conclusions, has any of this actually made a difference in how clinicians practice?
3. Is this even in sync with the notion of occupation-based practice?

I am concerned that decisions will be made about restandardizing the SIPT based on the heuristic interpretation of these data sets. Since we haven't historically done a good job of even defining what SI is, this is kind of like building a castle on a foundation of sand.

All of this leads to the overwhelming question of WHO CARES and WHO IS THIS REALLY WRITTEN FOR ANYWAY? This research has no application to practice. My concern is that in the next 20 years someone else will decide to be an eigenvalue purist who thinks THERE MUST BE a six-factor solution, and they will contribute to another 20 years of gear spinning. Will this bring our practice further along?
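For anyone wondering what an 'eigenvalue purist' would even be arguing about, here is a minimal sketch of the kind of rule of thumb at stake, again in Python with made-up data. Under the Kaiser criterion you retain as many factors as there are eigenvalues of the correlation matrix greater than 1; run it on pure noise and sampling variability alone will still push several eigenvalues over the cutoff:

```python
# Illustrative only: random data standing in for a 17-test battery.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(size=(200, 17))            # 200 'children', 17 'tests', pure noise
corr = np.corrcoef(scores, rowvar=False)       # 17 x 17 correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]   # sorted largest first

n_factors = int(np.sum(eigenvalues > 1.0))     # Kaiser criterion: eigenvalues > 1
print(f"Kaiser criterion retains {n_factors} factor(s) -- on pure noise")
```

The cutoff of 1 is itself a convention, not a law of nature, which is part of why insisting that THERE MUST BE a particular factor count worries me.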

******************
On to the next article...

Parham, Smith Roley, May-Benson, et al. wrote Development of a fidelity measure for research on the effectiveness of the Ayres Sensory Integration Intervention (ASI). This is a long-anticipated article that develops a fidelity measure for use in research on ASI. I understand that this is not a practice tool; the point is to more clearly operationalize our terms and definitions for research, which in theory is supposed to eventually inform our practice.

Structural and process elements were identified, but only the process elements were validated. Unfortunately, the only people who can tell you whether you are appropriately incorporating the process elements are a handful of specially trained experts, the same ones who defined what the process elements are. This kind of drives the whole fidelity instrument into a ditch of confirmation bias, and really I just don't know what to say about it from that point.

Structural elements were identified but not validated. Presumably these would be elements that could be more easily confirmed by untrained people. The problem with the structural elements is that you need post-professional training in SI, there are restrictive space and equipment requirements, and there are requirements for levels of communication that are rarely achieved in many practice settings.

These elements make ASI, as it is described, apropos of nothing: if only a couple of experts can tell you whether you are doing it, and if your practice setting precludes the structural elements, then a fidelity measure won't matter much, because the model is not applicable to the realities of street level practice. In my thinking, these two articles do not contribute to practice, and they demonstrate quite clearly that we should re-work the model until we come up with something that reflects actual practice and perhaps incorporates a broader occupation-based framework. While we are at it, we might drop those sensory processing interventions that have not been supported by research.

References:

Ayres, A. J. (1989). Sensory Integration and Praxis Tests. Los Angeles: Western Psychological Services.

Davies, P. L., & Tucker, R. (2010). Evidence review to investigate the support for subtypes of children with difficulty processing and integrating sensory information. American Journal of Occupational Therapy, 64, 391–402.

Mailloux, Z., Mulligan, S., Smith Roley, S., et al. (2011). Verification and clarification of patterns of sensory integrative dysfunction. American Journal of Occupational Therapy, 65, 143–151.

Mulligan, S. (1998). Patterns of sensory integration dysfunction: A confirmatory factor analysis. American Journal of Occupational Therapy, 52, 819–828.

Mulligan, S. (2000). Cluster analysis of scores of children on the Sensory Integration and Praxis Tests. Occupational Therapy Journal of Research, 20(4), 258–270.

Parham, L. D., Smith Roley, S., May-Benson, T. A., et al. (2011). Development of a fidelity measure for research on the effectiveness of the Ayres Sensory Integration Intervention. American Journal of Occupational Therapy, 65, 133–142.

Comments

Sarah said…
Thank you for continuing to write compelling posts about SI. I am an OT with one year of experience and am still developing my views on SI. I have found that (in the community in which I work) SI is unquestioningly used as a modality for many atypical behaviors, with no apparent results. Some clients have been receiving "SI" from these practitioners for over 10 years, with no change in function. I greatly appreciate your scientific and realistic view of SI, and I hope you continue to post on this topic.
Anonymous said…
As an author on the first paper discussed, I will try to address some of your comments. I will post my response in two parts. While there are some inherent limitations with any form of statistical analysis, the large body of research on the sensory integration patterns, in a variety of populations and with a range of assessment tools, serves to increase the confidence with which we can view the findings.

Let me specifically address a few of the statements:
“The truth is that we are using those 17 tests as a point of convenience even though we have a lot of data telling us that some of those tests have reliability problems.”

While the factor and cluster analyses published in the SIPT manual, and those conducted by Mulligan and published in AJOT, used only the 17 SIPT tests, numerous prior studies, as well as our recent study, did not rely on the same tests or solely on those 17. The point of these types of studies is to more fully grasp the nature of the underlying functions which group together to form patterns of both ability and dysfunction. A review of the many previous studies by Ayres going back to the 1960s, with a wide range of measures and across many different groupings of subjects, as well as studies post Ayres, reveals a great deal of consistency in the grouping of functions and areas of difficulty. The tests and measures are not exactly the same, nor are the subjects, from study to study. This, in fact, increases our confidence that what is seen is undeniable.

As far as the “reliability” of the tests: all 17 tests of the SIPT have very high inter-rater reliability, and most have very acceptable test-retest reliability. The sample ultimately used for measuring reliability on the SIPT was quite small and was made up almost entirely of children who had identified issues. Given that all 17 tests have exceptional validity and are highly discriminative, we know that the main concern might be in using a few of the SIPT tests as an outcome measure. However, the tests are clearly a solid tool for determining both subtle and more significant issues, and in particular, the specific underlying explanation for those concerns.

I will respond to the next comments in a following post.

Sincerely,
Zoe Mailloux
Anonymous said…
Here is part 2 of my response.

“So which is most 'true': the Ayres data set, the Mulligan data set, the Davies/Tucker interpretation, or now the Mailloux/Mulligan et al. data set?”
This is a difficult comment to address, as it seems to reflect a broad lack of understanding of how professionals use literature and research to guide practice. Ayres's and Mulligan's studies, as well as our recent paper, reflect an evolving and refining process of clarifying sensory integration patterns. Davies and Tucker present only 4 studies: 2 that use parent-report measures of sensory modulation and 2 of the Mulligan studies. There is not a right-or-wrong or either-or answer to the comment; rather, there is just some clear confusion about what the literature presents.

“In the 20+ years of variability in conclusions, has any of this actually made a difference in how clinicians practice?”
It is actually more like 50+ years, and I would say the answer is certainly YES. It has made a huge difference for the thousands of children I have seen receive intervention, and if I even begin to extrapolate to the thousands of therapists I have seen study and apply this information, the answer is that it has made an incredibly significant difference. For those who have not bothered to fully understand the concepts, probably not.

“Is this even in sync with the notion of occupation-based practice?”
Again, for anyone who has taken the time to study the complex and important work of Ayres, there is never any question about how critical this information is to understanding and supporting occupation-based practice. If we do not take the time to fully understand the children, how can we best serve them and their families?

I hope that this author will not discourage anyone who is interested in reading and utilizing research in one of the most studied aspects of our profession in order to provide best practice.

Sincerely
Zoe Mailloux
I appreciate Zoe Mailloux taking the time to post. I'm not sure how long the comments were sitting in a separate queue, because comments posted under an 'anonymous' identity don't get sent to my email.

Anyway, we may agree to disagree about the importance of the reliability of some of the tests. I notice that whenever there is criticism of this nature, the responses come back as 'most' of the tests or 'generally' the tests, and I just don't think that addresses the criticism. 'Most' of the tests may be reliable, or 'generally' the tests are reliable, but indeed some are not. If we are including tests with known reliability problems in our analysis, then it seems fair to say we are still using this configuration as a point of convenience, simply because it already exists.

Why not just GET RID OF those tests that have less than acceptable reliability data?

This issue sits at the base of my criticism about the problems with heuristic models. We know this to be true because studies have shown that the SIPT alone does not always provide enough data for clinicians to come to a consistent 'diagnosis' or interpretation of the test. If the ability to consistently diagnose a problem sometimes depends on factors OUTSIDE of the test, then my point is: why not change the test to incorporate those measures? Wouldn't that strengthen the test?

There may also be differences between clinical and statistical consistency. Getting something 'right' 70% of the time wouldn't work too well on a clinical basis, and it doesn't inspire confidence in what you are saying.
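To put a rough number on that, here is a minimal sketch in Python, using made-up ratings rather than data from any of the studies discussed here, of how an agreement rate in that range looks once chance agreement is stripped out (Cohen's kappa):

```python
# Illustrative only: hypothetical ratings from two clinicians.
import numpy as np

# Each clinician classifies the same 20 children: 1 = dysfunction present, 0 = absent.
rater_a = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1])
rater_b = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1])

raw_agreement = np.mean(rater_a == rater_b)    # proportion of matching calls

# Chance agreement expected from each rater's base rates
p_a1, p_b1 = rater_a.mean(), rater_b.mean()
p_chance = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)

kappa = (raw_agreement - p_chance) / (1 - p_chance)
print(f"raw agreement = {raw_agreement:.2f}, kappa = {kappa:.2f}")
```

In this sketch the two raters match on 75% of cases, yet kappa comes out around 0.29, which conventional benchmarks would call only 'fair' agreement. That is the gap between statistical consistency and the kind of consistency a family sitting across the table would expect.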

My questions about what is 'true' are rhetorical, which I hoped would be evident from typing it as 'true.' I am not talking about truth from a statistical model; I am asking for an answer to a street level question. Clinicians may appreciate that our understanding is evolving, but as we evolve we probably should be careful about what we are telling families about their children, and perhaps we should be careful about whether it is fair to ask people to pay us so we can evolve our understanding. This gets at the core of the problem that people just don't like to talk about: what are our ethical responsibilities when we are in an evolution phase, and when do we have 'enough' information that the ethical questions are 'taken care of?' This seems to be a very fair question, because I do not stand alone when I ask questions about these things. It is in fact true that we have problems with insurance reimbursement DIRECTLY RELATED to the fact that we lack sufficient research. I know people don't like that, but it is just true.

Finally, I trust that no one who bothers to read here gets the impression that I am discouraging reading and utilizing research about SI. Actually, I have always kind of hoped that people reading this stuff on my blog would go read things for themselves, study it themselves, and also generate conversation. There is precious little evidence of constructive debate about these issues in our profession, so in my opinion, if anything posted here gets that happening, it is a good thing.

Thanks for your comments.

Chris
