More Than Half a Century Later and Still Waiting for Sensory Integration Evidence

I came into occupational therapy as a street-level clinician. Like many of my generation, I learned Ayres Sensory Integration (ASI) in school. I took additional steps to become certified to administer and interpret the Sensory Integration and Praxis Tests (SIPT) when they were first published, and tried to make sense of what the model promised. Over the years, I watched that test become outdated, all while waiting for the robust evidence base to emerge. I was told at countless workshops and conferences, "The research is coming."

It never did.

In the meantime, neuroscience did not stand still. We now have dynamic systems theory, heterarchical processing models, robotics, and AI-driven neural networks that give us far richer and more precise understandings of how brains actually develop and adapt. These frameworks have left Ayres’ mid-20th century metaphors behind.

I say this not as an outsider taking potshots. I am a clinician who has lived with the sensory integration model in practice. I am also an academic who has taught the next generation. Now I am again in a program director role and feel a deep responsibility for how students understand our profession's evidence base. I've put in the decades, and I know what it means to use a model at the street level and to see how it plays out in real children's lives. I have lived with the limitations, and kept hoping the research would catch up.

It hasn’t. And now, as new studies repackage the same flawed frames and statistical missteps, I feel compelled to say plainly what many clinicians quietly know: we cannot keep telling students there is still hope for this model when the evidence has not delivered after half a century.

I have been documenting concerns with the sensory integration model and research base in this blog for two decades. In 2016, I called for the profession to throw in the towel on its current framing and assessment tools, specifically stating:

Now it is time to turn the page, examine the research on anxiety and regulation and motor learning that is not so controversial, and find conservative evidence-based interventions that insurance companies pay for and our medical colleagues accept.

I have also repeatedly cautioned about the Pygmalion effect and the way our aspirations distort our evidence base. But still, nearly a decade after some of these very specific critiques, here we are again, this time with a trial pitting ASI against Applied Behavior Analysis (ABA), published in Autism Research. And once again, the same patterns repeat.

The new study linked above is built on a false dichotomy. ASI is described as targeting “developmental neuroplasticity” while ABA is framed as simply “changing environmental variables.”

This caricature is simplistic, misleading, and does not represent how interventions are delivered in actual clinical practice.

Every so-called "behavioral" intervention alters brain function. This should be a given.

Environmental variables drive neuroplastic change just as much as any sensory-based activity. To set ASI up as “neurological” and ABA as merely “behavioral” is rhetorical positioning, not science.

As with past studies, the results lean heavily on Goal Attainment Scaling (GAS).

The problems haven’t changed:

  • Parents and therapists set the goals, so bias and sunk cost are baked in.

  • GAS produces ordinal, individualized data, but researchers treat it like continuous, standardized scores.

  • Outcomes are too idiosyncratic to compare meaningfully across participants, yet the statistics plow ahead as if they weren’t.

For those interested in the statistical details: to obtain their results, the authors converted individualized GAS ratings into standardized T-scores and ran general linear models (GLMs) as if the scores were continuous, normally distributed, and comparable across participants.

The repeated problem here is that they weren't. This kind of parametric treatment of ordinal, biased data inflates significance and effect sizes, especially in a small, underpowered sample. Add in multiple uncorrected comparisons, and it's no surprise ASI 'looks good' on paper while standardized measures remain flat.
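The inflation problem described above is easy to demonstrate. The sketch below is a hypothetical simulation, not a reanalysis of the trial's data: it draws ordinal GAS-style ratings (-2 to +2) for two arms from the same null distribution, treats the ratings as continuous scores, and runs multiple uncorrected t-tests. The arm sizes and number of outcome goals are illustrative assumptions, not figures from the study.

```python
# Hypothetical simulation: why uncorrected parametric tests on ordinal
# GAS-style ratings produce "significant" results even when no treatment
# effect exists. Illustrative only; not the trial's data.
import math
import random
import statistics

random.seed(42)

LEVELS = [-2, -1, 0, 1, 2]   # GAS attainment levels (ordinal scale)
N_PER_ARM = 12               # small, underpowered arms (assumed for illustration)
N_OUTCOMES = 8               # multiple goals tested with no correction (assumed)
N_TRIALS = 2000              # number of simulated trials
T_CRIT = 2.07                # approx. two-sided 0.05 critical value, df ~ 22

def welch_t(a, b):
    """Welch t statistic, computed as if the ordinal ratings were continuous."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.fmean(a) - statistics.fmean(b)) / se if se else 0.0

false_alarms = 0
for _ in range(N_TRIALS):
    # Both "arms" are drawn from the SAME distribution: the null is true.
    any_significant = False
    for _ in range(N_OUTCOMES):
        arm_a = random.choices(LEVELS, k=N_PER_ARM)
        arm_b = random.choices(LEVELS, k=N_PER_ARM)
        if abs(welch_t(arm_a, arm_b)) > T_CRIT:
            any_significant = True
    if any_significant:
        false_alarms += 1

fwer = false_alarms / N_TRIALS
# With 8 uncorrected tests, the family-wise false-positive rate lands far
# above the nominal 5% even though no real effect exists.
print(f"Family-wise false-positive rate: {fwer:.0%}")
```

The point is not the exact number; it is that with several uncorrected comparisons on a null effect, the chance of at least one "significant" goal climbs toward one in three, which is more than enough to make a favored intervention look good on paper.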

But even if the stats weren't misapplied, we face a deeper problem: a lack of intervention specificity.

We don't really know what "ASI" or "ABA" meant in this trial. Yes, the paper cites fidelity checklists and training protocols, but in practice, complex interventions are messy, overlapping, and influenced by therapist style and by the inability to fully control the range of interventions and experiences happening in any given child's life. Without precise specification of what was delivered, "ASI" risks collapsing into a generic developmental approach, while "ABA" risks being little more than general behavior management.

As the intervention specificity literature (linked above) reminds us: if you can’t clearly define and isolate what was tested, you can’t reasonably claim causality, and you certainly can’t conclude that one model outperformed the other.

Instead, the only thing this study really shows is that a poorly specified intervention, measured with a biased tool, can be made to look effective with the wrong statistics.

So once again we have a small sample, underpowered analyses, and multiple outcomes with no serious correction, all dressed up in the language of neuroplasticity and comparative effectiveness.

And once again we have a 'limitations' postscript, which I generally interpret as an excuse. This time it's COVID. The authors note that pandemic disruptions may have affected intervention delivery and outcomes. That may be true; the pandemic complicated countless studies. But here it functions less like a neutral limitation and more like a preemptive shield designed to explain away null findings and protect the ASI narrative.

The reality is simple: if interventions weren’t delivered consistently, the comparative frame collapses. You can’t claim ASI outperformed ABA (or even performed at all) when neither treatment may have been implemented as designed. At that point, all that’s left is speculation, and speculation is not evidence.

This trial doesn't tell us ASI works, or that it works better than ABA. It tells us the profession is still stuck in decades-old theorizing, clinical framing, and pilot-level exploratory studies. We are still beating the same drum, hoping no one notices the tune hasn't changed.

If occupational therapy is serious about building an evidence base that matters, we have to stop recycling flawed models and methods and start doing the hard work:

  • Follow modern neuroscience. Neural development is reciprocal and dynamic, not a hierarchy of "building blocks" in which sensory-based intervention supposedly reaches the brain while other interventions do not. We need to stop framing this model in mid-century metaphors. PLEASE.

  • Specify interventions precisely. Fidelity checklists are not enough. We need clarity on what was actually delivered, how it differs from competing models, and what mechanisms are hypothesized to drive change.

  • Use appropriate measures and statistics. GAS may be fine as a clinical tool, but it’s indefensible as the primary outcome for research. We need standardized, psychometrically sound measures analyzed with the right statistical models.

  • Stop leaning on excuses. If trials are disrupted or interventions diluted, acknowledge that as disqualifying for comparative claims. Don’t use it as a rhetorical shield.

Occupational therapy has to let go of legacy models that no longer serve us. Otherwise, we risk being remembered less as innovators and more as curators of yesterday's clinical mythology.

More than fifty years into this model, and it is time to stop hoping the research will arrive.

We have to start building something new.
