Occupational Therapy's Real AI Problem Isn't ChatGPT - It's Us.
The profession has always had its share of plucky innovators and small-scale entrepreneurs, but the landscape has shifted. An increasing number of young therapists are building a “personal brand,” launching a subscription service, or producing downloadable content with a logo and a tagline. These are not just side hustles - for some, they are the work.
None of this is inherently bad - and I understand why it happens. Salary-to-debt ratios look like a punchline, and clinical autonomy is chipped away by policy and productivity expectations. It makes perfect sense that people look elsewhere for meaning, income, or both. But something gets distorted when the gravitational pull of monetization starts to reshape what counts as knowledge in the field.
The distortion becomes even more concerning when we mix in artificial intelligence. We’re now in an era where anyone with a collection of blog posts, opinions, and half-formed frameworks can feed them into a language model and produce a chatbot that speaks with confidence, warmth, and unlimited patience. And the chatbot, of course, never hedges. It doesn’t show its sources or disclose its blind spots. It simply packages the worldview of its creator and distributes it to anyone who asks.
Educators are concerned about AI - I just completed an AOTA survey about this yesterday - but we are not collectively attuned to the real problem. ChatGPT, Gemini, Claude, Grok - pick your favorite - are not the problem. Those big models, for all their flaws, have guardrails, alignment teams, and reputational risk. Even though they hallucinate, they self-correct, or at least try. They are, in a rudimentary and imperfect way, moderated.
The real danger lies in locally created, boutique AI systems: fine-tuned models built by a single clinician from nothing more than their own blog posts, private theories, and preferred explanations of human behavior. Those models don’t have guardrails. They don’t have error correction. They don’t know the difference between a research-backed intervention and a clinician’s pet idea. They simply echo whatever the creator believes, often with more certainty than the creator ever would.
Imagine what this looks like in practice.
A therapist with a strong personal belief that reflex integration is the master key to all developmental challenges creates their own “NeuroIntegrationBot.”
Another clinician convinced that the gut microbiome explains 80% of pediatric dysregulation launches a “GI–Behavior Connection AI Coach.”
Someone deeply committed to polyvagal explanations for everything from homework avoidance to picky eating releases a “Nervous System Parenting Guide.”
A handwriting specialist builds “GraspGPT,” a model trained on thirty blog posts about pencil grips, visual-motor skill hierarchies, and core activation.
The sensory-integration universe, which already struggles under the weight of uneven, evolving evidence, spawns a dozen “Sensory Companion” bots that confidently prescribe deep-pressure protocols for every problem under the sun.
And adult rehab won’t be immune either.
We will soon have trauma-healing bots, motor-relearning bots, pelvic-floor advice bots, and “executive function mastery” bots that take one clinician’s favorite framework and stamp it onto every user who logs in.
Each of these microsites will be polished, with a color palette, merch, and a tiered subscription service. Each will promise to “decode,” “reframe,” or “unlock” something - and each will produce a constant stream of algorithmically confident advice that is at best half true and at worst dangerous.
This is not hyperbole. This is the direction we’re already heading.
Once a clinician has a monetizable worldview, the bottleneck is no longer writing the content - it’s scaling it, and AI removes the bottleneck entirely.
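To make that removed bottleneck concrete, here is a minimal sketch of what such a boutique bot amounts to under the hood. It assumes the OpenAI Python SDK; the bot name, the blog_posts folder, and the persona text are hypothetical stand-ins, and a real product would only add a payment page and a prettier front end.

```python
# Minimal sketch: how one clinician's blog archive becomes a confident "expert" bot.
# Assumes the openai Python package (>= 1.0) and an API key in OPENAI_API_KEY.
# POSTS_DIR, the bot name, and the persona text are hypothetical examples.
from pathlib import Path
from openai import OpenAI

POSTS_DIR = Path("blog_posts")  # one clinician's archive of posts, as .txt files

# Concatenate the creator's worldview into a single system prompt.
# (A bigger archive would be chunked or retrieved, but nothing about that step checks accuracy.)
corpus = "\n\n".join(p.read_text() for p in sorted(POSTS_DIR.glob("*.txt")))
system_prompt = (
    "You are NeuroIntegrationBot, a warm and confident pediatric guide. "
    "Answer every question using ONLY the philosophy below.\n\n" + corpus
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str) -> str:
    """Return an answer that echoes the creator's framework - no citations, no caveats."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask("Why won't my 7-year-old do homework?"))
```

Nothing in that pipeline distinguishes a research-backed claim from a pet theory; the model simply restates whatever the prompt contains, with the fluency of an expert.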
This is where my professional unease begins. A chatbot doesn’t just echo a person’s ideas; it makes them feel authoritative. It erases the line between “my personal theory about child development” and “an expert guidance system parents should trust.” The user can’t tell whether they’re talking to the collective findings of a field or the automated projection of one person’s convictions.
Occupational therapy is particularly vulnerable to this phenomenon because our scope is wide enough to contain a multitude of knowledge areas - motor, sensory, cognitive, emotional, environmental, relational. That breadth invites interpretive drift, and the drift becomes monetizable the moment a clinician discovers Wix, a cute pastel Canva template, and an API key.
I’m not opposed to creativity, and I am an entrepreneur myself. I know that innovation happens at the edges - but I am very aware of what happens when highly motivated practitioners, operating in a profession that already undervalues rigor, use AI to amplify their ideas without any mechanism to filter, critique, or contextualize them.
We are drifting into a future where private frameworks masquerade as clinical consensus and where small AI models distribute strong opinions dressed up as therapeutic guidance, and we are doing it without realizing how epistemically fragile the profession has become.
So where does that leave us?
Ironically, academia helped create this problem.
We trained generations of practitioners to memorize facts but never to question the origin of those facts. We taught evidence-based practice (EBP) as a citation method, but we didn’t teach students to interrogate why certain explanations appeal to people, how professional narratives form, or what happens when science, commerce, and charisma collide.
I think that academia can also fix it.
At RIT, this is exactly where our attention is focused. Not on producing graduates who merely “know the evidence,” but on producing clinicians who understand how knowledge is constructed, distorted, weaponized, commodified, and now automated. We need clinicians who can look at a polished microsite with a “NeuroBot” and say, with intellectual clarity, “This is a worldview; this is not evidence.”
We need to teach students how to track the genealogy of ideas. We need to teach them how to identify monetized narratives. We need to teach them how AI amplifies epistemic drift. We need to teach them to analyze who benefits from certain explanations and why.
We also need to find a way to police this as a profession. I do not have those answers yet, but I know the need is real.
If the profession is going to keep its footing over the next decade, it won’t be because we doubled down on traditional EBP assignments. It will be because academic programs rebuilt the profession’s immune system and taught students not just what to know, but how to recognize knowledge and how to spot its counterfeits.
The counterfeits are coming, and they will be beautifully designed. They will be confident and scalable, and if you don't know better, you will be proud of your plucky students for what they create.
However, if we don't train our next generation of practitioners to see through them - or, better yet, to develop them responsibly - they will redefine what “OT knowledge” even is.
If we want a different future, this is where the work starts. Worry less about ChatGPT. Worry more about the localized micro “ChatXXX” bots coming to your social media feeds very soon.
Once these micro-models exist, they won't just reach dozens of parents or people desperate for solutions; they will reach thousands of people overnight. Each person who logs in and asks questions will be convinced they are hearing from an expert-informed AI model.
They won't know that it is really just a single clinician's unexamined theory or monetized worldview, automated and amplified.
This is why academia can’t treat epistemology like a luxury topic anymore. It’s now clinical risk management.
