Occupational Therapy’s Real AI Problem, Part II: When a Chatbot Can’t See the Profession’s Own Ethical History
I recently wrote that occupational therapy’s real AI problem is not ChatGPT itself, but us.
More specifically, it is our profession’s vulnerability to confident, polished, automated explanations built on partial knowledge, private frameworks, and unexamined assumptions. I argued that the real risk is not that a machine will invent nonsense from nowhere, but that it will absorb and amplify the profession’s existing epistemic weaknesses.
Now we have Part II of this discussion.
I was reminded of this topic when I posed a straightforward question to AOTA’s new chatbot. I asked it: "Has there ever been any tension around how to operationalize the ethical principle of social justice in occupational therapy, and what would a violation of that look like in practice?"
The answer I received was plausible and polished, but so superficially informed that it was, in the end, incorrect. It discussed system barriers, inequities in access, conflicting values, organizational constraints, and the moral distress clinicians experience when institutions fail clients.
It sounded like an answer, but it wasn't one. Rather, it was a striking example of what happens when a chatbot competently summarizes one slice of a topic while missing the deeper structure of the issue - and that distinction matters.
The bot appears to have anchored its answer in recent practice literature on ethical tensions surrounding social inequities in occupational therapy. That literature is real and relevant, which is to its credit. Occupational therapists do encounter ethical tensions when restrictive policies, funding limitations, eligibility criteria, or institutional rules interfere with equitable service delivery. The article it referenced explicitly examined how practitioners navigate those tensions when attempting to address social inequities in practice.
But that is not the whole question I asked. At all.
I suspect other educators have watched students answer around a question without ever reaching the question itself. It reads like a memory dump of partially related points, offered without any real understanding of the concept being asked about, in the hope that the instructor will grant partial credit for at least knowing something about the topic. That is precisely what this felt like.
I did not ask whether clinicians feel tension when working in unjust systems. I asked whether there has ever been tension around how to operationalize the ethical principle of social justice in occupational therapy. That is a different and deeper question, because it asks not merely about practitioner distress, but about the profession’s own conceptual history.
And here the chatbot largely missed the point.
AOTA’s own ethics documents show that this has not been a settled matter. The 2020 Code of Ethics states plainly that Social Justice was added as a principle in 2010 (re-ordering concepts previously housed under Beneficence) and then combined with the Principle of Justice in 2015. The 2025 Code repeats that same historical note. In other words, the profession itself has already signaled, through revision, restructuring, and reframing, that “social justice” was not a stable or permanently self-evident ethical category. It was introduced, separated, and then folded back into Justice. That is not a trivial editorial detail. It is evidence of conceptual tension inside the profession itself - and in fact there is a whole story behind it.
If a chatbot cannot recognize that history or why that happened, then it may answer a narrow question while failing to register the profession’s own uncertainty about the principle being discussed. That is precisely the kind of failure I worry about.
The danger with these systems is not always hallucination in the crude sense. Sometimes the danger is a more refined kind of flattening. A chatbot can locate a contemporary article about practitioner experiences, summarize it fluently, and thereby create the impression that it has answered a broader historical and conceptual question - but in fact it has not. It has simply found a nearby lane and driven smoothly down it - as if an informed reader would not notice.
In this case, the missed history matters because the very operationalization of “social justice” in OT has long been unstable. Once a profession turns an aspirational moral concept into an enforceable ethical principle, difficult questions arise. What exactly counts as a violation? Is it discriminatory behavior? Is it inequitable allocation of resources? Is it failure to advocate? Is it participation in systems with unequal access? Is it disagreement with a particular ideological interpretation of justice? These are not minor details - they are the whole problem.
This is why I remain concerned about profession-specific chatbots.
They do not need to be wildly wrong to be misleading - they only need to be selectively right. They can summarize the literature that is easiest to retrieve, safest to state, or most aligned with current rhetorical habits, while bypassing the profession’s more uncomfortable historical and conceptual fault lines. In doing so, they present a cleaned-up version of occupational therapy to the user, one in which today’s language appears settled, coherent, and uncontested. In this case, it is none of those things. Would anyone on the AOTA Ethics Committee REALLY want to stand behind the answer that the AOTA ChatBot just gave me?
A strong answer from an AI system should have said something like this: yes, there has been tension both in practice and in the profession’s ethical development. Occupational therapists do confront inequities in daily work, but the profession has also struggled to define and position social justice itself, as shown by the fact that Social Justice - which relabeled many aspects formerly housed under Beneficence - was added as a separate principle in 2010 and then merged back into Justice in 2015.
Instead, the chatbot produced something smoother and safer. It answered a narrower question than the one I asked, and because it did so fluently, it invited trust.
That is the lesson.
The problem with AI in occupational therapy is not only that it may invent falsehoods. It is that it may quietly reproduce the profession’s own habits of conceptual imprecision, historical amnesia, and rhetorical overreach, then deliver them back to us in a tone of confident and polished authority. My earlier concern was that boutique or profession-specific AI tools would automate personal worldviews and partial frameworks. I would now add a companion concern: even official or quasi-official systems can automate institutional forgetfulness.
If we are going to use AI in this profession, then we need more than polished outputs. We need systems that can distinguish history from current rhetoric, dilemma from violation, and aspirational language from enforceable ethical standards. Otherwise we will not be using AI to clarify occupational therapy. We will be using it to launder our confusions.
And that, too, is an epistemic problem of our own making.