Occupational Therapy’s Real AI Problem, Part II: When a Chatbot Can’t See the Profession’s Own Ethical History
I recently wrote that occupational therapy’s real AI problem is not ChatGPT itself but us: more specifically, our profession’s vulnerability to confident, polished, automated explanations built on partial knowledge, private frameworks, and unexamined assumptions. I argued that the real risk is not that a machine will invent nonsense from nowhere, but that it will absorb and amplify the profession’s existing epistemic weaknesses. That brings us to Part II of this discussion. I was reminded of the topic when I posed a straightforward question to AOTA’s new chatbot: "Has there ever been any tension around how to operationalize the ethical principle of social justice in occupational therapy, and what would a violation of that look like in practice?" The answer I received was plausible and polished, but so superficially informed that it was, in effect, entirely wrong. It discussed system barriers, inequities in access, conflicting values, organizational constraints, and the ...