Is ChatGPT Better Than Your Physical Therapist?

Researchers at the University of Virginia School of Medicine tested 50 doctors on a series of diagnostic case studies. The experiment allowed some to use ChatGPT, while others could only use their traditional clinical resources.

The group using ChatGPT had a median score of 76%, while the group using traditional resources scored 74%. So, despite this world-changing technology, doctors armed with AI were only slightly better at making diagnoses.
 
But here's the kicker…

They also tested ChatGPT, which got 90% of the case studies correct.
 
That’s right!  The machine beat both the doctors and the doctor-plus-machine team.

The lead researchers said,

“We were surprised to find that adding a human physician to the mix actually reduced diagnostic accuracy...”

The researchers called it “unexpected,” but maybe it shouldn’t be.

AI has read more case reports, journal articles, and PubMed abstracts than any human ever could. It's not tired, biased, or rushed. And even when armed with AI, the doctors in the study often fed it short, vague prompts, or only used it to confirm what they already thought.

So what does this mean for dealing with an injury? 

Are we better off feeding an LLM our symptoms and history to manage pain, rather than seeing a doctor or physical therapist?
 

Is ChatGPT a Better PT?

I brought this question to Kristen, one of our team's physical therapists. Her reply:

“Ok, unfortunately, my job is probably at risk. I asked ChatGPT a few questions, and it definitely gives a great jumping-off point for assessment for 70% of the population.”

Right now, Kristen really doesn’t need to fret.  

At some point, machines will probably provide better medical care than humans are capable of.

We’re just not there yet—for a couple of key reasons.


The Prompt Problem

One of the biggest issues with AI isn't the model; it's the user.

If you tell any AI, “I have a torn labrum—what should I do?” it's going to follow your lead. It will likely recommend seeing a doctor, but it won't challenge your assumption or perform an assessment. It'll just politely agree and write you a program for your “torn labrum.”

That’s called confirmation bias, a problem as old as human civilization. Only now we can confirm our biases with answers written like a doctoral dissertation.
 
Ask a better question, something like “What are common causes of pain in the front of the shoulder for active adults?”, and the response becomes far more accurate and useful.

AI certainly has its problems, but the biggest one seems to be the lack of skill among the people prompting it.

Computers Can’t “Feel”

Evidence-based practice combines current research with clinical expertise—and that expertise largely hinges on experience.

Research and expertise are equal slices of the EBP pie. You want to avoid both extremes: the old-school clinician stuck in their ways and the data-only practitioner who treats you like a walking piece of evidence.

For now, ChatGPT is the latter. If you provide it with a case where you know all the evidence, it’ll do a great job, or at least offer a reasonable starting point. But if you miss key details that require hands-on assessment, you’ll get a generic answer.

That’s because some things can only be felt, like the stiffness of a joint, the quality of a contraction, or the subtle differences in movement.

As Kristen explained:

“In PT school, we were told to feel a thousand shoulders before we’d know what normal feels like.”

That kind of experience can’t be scraped from PubMed. It’s what turns guidelines into personalized care.

And there’s also a psycho-emotional layer that no algorithm can fully capture. I’ve known plenty of athletes who had legitimate injuries, but their pain was also tied to burnout, perfectionism, stress, or simply life piling up.

The athlete grinding for a goal that just got derailed…
Skipping meals to “look the part”…
Trying to juggle kids, work, and a blown car radiator…

All of that affects recovery, and you can’t prompt an AI to understand it.

We Suck at Following Through

The last big reason physical therapists aren’t going anywhere?

Rehabbing an injury takes commitment, and most of us aren’t great at sticking to the plan.

For non-red-flag shoulder pain, a simple, progressive system that restores strength and range of motion while avoiding provocative activities, like Crossover Symmetry, solves the problem for most people. That kind of template is exactly what most injuries need.

The problem? You have to actually do it. 
 
When things don't feel that bad, we skip our exercises. Then we get antsy, promising to “take it easy” during that CrossFit workout or short run… right before doing the opposite.

The accountability that comes with not wanting to disappoint your therapist, or the guidance to accommodate your training despite an injury, is hard to replicate when the “coach” is a chatbot.

Case in point: I made myself a PT plan for a nagging Achilles issue. My success rate with ankle stretches and eccentrics? Let's just say it's… not high.

But if I had to report back to a PT every week, I’d be much more likely to stick to it.

Collaboration, Not Competition

As Kristen summed it up:

“I’m not a ChatGPT hater. So far, I’ve seen it calm people down much better than WebMD and give them a reasonable roadmap—but as with everything it does, it’s still generic.”

AI can help patients learn and come in more informed, but it can’t replace the hands, eyes, and intuition of a skilled clinician.

So come in curious. Bring your research. Then let your PT blend it with experience, context, and a touch of human empathy.

Right now, AI might help you understand your pain, but it’s lacking the tools to truly help you overcome it.

Originally published as Movement #281
