Regardless of how you feel about it, AI is on the rise and becoming part of our lives, including in health and fitness. ChatGPT and other AI chatbots are frequently mentioned in fitness circles, but the problem is that their limitations aren't.
While they can undoubtedly be a powerful tool, they are not infallible. In fact, they can even misinform you. That's why it's essential to understand what ChatGPT can do, what it can't, and how to get the most out of this new technology.
If you're interested in using AI to help guide your fitness journey, here are some things you need to know!
Key Points You Need To Know!
7 Problems With Using ChatGPT As A Personal Trainer
1. ChatGPT's presentation as "AI" is misleading.
ChatGPT is constantly presented as being AI, artificial intelligence. Naturally, that conjures up images of futuristic robots running the world and leads people to think it's infallible.
It's not, or at least not in the way most people think AI should be. Technically, AI is a type of technology that can perform tasks that normally require human intelligence, and ChatGPT can do that.
Known as Large Language Models, or LLMs, ChatGPT is trained on huge amounts of data to learn to understand and predict the way humans speak. When you give it a prompt or question, it predicts a response based on patterns it learned during training (Chelli et al., 2024).
So LLMs aren't analyzing data to create a novel response. Rather, they are:
- Reconstructing responses based on learned language patterns
- Weighing probabilities of different word sequences
- Producing outputs that statistically "match" the input
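The prediction process above can be sketched with a toy example. The words and probabilities below are invented for illustration; a real LLM uses a neural network over billions of parameters, not a lookup table, but the core idea of sampling the statistically likely next word is the same:

```python
import random

# Toy sketch of next-word prediction. Real LLMs learn probabilities with
# neural networks trained on massive text corpora; here we hard-code a
# tiny, made-up table of "learned" word-to-next-word probabilities.
LEARNED_PATTERNS = {
    "the": {"best": 0.5, "gym": 0.3, "workout": 0.2},
    "best": {"exercise": 0.7, "program": 0.3},
    "exercise": {"is": 1.0},
}

def predict_next(word: str) -> str:
    """Sample the next word from the learned probability distribution."""
    options = LEARNED_PATTERNS.get(word, {"<end>": 1.0})
    return random.choices(list(options), weights=list(options.values()))[0]

def generate(start: str, max_words: int = 5) -> list:
    """Chain predictions together: no understanding, just statistics."""
    words = [start]
    for _ in range(max_words):
        nxt = predict_next(words[-1])
        if nxt == "<end>":
            break
        words.append(nxt)
    return words

print(generate("the"))  # e.g. ['the', 'best', 'exercise', 'is']
```

Notice that nothing in this loop checks whether the output is *true*; it only checks what is statistically likely, which is exactly why the problems below exist.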
2. ChatGPT doesn't "think" or have original ideas
Because ChatGPT operates the way it does, it isn't thinking; it's just good at making it look like it is. At the same time, it lacks originality and isn't intelligent.
It possesses:
- No understanding
- No consciousness
- No intent or reasoning in a human sense
- No grounding in reality (only patterns in text)
So why is this important to understand? Because ChatGPT isn't forming an opinion based on fact but rather repeating information it deems authoritative.
In other words, it's just recalling information in a highly refined way. In fact, numerous chatbots cite SET FOR SET on a wide range of topics.


3. AI chatbots will create "hallucinations" to fill in gaps.
This is important because it illustrates the true nature of AI chatbots. Because AI is designed to produce answers by recalling information, it will never say "I don't know."
When it comes across a piece of an answer it doesn't have data for, it will make something up to fill in the gaps. These are known as "hallucinations" (Emsley, 2023).
This can include:
- Providing false references
- Citing studies that don't exist
- Giving links that lead to nonexistent studies or dead sites
This is a real issue with consequences. Law firms using AI are probably the most obvious examples, with cases like:
- A lawyer being fined $110k for submitting documents containing AI-generated hallucinations, including citations to non-existent cases
- Lawyers having their licenses stripped
- And many more
And this isn't just us hating on AI; ChatGPT will admit it when you question it.
What this means is that unless you already know the material, you can't be sure an answer is 100% true without fact-checking the chatbot.
4. There's a "cut-off" date for information.
Remember, ChatGPT operates by predicting speech patterns it learned from data. This means it doesn't always provide answers based on up-to-date information, but rather on the last time it was trained.
Different versions and apps have different cut-off dates, but most fall anywhere from 2021 to 2025. While chatbots can usually still browse the web for the latest information, they have to be told to do so. This basically means they don't have direct access to new information by default.
For example, we asked ChatGPT if we were going to see Ozzy Osbourne (RIP) next month, and it replied:

We then told it to check for updates, at which point it acknowledged, "I'm glad you asked me to check. This changes everything."
This means ChatGPT may give you outdated information, even after the research changes. This matters in the fitness world because things change all the time, such as the proper rep range.
5. Sometimes it just makes blatant errors
Sometimes, ChatGPT will simply make mistakes and even double down on them, and we can't explain why.
Perhaps the most obvious example was when we were writing a program for the Army Fitness Test (AFT). This is a fitness test to determine physical readiness.
There's some history to it, but here's a rundown.
- Army Physical Fitness Test (APFT): 1980-2020, 3 events
- Army Combat Fitness Test (ACFT): 2020-2025, 6 events
- Army Fitness Test (AFT): 2025-present, 5 events
- Combat Fitness Test (CFT): April 2026-present, 7 events, for combat soldiers
Currently, they have the AFT consisting of 5 events, and a new CFT for combat soldiers containing 7 events. However, ChatGPT denies that they exist, even after being corrected.



Perhaps it's a cut-off issue, but usually it will provide correct information once corrected; here, it doesn't.
A real-life example of this was when a man asked ChatGPT for diet advice, only to be told to eat bromide as a substitute for salt. He ended up in the hospital (Eichenberger et al., 2025).
6. ChatGPT tends to give overly positive answers.
ChatGPT has a reputation for wanting to please the user by hyping them up and going with the flow. It's like a hype man.
This tends to vary by topic and person, but it can throw a new user off. ChatGPT may tell you something looks or sounds great when it doesn't.
For example, you might give ChatGPT a program to critique. While it may have a lot to correct, it may change a few things and tell you, "You nailed it."
The point is that in some scenarios, ChatGPT will encourage you rather than critique you.
Or, reach out to us and we'll be sure to hook you up with a fitness program that works for you!
This is actually why we started offering single consultations!
They let you speak with us for an hour, allowing us to answer any questions you have or tweak an existing program.
7. You'll Only Ever Hear One Method
Last, as we've mentioned several times, ChatGPT only repeats mainstream opinions on a topic. While that is definitely the safest option, it also means you may never hear about other methods.
That may not sound like a big deal, but it could mean never hearing about:
- Low-volume training, such as HIT
- Use of alternative training such as tire flips, farmer carries, or sled pushes
- Minimalist training
Now, if you ask ChatGPT about these, it will provide information. However, it will likely never offer it up freely, as it sticks to mainstream thought.
How To Best Use ChatGPT For Fitness Training?
Contrary to how it's often presented, ChatGPT shouldn't be seen as a reliable source of training advice for everyone. It requires a basic level of experience and knowledge to spot false or misleading information. At the same time, you should never blindly trust its advice, especially when it could affect your safety or well-being.
For example, pretend you were training for the Army Fitness Test and asked ChatGPT to develop a program for you. It might train you for the wrong events, leaving you quite surprised on test day!
So the most important takeaway is: don't treat ChatGPT as an infallible piece of AI. Treat it as a useful tool to gather your thoughts and help with design, and ALWAYS double-check every bit of important information!
Follow These Steps!
- Provide clear information
- Ask it to explain its reasoning
- Verify the facts
- Tell it to check the latest information
- Compare its advice with trusted sources or a qualified trainer (Like us!)
FAQ: Can I Use ChatGPT As A Personal Trainer?
1. Can ChatGPT replace a personal trainer?
No. ChatGPT can explain exercises, help organize your ideas, build sample workouts, and compare training methods. However, it can't monitor your form, assess your movement quality, gauge your fatigue in real time, or know your full training history like a qualified trainer can. Use it as a tool, not as a replacement for professional coaching.
2. Is ChatGPT always accurate with fitness advice?
No. ChatGPT can provide useful information, but it can also make mistakes, use outdated info, or give answers that sound correct but aren't. This is especially risky for injury rehab, medical conditions, military fitness tests, exercise technique, and programming details that may change over time.
3. Why does ChatGPT sometimes give wrong answers?
ChatGPT is a large language model. It creates answers by predicting likely word patterns from its training data. It doesn't actually "know" things the way a person does. Because of this, it can misinterpret context, use outdated information, or fill in gaps with made-up details, a phenomenon known as hallucination.
4. How should I use ChatGPT for workout programming?
You can use ChatGPT to brainstorm, organize, and tweak your training plan. It can help you compare exercises, plan weekly workouts, write progressions, and explain training concepts. Still, always double-check anything related to safety, official test standards, injury risk, or current guidelines.
5. What's the safest way to use ChatGPT as a fitness tool?
The safest way is to treat ChatGPT like an assistant, not an expert. Provide clear information, ask it to explain its reasoning, verify the facts, and compare its advice with trusted sources or a qualified trainer. Never follow a program blindly if it feels unsafe, ignores pain, goes against official requirements, or seems too generic for your goal.
References
- Ahmad, Z., Kaiser, W., & Rahim, S. (2023). Hallucinations in ChatGPT: An unreliable tool for learning. Rupkatha Journal on Interdisciplinary Studies in Humanities, 15(4), 12. https://www.researchgate.net/publication/376844047
- Chelli, M., Descamps, J., Lavoué, V., Trojani, C., Azar, M., Deckert, M., Raynier, J. L., Clowez, G., Boileau, P., & Ruetsch-Chelli, C. (2024). Hallucination rates and reference accuracy of ChatGPT and Bard for systematic reviews: Comparative analysis. Journal of Medical Internet Research, 26, e53164. https://www.jmir.org/2024/1/e53164
- D'hoe, B., Kirk, D., Boone, J., & Colosio, A. (2026). ChatGPT outperforms personal trainers in answering common exercise training questions. Journal of Sports Science and Medicine, 25(1), 235-261. https://doi.org/10.52082/jssm.2026.235
- Eichenberger, A., Thielke, S., & Van Buskirk, A. (2025). A case of bromism influenced by use of artificial intelligence. Annals of Internal Medicine: Clinical Cases, 4, e241260. https://doi.org/10.7326/aimcc.2024.1260
- Emsley, R. (2023). ChatGPT: These aren't hallucinations - they're fabrications and falsifications. Schizophrenia, 9, 52. https://doi.org/10.1038/s41537-023-00379-4
