Friday, April 17, 2026

Class Discussion on AI in Healthcare

This week in class we had several projects on the benefits and risks of using AI, and I thought what I found while researching the subject might be of interest.

As HIM students, none of us are too keen on AI in one way or another, having watched it already take over jobs. Thus far, though, in my field and others, it's just being used as a tool, not a replacement.

Part of our grade is a discussion on the subject, and I'm not going to share what others said, just my own stuff. 

First will be my main post, then my reply to a friend who commented (but not her reply).


First, I just want to say this - IMO, there are two areas of healthcare where AI, in its present state, can be of the most benefit: diagnostic accuracy in imaging, and new drug discovery through designing new molecules. And yet both still make mistakes and need human evaluation behind them.

The discussion for the original post asked us to "pretend" we were health technologists evaluating the security, reliability, and authenticity of AI.

I can't promise this is the best thing I've ever written - my brain is pretty fried at this point, as this class is upping the ante with 3 projects this week and 4 next week, our last week. I barely have time to pee between both classes this week, but I'm taking more breaks today! 😂

_________________________________________________________

Hello Everyone!

In my role as a Health Information Technologist, I have been tasked with taking a more in-depth look at the role of AI in healthcare, specifically with regards to security, reliability, and authenticity (see AI Risks section). 

As exciting and promising as new technological innovations may be, there are considerations that will need to be evaluated before AI is implemented.

The decision regarding whether to use AI, and for what purpose, can be likened to the launch of a new, potentially disease-modifying drug: these always promise revolutionary outcomes in terms of clinical and financial value, which their manufacturers use to try to substantiate their exorbitant cost.

However, just like in the pharmaceutical industry, there is a difference between what happens in controlled clinical trials and what happens in real-world application, and there are differences of opinion regarding what constitutes overall clinical and financial value.

So as not to bear solely bad news regarding AI, I have compiled a quick risk-versus-benefit comparison of AI use in healthcare from an overarching perspective.

THE BENEFITS OF AI IMPLEMENTATION IN HEALTHCARE:

Clinical Value:  

  1. Greater diagnostic accuracy in imaging, leading to earlier diagnoses and treatment (HITRUST, 2023).
  2. Greater surgical precision (Chustecki, 2024).
  3. Faster data management, data mining, and data analysis, assisting both provider and patient in decision-making regarding risk factors, diagnoses, and treatment (HITRUST, 2023).
  4. Predictive analysis based on aggregated risk-factor, disease, and treatment information, including personal patient history, lab values, and biomarkers, combined with data on known disease biomarkers, treatments, and outcomes (HITRUST, 2023; Chustecki, 2024).
  5. Real-time aggregation of symptoms, lab values, and imaging to suggest diagnoses and treatments not previously considered, as well as faster cross-referencing of possible contraindications and drug interactions (HITRUST, 2023).
  6. More efficient clinical-trial data analysis (HITRUST, 2023).
  7. Assistance with mapping combinations of biochemical molecules to produce new molecular structures in the pharmaceutical and biotechnology industries (HITRUST, 2023; Chustecki, 2024).
  8. Faster and more accurate disease prevention, monitoring, and control in epidemiology (Chustecki, 2024).
  9. Virtual assistance and real-time monitoring devices for conditions like hypertension, diabetes, and sleep apnea, so that treatment adjustments can be made sooner (HITRUST, 2023).

Financial Benefits:

  1. Reduced labor costs and streamlined workflows (Chustecki, 2024).
  2. Reduced post-treatment costs through earlier disease detection and data on which treatment plans have the most effective outcomes, including which patients are most likely to be rehospitalized under one treatment versus another (Chustecki, 2024).
  3. Predictive total-cost-of-care analysis regarding treatment outcomes and the need for further treatment and/or complications of treatment (HITRUST, 2023; Chustecki, 2024).

Administrative Benefits:

  1. Streamlined, efficient workflow, scheduling, and real-time room/bed-number analysis (HITRUST, 2023).
  2. Automated templates for memos, letters, meeting minutes, newsletters, and even legal documents (HITRUST, 2023).
  3. Predictive modeling based on prior historical input, with adjustments for “what-if” scenarios (HITRUST, 2023; Chustecki, 2024).

 

THE RISKS OF AI IMPLEMENTATION IN HEALTHCARE:

Unfortunately, the three topics I was tasked with investigating - security, reliability, and authenticity - are also the biggest known risks of implementing AI in healthcare at present, and all three can have far-reaching implications across the clinical, financial, and administrative subcategories.

Reliability:

AI “hallucinations” or “misinformation”:

Essentially, this means that AI makes up what it doesn't know. This can happen in all areas of clinical data, but it is particularly concerning in imaging (Xia et al., 2026).

Real-World Examples: 

  1. A false positive in AI-enhanced nuclear medicine imaging: an AI-enhanced whole-body SPECT image suggested radioisotope uptake in regions of the body where there was no uptake and cancer was not present (Xia et al., 2026).

 

AI hallucination.jpg

Image Credit: Xia et al., 2026.

  2. A recent collaborative pilot study by the University of Massachusetts and Mendel asked two large language models (LLMs), GPT-4o and Llama-3, to create 500-word summaries of 50 medical notes, including patient histories and lab values; GPT-4o produced incorrect information in 21 of the 50 summaries, and Llama-3 in 19 of 50 (Deswal, 2024, Clinical Trials Arena).

Data Poisoning:

This refers to intentional sabotage of AI training/machine-learning inputs during development, internal data tampering, or cyberattacks that alter clinical data for nefarious purposes (Chen & Esmaeilzadeh, 2024).

Real-World Example:

Though no large-scale data-poisoning attacks have been reported, recent academic healthcare-security testing showed that as few as 100 to 500 poisoned data samples were enough to open a back door into an AI system (Abtahi et al., 2026).
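To make the idea concrete, here is a minimal, purely illustrative sketch (my own toy example, not from the Abtahi et al. study) of label-flip poisoning against a tiny nearest-centroid classifier - a handful of mislabeled training samples is enough to change what the model predicts:

```python
# Toy illustration of training-data poisoning via label flipping.
# A nearest-centroid classifier learns one "typical value" per class;
# flipping a few training labels drags a centroid and changes predictions.

def centroid(points):
    return sum(points) / len(points)

def train(samples):
    """samples: list of (value, label) -> {label: centroid}"""
    by_label = {}
    for value, label in samples:
        by_label.setdefault(label, []).append(value)
    return {label: centroid(vals) for label, vals in by_label.items()}

def predict(model, value):
    # Pick the class whose centroid is closest to the value.
    return min(model, key=lambda label: abs(model[label] - value))

# Clean data: "low" readings cluster near 1.0, "high" near 5.0.
clean = [(v, "low") for v in (0.8, 1.0, 1.2)] + \
        [(v, "high") for v in (4.8, 5.0, 5.2)]
print(predict(train(clean), 2.5))     # -> low

# Poisoned: an attacker relabels just two low readings as "high",
# dragging the "high" centroid down toward the low cluster.
poisoned = [(0.8, "high"), (1.0, "high")] + clean
print(predict(train(poisoned), 2.5))  # -> high
```

The same borderline reading flips from "low" to "high" with only two poisoned samples, which is the scale of attack the study above describes.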

It is also important to note that "AI hacks" can mean either using AI to hack into applications and systems, or hacking into applications and systems that use AI - it goes both ways.

Inaccurate AI-training material:

Speaking of training AI: AI is only as accurate as the historical information put into it. If that information is incomplete, inaccurate, or biased in any way - scientifically or socioeconomically/demographically - it will not yield accurate, reliable results (Chen & Esmaeilzadeh, 2024).

Real-World Examples:

  1. Current pulse oximeters, which sense blood oxygen through the skin, are known to have difficulty reading accurately on darker skin. As a result, AI analysis "undershot" hypoxemia in Black patients, who experienced undetected hypoxemia about three times as often as white patients (Norori et al., 2021).
  2. An AI algorithm that used healthcare costs and expenditures as its input concluded that Black patients are healthier and have fewer healthcare needs than white patients, because less money was spent on them by insurers (Norori et al., 2021).

(That analysis, of course, is absurd. If anything, it actually provides more evidence of racial bias and disparity in healthcare access and coverage between people of color and white people, with white people receiving more.)

 

Authenticity:

Lack of legal regulation and governance:

At present, no formal legal precedent specifically governs AI. There also appears to be some confusion about whether HIPAA covers AI data; some references say it does, others that it does not, or at least not fully. Thus, until the U.S. Department of Health and Human Services (HHS) and the Office for Civil Rights (OCR) clarify, we can assume the same HIPAA rules apply; for any other governance issues, it is imperative that we, as an organization, set specific AI policies regarding accountability, data integrity, and ethical considerations, as well as privacy and security (Morley et al., 2022).

HHS does, however, have a task force on the issue, and it has published guidelines for governance (Grindle, 2024).

Lack of accountability:

At present, there are few, if any, track-and-trace features for AI-training input, nor clarity about who is responsible for which data. Additionally, consequences for mistakes or violations have not been established (Habli et al., 2020).

Authentication features can be easily hacked or faked, including deepfake identity impersonation, resulting in manipulation of data and in false information being sent from, to, or regarding individuals or groups:

The American Hospital Association issued a warning regarding all of the above in December 2025 (AHA, 2025).

Privacy and Security:

Large language model (LLM) AI training involves using patients' existing PHI for machine learning:

This, of course, means current patient data must be used for training, and no legal consent framework yet exists (Chen & Esmaeilzadeh, 2024).

Large language models are easily hacked and manipulated, and they lack alert/response protocols for hacks:

Again, as previously mentioned, recent testing of a healthcare AI system showed that small batches of poisoned samples could gain back-door entry to LLM data (Abtahi et al., 2026).

Perhaps because of the lack of (or only partial) adoption of AI, I was unable to find any specific real-world instances of generative-AI data within a hospital itself being attacked - only AI being used as an external tool by hackers, in the form of malware, deepfakes, "shadowing," and advanced phishing, including the 2024 cyberattack on Change Healthcare and the 2026 attack on Stryker, a medical device manufacturer (Arctic Wolf, 2024).

And again, as mentioned above, the American Hospital Association issued a warning in December 2025 regarding “deepfake” impersonations of staff (AHA, 2025).

This illustration (Liu et al., 2018) provides a good overarching visual of where the vulnerabilities lie in generative-AI systems, and at which phase - access, processing, or retrieval. It also gives examples of general defensive techniques that can be used.

Illustration of cyberthreats.png

Image Credit: Liu et al., 2018.

Conclusion:

In conclusion, beyond the high-level risk/benefit analysis I provided, another consideration is the very high cost of implementing AI, as well as any add-ons that may be required, such as upgrading from a LAN to a VLAN or adding blockchain. Though labor costs might be saved by reducing the need for clerical staff, more IT staff - and higher salaries for real-time monitoring and review - might be needed. Thus, our actuaries might want to run a total-cost analysis to see whether there are any cost offsets.

Additionally, we must be careful not to become over-reliant on AI-generative devices due to the issues mentioned above. Thus, an evaluation of the results generated by AI should be part of the governance policies that we create. 

In the end, the question is this: Is current-state AI ready to accommodate our needs, and is it worth the risk?

Thank you for your time!

 

References:

Abtahi, F., Seoane, F., Pau, I., & Vega-Barbas, M. (2026, January 23). Data poisoning vulnerabilities across health care artificial intelligence architectures: Analytical security framework and defense strategies. Journal of Medical Internet Research, 28, e87969. https://doi.org/10.2196/87969

American Hospital Association. (2025, December 3). Resources available to help detect malicious AI schemes. https://www.aha.org/news/headline/2025-12-03-resources-available-help-detect-malicious-ai-schemes

Arctic Wolf. (2024, April 10). The top 18 healthcare industry cyber attacks of the past decade. https://arcticwolf.com/resources/blog/top-healthcare-industry-cyberattacks/

Chen, Y., & Esmaeilzadeh, P. (2024, March 8). Generative AI in medical practice: In-depth exploration of privacy and security challenges. Journal of Medical Internet Research, 26, e53008. https://doi.org/10.2196/53008

Chustecki, M. (2024, November 18). Benefits and risks of AI in health care: Narrative review. Interactive Journal of Medical Research, 13, e53616. https://doi.org/10.2196/53616

Deswal, P. (2024, August 7). Hallucinations in AI-generated medical summaries remain a grave concern. Clinical Trials Arena. https://www.clinicaltrialsarena.com/news/hallucinations-in-ai-generated-medical-summaries-remain-a-grave-concern/

Grindle, D. (2024, July 10). Do you know the risk? The urgent need for data security in healthcare AI. HHS 405(d) Task Force. https://405d.hhs.gov/post/detail/3900fcc7-08dd-4747-a1bb-2eb001dae582

Habli, I., Lawton, T., & Porter, Z. (2020, January 7). Artificial intelligence in health care: Accountability and safety. Bulletin of the World Health Organization, 98(4), 251–256. https://doi.org/10.2471/BLT.19.237487

HITRUST. (2023, November 23). The pros and cons of AI in healthcare. https://hitrustalliance.net/blog/the-pros-and-cons-of-ai-in-healthcare

Liu, Q., Li, P., Zhao, W., Cai, W., Yu, S., & Leung, V. C. M. (2018). A survey on security threats and defensive techniques of machine learning: A data driven view. IEEE Access, 6, 12103–12117. https://doi.org/10.1109/ACCESS.2018.2805680

Morley, J., Murphy, L., Mishra, A., Joshi, I., & Karpathakis, K. (2022, January 31). Governing data and artificial intelligence for health care: Developing an international understanding. JMIR Formative Research, 6(1), e31623. https://doi.org/10.2196/31623

Norori, N., Hu, Q., Aellen, F. M., Faraci, F. D., & Tzovara, A. (2021, October 8). Addressing bias in big data and AI for health care: A call for open science. Patterns, 2(10), 100347. https://doi.org/10.1016/j.patter.2021.100347

Xia, M., Bayerlein, R., Chemli, Y., Liu, X., Ouyang, J., Lin, M., El Fakhri, G., Badawi, R. D., Li, Q., & Liu, C. (2026, February 2). On hallucinations in artificial intelligence-generated content for nuclear medicine imaging (the DREAM report). Journal of Nuclear Medicine, 67(2), 166–174. https://doi.org/10.2967/jnumed.125.270653

My reply to a friend who commented - we are also expected to reply, and graded on our replies.

____________________________________________



Hi @XXXX

Thank you! So glad you could sift through my wordiness for the point, you nailed it!

The racial disparities really jumped out, right? The assumptions AI made were absurd. This is why you still need human evaluation - and empathy! 

That's another issue with AI, in my opinion - it's math-based and thus gives one finite answer, when things can be multifactorial. It doesn't know what to do with nuance, or how to think critically through things that might need further study - so it definitely has no business assessing socioeconomic data. It can aggregate the data quickly, sure, but let the humans handle the interpretation - humans with empathy. It can use empathetic language, but it doesn't really "get it."

As for the "excitement" piece - yep, in my previous life I transcribed interviews between big pharma, insurance companies/PBMs, and key-opinion-leader clinicians, both nationally and internationally, before new products launched, for an independent pharmaceutical research company (only to watch AI slowly take over my job until there was little left but spillover every few months).

During that time, there was a trend: doctors became very excited about the newest, shiniest object, only to find out it was just another high-priced "me-too" drug doing exactly the same thing - something the insurance people often had to point out to them from the details of the clinical trials.

(FYI, the reason the price doesn't come down despite flooding the market has to do with contracting and rebates; our system is very messed up here in the U.S. In fact, we pay more for pharmaceuticals than anyone else in the world - we're keeping these pharma companies fat and greedy - but that's another post.)

Pharma often tries to justify the price by saying "Oh, but it's oral instead of self-injection" or "It's a new mechanism of action."

Insurance is like, "Cool - but the results aren't any different; in fact, the clinical-trial results are noninferior or even slightly inferior to the injectable, and you want to charge a premium for it? Denied - or rather, we'll approve it, but patients will have to step through 4 other drugs in this category first if you charge this price."

The other thing pharma tries to put a premium on is a new indication for a drug, like "Oh, we can use this in a new disease state now."

Insurance is like, "Great, so you'll have more populations taking your product and making you more money that way, so you don't need the upcharge. Denied at that price. It'll be stepped behind the others if you do."



The same is true with AI - "new" doesn't mean better. "Faster" doesn't mean better, either.



In fact, while researching for our AI-in-healthcare project, I was unable to find a single large-scale study on the accuracy of AI in clinical documentation (which is what's taking over my job) - only that it was faster and reduced doctors' cognitive load, workload, and burnout (Hudson et al., 2025; Albrecht et al., 2025; Stultz et al., 2025).

Hooray - but you know what also saves doctors time on clinical documentation?

Transcriptionists.

It was only when y'all went to cheap offshore transcription you had to edit yourselves, or to self-editing voice recognition, that you put all that work back on yourselves.



What made me feel better about this taking over my prior job was an article from University of Colorado Health, in which a doctor was happy he had less workload - but the tool still made errors, like hearing "nitroglycerin" instead of "nitrofurantoin" for a patient with a UTI (Neff, 2026).

Um ... one is a cardiac drug and the other is an antibiotic almost exclusively used for urinary tract infections.

So much for "context learning" software!

Transcriptionists are trained to know the difference, and if the doctor genuinely does misspeak, you flag it. Most often this happens when a doctor dictates that a patient is allergic to a drug but then later prescribes it. AI is supposed to be trained to catch that from context, but clearly, it does not.

Transcriptionists can, though, because we go "Wait a minute, didn't he just say the patient was allergic to that under allergies? Let me check." And then you flag it.

So that's great it's faster - but what about accuracy?!?

I can tell you from 27 years of personal experience that only about 50% of doctors actually read their own notes, or respond to flags, before they sign them - so there's already an overreliance on other humans AND technology by physicians. How much worse will that become?

As for cheaper labor: Abridge, the clinical-documentation AI being adopted by many reputable healthcare systems, isn't cheap. The big upfront price isn't transparent, and then it's $2,500 per month, per physician (Reeves, 2026).

So let's say you had 50 physicians and 3 transcriptionists. That's about $120,000 a year in salary - less for contractors, with no benefits to pay.

If you had 50 physicians on Abridge at $2,500 per month each, that's $125,000 per MONTH - about $1.5 million per year - and that's NOT including the big upfront fee.
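Spelling out that arithmetic (the staffing counts and the per-physician fee are the figures assumed in this post; the individual transcriptionist salary is my rough estimate so that three cost about $120k):

```python
# Back-of-the-envelope cost comparison. Figures are the assumptions
# from the text: 50 physicians, 3 transcriptionists, $2,500 per
# physician per month for Abridge; the upfront fee is excluded.

physicians = 50
abridge_monthly_per_physician = 2_500

transcriptionists = 3
transcriptionist_salary = 40_000  # assumed: ~$120k/year total for 3

abridge_annual = physicians * abridge_monthly_per_physician * 12
transcription_annual = transcriptionists * transcriptionist_salary

print(f"Abridge:        ${abridge_annual:,}/year")        # $1,500,000/year
print(f"Transcription:  ${transcription_annual:,}/year")  # $120,000/year
print(f"Ratio: {abridge_annual / transcription_annual:.1f}x")  # 12.5x
```

So under these assumptions the AI scribe runs about twelve times the transcription payroll, before the upfront fee.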

And your patients might get nitroglycerin instead of nitrofurantoin to treat a UTI, if a doctor doesn't read their notes, as per usual.

Also, there's actually a civil lawsuit just filed by 3 people who were not told that ambient software was "listening" to and auto-transcribing their private conversations with their doctor, with that information then stored in a cloud - so there are privacy and security issues as well (Alder, 2026).

Transcriptionists never, ever hear the patient - ever. We just transcribe what the doctor said later.

I could go on and on, like I usually do, but that's enough to get the gist. :)

We are far from "Captain's Log, star date 2450" or handheld whole-body scans that can diagnose what's wrong with you in seconds, like on Star Trek, as much as we wish we could.

Thanks for replying!

References:

Abridge. (2026). Clinician platform. https://www.abridge.com/platform/clinicians

Albrecht, M., Shanks, D., Shah, T., Hudson, T., Thompson, J., Filardi, T., Wright, K., Ator, G. A., & Smith, T. R. (2025). Enhancing clinical documentation with ambient artificial intelligence: A quality improvement survey assessing clinician perspectives on work burden, burnout, and job satisfaction. JAMIA Open, 8(1), ooaf013. https://doi.org/10.1093/jamiaopen/ooaf013

Alder, S. (2026, April 14). Lawsuit alleges AI platform illegally recorded patient-clinician conversations. HIPAA Journal. https://www.hipaajournal.com/lawsuit-ai-platform-illegally-recorded-patient-clinician-conversations/

Hudson, T. J., Albrecht, M., Smith, T. R., Ator, G. A., Thompson, J. A., Shah, T., & Shanks, D. (2025). Impact of ambient artificial intelligence documentation on cognitive load. Mayo Clinic Proceedings: Digital Health, 3(1), 100193. https://doi.org/10.1016/j.mcpdig.2024.100193

Neff, J. (2026, January 13). How an AI note-taking tool helps doctors focus fully on their patients. UCHealth Today. https://www.uchealth.org/today/ai-note-taking-tool-helps-doctors-focus-fully-on-patients/

Reeves, J. (2026, March 28). Abridge AI scribe review 2026: Pricing, accuracy, and limitations. VeroScribe. https://www.veroscribe.com/blog/abridge-review-2026
