Terry Gerton You’ve just published a really important notice at the VA. What prompted you in the Office of the Inspector General to take an early look at how the Veterans Health Administration is using generative AI?

Cheryl Mason Well, Terry, the OIG has been looking at AI in the department since late 2023, and this includes AI chat tools for both clinical care and documentation. So this wasn’t prompted by allegations or complaints; this is just part of the normal oversight that we are charged to do under the statute. What we really wanted to do was make sure we understood how and in what clinical settings these tools are being used, right? So, again, it goes to our oversight: understanding the environment the clinicians and other staff are working in and how they use these tools to support and enhance their care for veterans.


Terry Gerton And how in this case are they using the chat AI tools to support clinical diagnosis?

Cheryl Mason From our review, it looks like they are using the chat tools in general and applying them to cases. How they use them specifically is a little tricky because they all use them differently, which is part of our concern. But the big concern here is that however they’re using them, in whatever manner, AI hallucinates. There’s research on this, and there have been many, many legal cases on this. So when you use an AI chat tool in clinical documentation, if the process is hallucinating, that can have an impact on patient diagnosis and management. And that’s why it’s so important to have a person involved, which, according to the department’s recent press release, they are doing.

Terry Gerton You made this announcement as a preliminary advisory. Why did you take that step instead of waiting for the whole report and assessment to come out?

Cheryl Mason So our PRAMs, our preliminary advisory memos, are really put out when we have concerns about something happening or a situation we’re seeing, whether it’s in the inspection part of the house or the audit part of the house, that we need to get out quickly. We didn’t issue recommendations in it, but we see the need for NCPS and VA to communicate and coordinate. So we really want the USH, the undersecretary for health, to take a look and really evaluate the need for integration of AI-related risk monitoring. Our concern here goes to how they’re qualifying the risk monitoring of using AI and whether they’re categorizing it the right way. In our viewpoint, when there are challenges or issues with the AI and chat tools, it should be listed as a high-impact issue, which would raise the level of attention paid to it and put it at higher risk. Currently, VHA and VA do not have it categorized as high impact. So our concern is there’s a lack of control or oversight here on the AI tools, the chat tools in particular.

Terry Gerton One of the acronyms that you included in that last response was NCPS, which is the National Center for Patient Safety. Talk to me about who that is inside the VA architecture and what role they should be playing here as AI might roll out into clinician practice.

Cheryl Mason So patient safety is extremely important in the department. The patient safety experts track and look at trends, they can intervene and alert on situations, and they can educate the staff on risks. These are foundational functions of the patient safety program, which has to look at different ways of operating around patient safety. And that’s where our concerns come in: problems with the AI chat tools are not listed as high-impact issues, and the use of an AI tool is not being tracked as a patient safety issue.

Terry Gerton Speaking with Cheryl Mason. She’s the Inspector General for the Department of Veterans Affairs. So with that as background, let’s go back into sort of an organizational assessment here, because VA has been standing up new AI policies over the past year, and yet what you found here indicates that perhaps not everybody who ought to be at the table has been at the table, especially when it regards deploying AI in cases of patient safety. Are there new guardrails that you think are necessary, or new processes, to make sure that VA really takes a comprehensive approach as it’s putting new AI policies into place?


Cheryl Mason You know, that’s a great question, Terry. I really think the biggest issue here is how it’s categorized. Classifying the tool the right way in the VHA world will increase the level of oversight and tracking on it, and that will ensure that near misses, if there are any, can be categorized and tracked within the patient safety program. And, you know, VA is changing very quickly, as you noted, using a lot of different tools, and we are finding that in some cases they do have the right people at the table. It’s just that for some of the things that have been operating for a while, they’re going back to regroup, and that’s why this PRAM, this preliminary report, was issued: to get the word out that this is another area they need to take a look at.

Terry Gerton And who is the they here when it comes to properly classifying these kinds of AI tools?

Cheryl Mason Well, it would sit in two places, actually three. The top, of course, would be the secretary, but then the undersecretary for health, as well as the CIO, the assistant secretary for IT. Currently, it’s the deputy secretary who’s doing those duties. So those three people would really have the primary duty: the undersecretary for health with the patient safety issues, and then the OIT office, the Office of Information Technology team under the deputy secretary, should also be looking at it from their side. So it’s a dual-tracking situation.

Terry Gerton Does this kind of preliminary advisory get out to the clinicians and doctors now so that while the department is thinking about what policies or processes it wants to put into place, the practitioners know about these risks and can do something about them?

Cheryl Mason Yes, exactly. And thank you for reminding me about that. That’s one of the issues with the PRAMs and why we’re issuing them, because we can get those out to the clinicians. This goes to the undersecretary, but then it is released to the public as well as internally in the department. And just as recently as January, I was actually visiting in the field, and one of the facilities I was visiting noted that one of our previous PRAMs provided very helpful information, so they could react to it and think about it differently. So, yes, this also gets the information out to the field quickly. We don’t have to wait for the full report. We can give a heads up and alert, say, hey, pay attention to this; be aware that we might need to pay a bit more attention to the helpful tool that AI can be, but make sure there’s human engagement.

Terry Gerton I know the full report and the official recommendations will come out later. But what happens now inside VA? What do you want the key decision makers to be thinking about? What changes and immediate actions should they be putting in place while your oversight continues?

Cheryl Mason Well, really, the focus is for the undersecretary for health to take on that conversation, in coordination with the patient safety team, to make sure we’re looking at integrating AI-related risk monitoring into the existing patient safety programs they have. They already have a great patient safety program for risk; just adding this one to the list would be a suggestion that we have. Again, we didn’t make recommendations in the PRAM, but it is a suggestion we would like them to take a look at. And, you know, when I speak with the secretary and other leaders in the VA, the undersecretary for health, those are things we talk about and why we are issuing this. That’s part of our job: give you a heads up and then hopefully you react.

Terry Gerton So this isn’t a stop sign when it comes to AI deployment, but sort of a flashing yellow, making sure that they’re doing some additional thinking about process and rules.

Cheryl Mason That is well put, Terry; that’s exactly what it is. It’s a flashing yellow, it’s a caution. AI can be a great tool and should be used as a tool, and that is what we’re advising the department: use the tool with that flashing yellow light, put in some guardrails around it, make sure it’s being tracked, make sure it’s being classified the right way, as a high-risk item if appropriate, and then use it with human engagement.


Copyright
© 2026 Federal News Network. All rights reserved.