Nursing leaders urge caution as AI spreads in care

Hospitals are racing to integrate AI into daily care, but some nursing leaders worry the pace is outstripping the profession’s ability to understand the technology.

At Akron, Ohio-based Summa Health, Chief Nursing Informatics Officer Marc Benoy, BSN, RN, sees that tension play out every day. “If nurses adopt AI without the right training or education, the risk goes beyond technology — it can quietly reshape how nurses think, act and care,” Mr. Benoy said.

Mr. Benoy describes a more subtle danger than faulty algorithms or data bias: the gradual erosion of clinical judgment. When nurses do not fully understand how a system reaches its conclusions, he said, it becomes easier to overtrust the output.

“AI tools can be wrong, biased or blind to nuance,” he added. “Without the ability to question or verify their results, mistakes can slip through unnoticed.”

Across the country, hospitals are piloting AI in nearly every corner of care — from ambient documentation tools that transcribe patient visits to algorithms that predict deterioration hours before symptoms appear. These tools promise to ease workloads and catch problems early. But in the rush to deploy them, formal training often lags behind.

At Tampa, Fla.-based Moffitt Cancer Center, CNIO Marc Perkins-Carrillo, MSN, RN, sees the same issue from a systems perspective. He’s concerned that clinicians are being handed powerful AI models without transparency into how they were built — or the populations they were trained on. “That can lead to biased or inappropriate recommendations that don’t align with the patients actually being treated,” he said.

It’s a problem many health systems are only beginning to confront. Some are introducing “AI literacy” modules into nursing education, mirroring the rise of evidence-based practice courses years ago. Others are forming interdisciplinary oversight committees to review algorithmic bias and explainability before new tools go live. Still, those efforts are inconsistent, and the technology is advancing faster than governance can keep up.

Training, Mr. Perkins-Carrillo argues, must go beyond how to click through a dashboard. “Without foundational education in AI literacy or prompt design, clinicians may quickly become frustrated or disillusioned with the technology’s potential,” he said. “You can’t expect people to trust a system they don’t understand.”

Both leaders describe a shared vision for AI in healthcare — one where nurses are not just users but informed collaborators. That means teaching them to ask how a model was trained, what its limitations are and when to override its guidance.

Mr. Benoy believes nurses, in particular, have a crucial role to play in shaping the ethical boundaries of AI. “They’re often the last checkpoint before a decision reaches a patient,” he said. “They need to know enough to say, ‘Wait, that doesn’t look right.’”

Hospitals that get this balance right, he said, will use AI not to replace intuition but to refine it — to give clinicians better information without dulling their instincts.

“AI can absolutely strengthen nursing,” Mr. Benoy said. “But only when nurses have the literacy and technical skill to use it critically. In the end, AI should support, not replace, what makes nursing human.”
