As hospitals and primary care networks race to deploy AI tools, one practical question keeps getting ignored: do the clinicians expected to use these systems actually understand them, and do they trust them? A new study in Frontiers in Digital Health tackles that gap by validating a short, nine-item survey designed to measure nurses’ knowledge of and attitudes toward AI, and it offers early insights from primary healthcare center nurses in Almaty, Kazakhstan.
The research may sound incremental—another questionnaire, another psychometric analysis—but it addresses a foundational problem in healthcare AI implementation. You can’t manage what you can’t measure, and attitudes toward AI aren’t just “soft” factors. They influence adoption, workarounds, documentation quality, escalation behavior, and ultimately whether patients benefit or get harmed.
Why a nine-item scale is bigger than it looks
AI tools are often evaluated through model metrics, FDA-cleared indications, and workflow integration plans. Yet many deployments falter because frontline clinicians experience them as extra work, unreliable “black boxes,” or compliance mandates rather than clinical support. Nurses sit at the center of this tension. They coordinate care, triage symptoms, reconcile medications, monitor deterioration, and translate plans into action. If nurses don’t understand what an AI system is doing—or feel it threatens their professional judgment—the tool’s clinical value can evaporate, regardless of how strong the underlying algorithm is.
In the Frontiers in Digital Health study, the authors set out to evaluate the psychometric properties of a previously validated nine-item instrument capturing nurses’ AI knowledge and attitudes, and to report initial findings among primary care nurses in Almaty. Validation work like this is the unglamorous scaffolding of implementation science: it helps ensure that when a health system says “our staff are ready,” the claim rests on reliable measurement rather than anecdote.
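To make “psychometric properties” concrete: the workhorse check for a short scale is internal consistency, usually reported as Cronbach’s alpha, which asks whether the nine items move together as a single measure. Below is a minimal sketch of that calculation, assuming hypothetical responses on a 1-5 Likert scale; the data and function are illustrative, not drawn from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items (9 for this scale)
    item_vars = items.var(axis=0, ddof=1)       # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 nurses x 9 items on a 1-5 Likert scale.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(6, 9))
print(f"alpha = {cronbach_alpha(responses):.2f}")  # ~0.70+ is a common benchmark
```

Full validation goes further, typically pairing a reliability estimate like this with factor analysis to confirm that the scale’s structure holds in the new population.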
Why Kazakhstan—and why primary care?
Much of the published conversation about clinical AI readiness comes from large academic centers in North America or Western Europe. That creates a distorted picture of global adoption, one drawn from settings where resources are plentiful, informatics teams are established, and AI pilots are heavily subsidized. Studying nurses in Kazakhstan’s primary care environment matters because it reflects where AI could have enormous impact, and where the constraints are most real.
Primary care is where AI’s promises are most frequently marketed: earlier risk detection, smarter triage, population health management, and administrative automation. It’s also where implementation is hardest. Primary care clinics often have lean staffing, fragmented IT, and high patient volumes. A tool that adds even a minute per visit can backfire. In that setting, nurse attitudes become a leading indicator of whether AI will streamline care—or simply become yet another layer of digital friction.
Attitudes aren’t “feelings”—they’re patient safety signals
Measuring attitudes toward AI isn’t about winning a popularity contest. It’s about mapping safety risks and training needs. A nurse who over-trusts an AI output may fail to escalate a subtle but dangerous change in patient status. A nurse who distrusts the system might ignore a valid alert, delay action, or document outside the tool to avoid it. Both failure modes—automation bias and algorithm aversion—are well documented in human factors research.
A short, validated scale can help organizations segment readiness: who needs foundational AI literacy, who needs workflow coaching, and where leadership should slow down and redesign rather than push adoption. It also creates a way to measure change over time—before and after training, before and after rollout, and after adverse events—turning “culture” into something a quality team can track.
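As a concrete illustration of that segmentation (a sketch with assumed cutoffs and made-up scores, not anything reported in the paper): split respondents at the median on each dimension, then map the four quadrants to different interventions.

```python
import pandas as pd

# Hypothetical survey export: one row per nurse, scale scores already computed
# as the mean of the relevant items on a 1-5 scale.
df = pd.DataFrame({
    "nurse_id":  [1, 2, 3, 4, 5, 6],
    "knowledge": [2.1, 4.0, 3.8, 1.9, 4.5, 2.7],
    "attitude":  [4.2, 4.4, 2.0, 2.3, 4.8, 3.9],
})

k_med, a_med = df["knowledge"].median(), df["attitude"].median()

def segment(row) -> str:
    if row["knowledge"] >= k_med and row["attitude"] >= a_med:
        return "ready: involve as super-users and co-designers"
    if row["knowledge"] < k_med and row["attitude"] >= a_med:
        return "willing: prioritize foundational AI literacy"
    if row["knowledge"] >= k_med and row["attitude"] < a_med:
        return "skeptical: address workflow and trust concerns"
    return "at risk: slow down, redesign, coach"

df["readiness_segment"] = df.apply(segment, axis=1)
print(df[["nurse_id", "readiness_segment"]])
```

Repeating the same scoring before and after training or rollout is what turns the survey into a trackable quality metric rather than a one-off snapshot.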
What this means for healthcare leaders
For executives and digital health teams, the lesson is straightforward: don’t treat nurses as downstream “end users.” Treat them as co-designers and safety partners. A validated instrument—like the one assessed in the Frontiers in Digital Health paper—can be used as a baseline diagnostic before procurement, not just as a post-rollout satisfaction survey.
It also has procurement implications. If a clinic’s baseline AI knowledge is low, then a vendor’s interface, explainability features, and training burden become central to value—not afterthoughts. Conversely, if attitudes are positive but knowledge gaps are significant, leaders can justify targeted education rather than concluding that “staff are resistant.”
What it means for nurses and patients
For nurses, formal measurement can be empowering—if used correctly. It can support the case for protected training time, clearer accountability when AI is wrong, and better escalation pathways. But it can also be misused if leadership treats attitudes as compliance metrics. The right approach is to pair measurement with meaningful action: updated protocols, transparent model governance, and mechanisms for nurses to report issues without blame.
For patients, the connection is direct. Nurses are often the first to notice when a tool is creating confusion, delaying care, or changing clinical priorities. When nurses are informed, confident, and appropriately skeptical, AI can become a genuine safety net. When they are rushed, undertrained, or excluded from decision-making, AI can magnify inequities and errors—especially in high-throughput primary care settings.
The road ahead: from measurement to readiness-by-design
The next phase for this line of work is to connect attitude and knowledge scores with real outcomes: adoption patterns, alert response times, documentation quality, near-miss reporting, and patient-level measures. If the scale can predict where AI deployments will struggle—or where harm is more likely—then it becomes a practical tool for governance, not just research.
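One way such a predictive link could be tested, sketched here with entirely hypothetical data and an assumed outcome definition: fit a simple logistic regression from baseline scale scores to a downstream adoption measure, then inspect whether the scores carry any signal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical baseline scores (knowledge, attitude) and a binary outcome:
# 1 = nurse regularly acted on the tool's alerts six months post-rollout.
X = np.array([[2.0, 4.1], [4.2, 4.5], [3.9, 2.1], [1.8, 2.4],
              [4.6, 4.7], [2.5, 3.8], [3.2, 2.9], [4.1, 3.5]])
y = np.array([0, 1, 0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)
print("coefficients (knowledge, attitude):", model.coef_[0])
print("P(adoption) for a low-knowledge, positive-attitude nurse:",
      model.predict_proba([[2.0, 4.5]])[0, 1].round(2))
```

If associations like these held up in real deployment data, the scale would function as the governance tool described above rather than a research artifact.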
More broadly, this study is a reminder that the global AI conversation is shifting from “Can we build it?” to “Can we run it safely in everyday care?” As Kazakhstan and other health systems expand digital infrastructure, readiness measurement among nurses may become one of the most cost-effective interventions available: a small survey that helps prevent large, system-wide failure.
Source: Frontiers in Digital Health, “Assessing nurses’ attitudes toward artificial intelligence in Kazakhstan: psychometric validation of a nine-item scale” (as reported by the journal and study authors).

