Clinician Training 2021 Update
Training is the backbone of user mastery (one of the three pillars of EHR satisfaction outlined in the Arch Collaborative 2020 Guidebook). The COVID-19 pandemic highlighted potential vulnerabilities for healthcare organizations as clinicians had to quickly adapt to new methods of care delivery, often without sufficient training.
This report, an update to the 2019 Clinician Training report, dives into evidence-based findings on EHR training. It provides a refresh on past findings and shares insights from new questions (on telehealth, virtual training, etc.) and new views of Collaborative data. Ultimately, this report aims to help organizations elevate training to improve clinician EHR satisfaction, clinician wellness, and the quality of patient care. (See the full report for the complete insights on clinician training.)
Keys to Successful EHR Training
More than 20 organizations have participated in the Trainer Quality Benchmark survey, which collects responses from clinicians after they receive EHR training. Data from this survey reveals that two aspects of training are highly correlated with satisfaction: the type of training and the length of training. Various types of training can be effective as long as an actual trainer is involved; self-directed e-learning is much less effective. Training that lasts more than an hour is also likely to result in higher training satisfaction.
Initial Training Has Consistently High Correlation with Satisfaction
Clinicians who strongly agree that their initial EHR training prepared them well to use the EHR have an average Net EHR Experience Score (NEES) 89.7 points higher (on a -100 to 100 scale) than those who strongly disagree. This is the same spread reported in the 2019 Clinician Training report, even though 50,000 additional responses have been collected since then. (More insights on initial training can be found in the Expanded Insights and on the Arch Collaborative website in the form of webinars, case studies, and other reports.)
Early Insights on the Use of Simulations
A new question in the executive survey (conducted with executive leaders at member healthcare organizations) asks whether the organization uses simulations for initial EHR training. Preliminary results show that organizations that do use simulations have, on average, a higher NEES than organizations that don’t.
‡ The Net EHR Experience Score (NEES) is a snapshot of clinicians’ overall satisfaction with the EHR environment(s) at their organization. The survey asks respondents to rate factors such as the EHR’s efficiency, functionality, and impact on care. The NEES is calculated by subtracting the percent of negative user feedback from the percent of positive user feedback and can range from -100 (all negative feedback) to +100 (all positive feedback).
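As an illustration (using hypothetical figures, not Collaborative data): if 60% of an organization’s user feedback is positive and 25% is negative, that organization’s NEES would be 60 − 25 = +35.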
Strong Ongoing Training Associated with 100-Point Higher Satisfaction
Arch Collaborative data shows a 101.2-point difference in NEES between clinicians who strongly agree that ongoing training is sufficient and those who strongly disagree. KLAS’ 2019 report found a 102.7-point difference. The stability of these results indicates that training satisfaction remains consistently important, even as Collaborative data has expanded to include a growing number of organizations.
Telehealth Training and the EHR Experience
In the last year, KLAS has added a question about training on telehealth tools to the EHR Experience Survey. Responses to this question show that strong training on telehealth tools and processes is correlated with a better overall EHR experience. For deeper insights on the effects of telehealth training, see the Expanded Insights.