Abstract
PURPOSE Hearing loss (HL) is underdiagnosed and often unaddressed. A recent study of screening for HL using an electronic prompt showed efficacy in increasing appropriate referrals for subsequent testing. We build on the results of this study using a qualitative lens to explore implementation processes through the perspectives of family medicine clinicians.
METHODS We collected clinic observations and conducted semistructured interviews with family medicine clinicians and residents who interacted with the HL prompt. All data were analyzed using thematic, framework, and mixed methods integration strategies.
RESULTS We interviewed 27 clinicians and conducted 10 observations. Thematic analysis resulted in 6 themes: (1) the prompt was overwhelmingly viewed as easy, simple to use, and accurate; (2) clinicians considered the prompt an effective way to increase awareness and conversations with patients about HL; (3) clinician and staff buy-in played a vital role in implementation; (4) clinicians prioritized the prompt during annual visits; (5) medical assistant involvement in the prompt workflow varied by health system, clinic, and clinician; (6) the prompt resulted in more conversations about HL but had an uncertain impact on patient outcomes. Themes are presented alongside constructs of normalization process theory and intervention outcomes.
CONCLUSION Integration of a HL screening prompt into clinical practice varied by clinician buy-in and beliefs about the impact on patient outcomes, involvement of medical assistants, and prioritization during clinical visits. Further research is needed to understand how to leverage clinician and staff buy-in and whether implementation of a new clinical prompt has sustained impact on HL screening and patient outcomes.
INTRODUCTION
Over 50 million adults aged 50 years and older in the United States have hearing loss (HL).1 Hearing loss can negatively impact physical and mental health outcomes,2-7 and is associated with excess medical costs (eg, emergency visits and hospital readmissions), lost income, and lost productivity.8,9 Though evidence suggests that treatment (eg, hearing aids) helps mitigate these adverse outcomes, hearing loss remains underdiagnosed and unaddressed.10,11 Improving the screening and referral process could address some practice and clinician factors that limit effective hearing loss diagnosis and care.
Primary care physicians (PCPs) are uniquely positioned to screen and refer for hearing loss,12 but barriers exist. Many PCPs are unaware of screening options and the impact of hearing loss on patient quality of life.10,13 Primary care physicians also face time constraints and limited reimbursements for screening.10,13 Research on health information technology interventions suggests that electronic clinical decision-support tools may improve the quality of preventive care by automatically prompting PCPs to take action around diagnosis, prevention, and treatment.14-17 Yet, uptake varies as PCPs have many tasks within a brief clinic visit and responding to all relevant prompts is difficult.18 Primary care physicians prefer prompts that are quick, easy to use, and effective in guiding care,19-21 particularly for conditions considered important (eg, diabetes).22 For conditions viewed as less pressing—as hearing loss can be by PCPs—improving their expertise by providing a simple set of effective actions may be useful.20,23
Recently, our team completed the Early Audiology Referral-Primary Care (EAR-PC) study, which effectively increased identification and referral of patients with hearing loss in 10 clinics across 2 health systems.24 An electronic health record (EHR) prompt alerted clinicians to screen patients aged 55 years or older for HL by asking a validated 1-question screener.23 The prompt was developed through cognitive task analysis interviews with clinicians at both health systems to optimize usability and uptake.23 Overall, audiology referrals increased (from 2.2% to 11.5%, P <0.001).24 While the quantitative results demonstrated efficacy, an in-depth understanding of factors influencing implementation across clinicians, clinics, and health systems was still needed.
To address this knowledge gap, we collected qualitative data during the EAR-PC intervention. We used normalization process theory (NPT)25 as a lens to investigate how this intervention was integrated into practice. Normalization process theory has been used in health services research to understand individual and systems-level changes that occur as an intervention is implemented in complex settings, like primary care.26-29 Normalization process theory consists of 4 interrelated constructs: coherence, cognitive participation, collective action, and reflexive monitoring (Table 1).30 The framework is effective in understanding unanticipated outcomes and factors that hinder implementation. The purpose of the qualitative component of the EAR-PC study was to: (1) examine clinician perspectives of the hearing loss prompt and its use in practice and (2) compare these perspectives to intervention outcomes (ie, audiology referral rates).
METHODS
The EAR-PC intervention design is depicted in Figure 1. Five clinics at Michigan Medicine (MM) and 5 at Beaumont Health (BH) were enrolled. All clinics received the same EHR prompt intervention (summarized in Supplemental Appendix, https://www.AnnFamMed.org/lookup/suppl/doi:10.1370/afm.2695/-/DC1). Clinics received minimal support to mimic real-world conditions, except for 1 team education session before the study. Clinics integrated the EHR prompt into normal practice workflow, using typical protocols for who addressed the prompt and where patients were referred. The primary quantitative outcome, audiology referral rates among patients aged 55 years and older, was previously published.24 The qualitative phase of the intervention study is described here. Institutional Review Boards at both institutions approved this study.
Participants and Recruitment
We recruited family medicine clinicians (physicians, physician assistants, and nurse practitioners) at each site who had used the hearing loss prompt. In MM sites, we used purposive sampling to select clinicians identified as frequent (ie, above average) or infrequent responders to the prompt. At BH, these data were not available before the end of the study period, so we used convenience sampling. Response rates calculated at the end of the intervention, however, indicated that both frequent and infrequent responders were represented.
Data Collection
We conducted semistructured interviews and clinic observations during the second and third quarters of the intervention, as we expected clinicians would have had multiple encounters with the prompt. Semistructured interviews32 were designed to explore clinician attitudes toward hearing loss and factors influencing participation in the intervention (see Table 1 for sample questions aligned with NPT constructs). One-on-one interviews were conducted either in-person or by telephone 3 to 8 months after the intervention began, depending on clinician availability. Interviews were audio recorded and transcribed.
Observations were designed to gather information on clinic-level factors influencing the intervention. Observations were conducted once at each site 3 to 4 months after the intervention began. We recorded notes using a structured protocol to capture data related to the NPT constructs, including observations of clinical spaces and informal interviews with medical assistants (MAs) and other staff. Notes were synthesized in site summaries.
Data Analysis
We conducted thematic analysis and framework analysis, which were compared and integrated. First, in order to gain a holistic sense of the data, we pursued an inductive, thematic analysis.33 We read observational data and transcripts to orient to the data, generated descriptive codes, and applied these to the transcript data. Next, we reviewed coded data for patterns to develop themes. We then reviewed each theme, checking against data and revising as needed to reach a final description.
We conducted the framework analysis using the process outlined by Holtrop and colleagues27 to examine normalization in each site. We deductively coded deidentified interview and observation data from each site using NPT constructs. We independently scored the 4 constructs for each site using a 5-point Likert scale (“not at all” to “completely”).27 We met to discuss and resolve discrepancies, and ratings were averaged across team members to create final scores.
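To make the scoring arithmetic concrete, the following minimal sketch (in Python, using hypothetical ratings rather than the study’s actual data) reproduces the rater-averaging step described above: each team member scores the 4 NPT constructs for a site on the 1-to-5 scale, and final scores are means across raters.

```python
# Minimal sketch of the NPT scoring step; ratings below are hypothetical,
# not the study's data. Each rater scores 4 constructs per site (1-5).
from statistics import mean

CONSTRUCTS = ["coherence", "cognitive_participation",
              "collective_action", "reflexive_monitoring"]

# Hypothetical post-discussion ratings from 3 team members for one site.
ratings = {
    "rater_1": {"coherence": 3, "cognitive_participation": 2,
                "collective_action": 3, "reflexive_monitoring": 4},
    "rater_2": {"coherence": 3, "cognitive_participation": 3,
                "collective_action": 2, "reflexive_monitoring": 3},
    "rater_3": {"coherence": 2, "cognitive_participation": 2,
                "collective_action": 3, "reflexive_monitoring": 4},
}

# Final site score per construct = mean across raters.
site_scores = {c: mean(r[c] for r in ratings.values()) for c in CONSTRUCTS}

# Overall site rating = mean of the 4 construct scores.
overall = mean(site_scores.values())
print(site_scores, round(overall, 1))
```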
Thematic results were compared with the NPT ratings to look for patterns. For example, themes were compared between sites that scored high and low in each construct to identify factors that may drive normalization. We also compared findings to changes in referral rates across sites.
We used several strategies to ensure rigor.34 First, we integrated 2 data sources: interviews and observations. Second, to avoid bias, we deidentified the data before rating the sites on each NPT construct. Third, our team had extensive knowledge of each site due to prolonged time in the field. Study coordinators were often on-site for study procedures and 3 team members were practicing family physicians in the health systems. Two study coordinators wrote detailed notes about each site before reviewing any qualitative data, which were later used to validate themes and provide additional context.
RESULTS
We interviewed 27 family medicine clinicians across the 2 health systems (14 MM and 13 BH) and conducted 10 field observations. All 10 EAR-PC sites participated in the qualitative phase. Participating sites were mostly in suburban or city communities, with populations ranging from approximately 5,000 to 121,000 and median household incomes ranging from $37,000 to $91,000.35 See Table 2 for additional site and setting characteristics.
Thematic analysis resulted in 6 themes, which are presented alongside related NPT constructs in Table 3. Overall, NPT ratings for each site ranged from 1.8 to 3.5 (m = 2.8 for all sites, 2.7 for MM, 2.8 for BH). Individual construct scores ranged from 1.5 to 3.9. In the following sections, we describe NPT ratings using descriptors: “low” for scores in the lower quartile (1.5-2.1); “medium-low” for the second quartile (2.2-2.8); “medium-high” for the third quartile (2.9-3.4); “high” for the upper quartile (3.5-3.9).
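As an illustration only, the mapping from a site’s mean score to these quartile descriptors can be expressed as a simple lookup. The sketch below follows the boundaries stated above; how values falling between the stated quartile edges (eg, 2.15) should be binned is our assumption.

```python
def npt_descriptor(score: float) -> str:
    """Map a mean NPT score (observed range 1.5-3.9) to the quartile
    descriptors used in the Results."""
    if score <= 2.1:
        return "low"          # lower quartile, 1.5-2.1
    elif score <= 2.8:
        return "medium-low"   # second quartile, 2.2-2.8
    elif score <= 3.4:
        return "medium-high"  # third quartile, 2.9-3.4
    else:
        return "high"         # upper quartile, 3.5-3.9

assert npt_descriptor(2.7) == "medium-low"   # eg, overall coherence rating
assert npt_descriptor(3.3) == "medium-high"  # eg, overall reflexive monitoring
```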
Coherence: Understanding the Intervention
Coherence of the intervention was rated “medium-low” (m = 2.7, range = 1.6 to 3.9 across sites), slightly higher at BH (m = 3.0) than MM (m = 2.4) sites.
Prompt Overwhelmingly Viewed as Easy, Simple to Use, Accurate
Clinicians believed the prompt reduced effort by requiring only the single question, “Do you think you have hearing loss?” with 5 possible responses. One clinician summarized this common sentiment: “It’s not that big of a deal to press one more button” (P18, Site 3, BH). Another clinician elaborated:
“I’m probably what you would call a late adopter…. My initial thought was, ‘Oh geez, one more thing.’ … It’s not overly intrusive or anything like that… Do they have a problem? Yes. Open the [prompt]. Send them to [an audiologist]. Do they not have a problem? Click no and it’s done” (P23, Site 4, BH).
Clinicians Viewed the Prompt as an Effective Way to Increase Their Awareness and Conversations With Patients About Hearing Loss
Overall, clinicians reported that they previously did not ask about HL outside of a wellness exam for patients aged ≥65 years. Instead, they relied on patients and their families to bring up HL.
“If someone had mentioned to me that they felt like they couldn’t hear, or if I’d noticed in talking to them that they were having trouble hearing me, we would talk about it. Otherwise, I left it on them to bring it up, which, probably wasn’t the best way to address it” (P32, Site 10, MM).
Sites with lower coherence ratings had clinicians who were more likely to describe challenges during early implementation. For example, one clinician at Site 7 reported that the prompt fired inconsistently, not always appearing for eligible patients aged over 55 years (“I think it wasn’t popping up for everybody, or it wasn’t popping up for the MA, but it was popping up for the provider,” P9, Site 7, MM). In addition, some clinicians held misperceptions about prompt use (eg, when it should appear and for whom) and about clinic implementation protocols (eg, who should interact with the prompt and when).
Cognitive Participation: Supporting and Maintaining the Intervention
Cognitive participation was the lowest-ranked construct (m = 2.5, range = 1.5 to 3.5), rated “medium-low” at BH (m = 2.5) and MM (m = 2.6).
Clinician and Staff Buy-In Played a Vital Role in Implementation
Overall, clinicians who shared personal experiences with hearing loss described strong support for the intervention. They often emphasized the difficulty of addressing their own family members’ hearing loss due to an unwillingness to admit hearing loss or wear hearing aids. They viewed the prompt as an opportunity to discuss the impact of hearing loss on quality of life with patients:
“Life is too short to only hear half the conversation” (P20, Site 2, BH).
Sites with higher cognitive participation ratings (Sites 2 and 9) described buy-in among both clinicians and staff, which observation data also supported. For example, at Site 2, one clinic manager described implementation of the intervention as “required just like all other office duties, it is not an option” (field notes). At Site 9, MAs played a prominent role, including asking patients about hearing loss when rooming them and queueing the prompt or referral so that the clinician could respond quickly. Additionally, MAs and the office manager demonstrated buy-in by identifying possible problems with the prompt and communicating these to the research team.
In contrast, sites with lower scores described less buy-in and support. At Site 3, clinicians who had not attended an educational session before implementation described being surprised by the prompt, not understanding how to interact with it, and automatically dismissing the alert. A resident requested continuing education to understand the prompt purpose and functions:
“[I need] a refresher reminding us what we should do and how to use it... Like if I click that what would happen, or what would I ask in a physical exam? And then if I’m referring them, how I use the alert to refer” (P17, Site 3, BH).
Notably, the 3 sites with residents were ranked “low” or “medium-low” for cognitive participation. Field observations also described these “low” and “medium-low” sites as having high team turnover or recent changes in clinic leadership, which may limit individual and collective buy-in to a new intervention.
Lack of support was also evident among a few clinicians who described the prompt as “low yield.” One clinician explained: “I don’t really want to have something else to discuss with patients that wasn’t important enough to them to bring up [on their own]” (P8, Site 7, MM).
Collective Action: Developing Practices and Accountability
Collective action varied by site (m = 2.6 or “medium-low,” range = 1.6 to 3.5) but was rated consistently between health systems (m = 2.5 for BH; m = 2.7 for MM).
Clinicians Prioritized Hearing Loss Prompt During Annual Visits
Clinicians varied in how they incorporated the prompt into their workflow: some addressed all prompts immediately, whereas others waited until the visit’s end, often at risk of not addressing them. Clinicians implemented additional reminders, whether by asking their MA to write a note on a patient’s chart or by creating an EHR sticky note while preparing to see a patient.
Overall, clinicians could easily address the prompt during wellness exams; during acute visits, they prioritized more pressing health concerns and used a patient-centered approach (ie, focusing the visit on the patient’s needs and expectations).
“I think when it’s a routine physical, it’s [addressing the prompt] not a big deal. It’s just a part of your reviewing systems… it doesn’t take much time. But on the other hand, if I’ve got a chronic diabetic with all these other issues, I may not focus on it as much if the patient is not bringing it up” (P24, Site 4, BH).
Several MM clinicians had practiced at multiple sites and emphasized how the different social needs of patients influenced their ability to address the prompt during visits.
“At [Site 10], people have a lot of inconsistencies in their life, psychologically, socially, financially, and they will often come in and have a lot going on. These health maintenance visits are just to catch up on screening and things that aren’t happening as much because we’re putting out a lot of fires. There just sometimes isn’t time” (P31, Site 6, MM).
Involvement of MA in Prompt Workflow Varied by Health System, Clinic, and Clinician
At MM clinics, typical workflow required MAs to interact with all EHR prompts. One participant summarized the procedure at her clinic, similar to other MM sites:
“There’s a clinic-wide process that when a MA will room them, if the prompt fires, they will ask that question upon rooming the patient… [The MA] writes a lot on the white sheet for me, and in particular, she will hand-write out hearing prompt declined, or positive HL. I know other provider/MA pairs may communicate through the MA typing in the visit information, in [the EHR]” (P9, Site 7, MM).
While Site 9 had the highest rating for collective action (m = 3.5 or “high”), other MM sites were rated “medium-low” or “low.” Some clinicians described inconsistency in the actions MAs performed, which was also evidenced in observations. For example, multiple MAs at Sites 6 and 8 (both “medium-low”) were unaware of the prompt or how to address it while clinicians reported that MAs should be engaging with the prompt. One physician explained that the fast pace of some sites could contribute to miscommunication about workflow protocols:
“There’s more patients per hour [at Site 10]. Almost double for some providers. MAs are busy everywhere but it’s just a quicker workflow clinic… It’s easier for communication to get lost” (P26, Site 10, MM).
In contrast, the BH health system did not have workflows incorporating MAs in the hearing loss prompt. When asked about their workflow, no BH clinicians suggested involving the MA in addressing the prompt.
Reflexive Monitoring: Appraising the Intervention
Reflexive monitoring was the highest-rated construct for BH (m = 3.3, range = 2.1 to 3.8), MM (m = 3.2), and overall (m = 3.3).
Prompt Resulted in More Conversations About Hearing Loss, But Uncertain Impact on Patient Outcomes
Clinicians reported having more conversations about hearing loss during the intervention and reflected on changes in their practice:
“I think I am ordering more testing. I had gotten somewhat jaded by the ‘nothing I can do about it.’ And so I’m less jaded by that right now. More patients are accepting of it [referral]” (P15, Site 6, MM).
Though participants overall described increasing their conversations and referrals for HL, some were skeptical about whether the prompt improved patient outcomes.
“I feel like I probably send people for more hearing tests than I used to, but I don’t know that I’ve had patients reflect back to me that–I feel like I get a lot of kind of ambiguous ones where maybe there’s a little bit of age-related loss, but they don’t really need hearing aids yet” (P25, Site 9, MM).
Clinicians were unsure whether the prompt had increased use of hearing aids, though most expected to see an increase over time. Many reported that additional barriers, particularly cost, would limit the number of patients who obtained hearing aids.
Mixed Methods Results
Table 4 presents NPT ratings alongside referral rates by site. Looking at individual NPT constructs, sites with the lowest ratings for cognitive participation (Site 3, BH and Site 10, MM) were at opposite ends of the referral rate distribution. Site 3 had the lowest change in referral rates but still improved to 2.4% (P = 0.007), while Site 10 improved from 3.1% to 14.2% (P <0.001). High scores for reflexive monitoring were present in sites across the distribution of referral rate changes.
The site with the lowest overall NPT ratings also had the lowest relative increase in referral rates during the intervention period (0.8% to 2.4%, P = 0.007). The site with a nonsignificant change in referral rates but the highest baseline referral rate (7.0% to 12.2%, P = 0.213) had relatively higher ratings for coherence and reflexive monitoring but lower scores for cognitive participation and collective action. The site with the highest overall ratings (and the only site rated “high” in multiple constructs) had the greatest relative increase (4.0% to 19.9%, P <0.001).
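For readers unfamiliar with how such before-and-after referral-rate comparisons are typically tested, the sketch below shows a standard two-proportion z-test with hypothetical counts; the eligible-patient denominators are not reported here, and this is not the study’s actual analysis code.

```python
# Illustrative two-proportion z-test; counts are hypothetical, chosen to
# roughly match the 4.0% -> 19.9% site, not the study's actual data.
from statsmodels.stats.proportion import proportions_ztest

count = [199, 40]    # referrals during intervention vs at baseline
nobs = [1000, 1000]  # hypothetical eligible patients in each period

stat, p_value = proportions_ztest(count, nobs)
print(f"z = {stat:.2f}, P = {p_value:.4g}")  # P < 0.001 for these counts
```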
DISCUSSION
We examined evidence of normalization across sites and health systems. Findings suggest that clinicians viewed the hearing loss screening prompt as easy to use and effective at increasing discussion of hearing loss and guiding care, and that its uptake depended on buy-in and workflow integration. Most sites fell into the “medium” range of normalization, which may indicate that contextual barriers limited the effectiveness of the intervention, that practice integration had not yet fully occurred, or that some clinicians were uncomfortable addressing hearing loss.
Primary care physicians rarely assess hearing loss, often citing time constraints, conflicting priorities, and discomfort managing hearing loss.20,36 Although prompts are useful for improving preventive care, clinicians also experience barriers including formatting, workflow disruptions, time pressure, and perceived importance.16,18-21 Our results indicated that the EHR prompt was easy to use and incorporate into typical workflows, likely because it was designed through extensive feedback from PCPs.20,23 Most family medicine clinicians valued addressing hearing loss and believed the prompt improved patient care. Still, intervention buy-in varied, often influenced by their personal experiences with hearing loss and mental model of HL care.20 This prompt may have reduced some of the uncertainty clinicians feel around hearing loss, thereby increasing their screening and referral rates. Longitudinal research is needed to know whether a prompt leads to improved hearing loss screening and referral over time.
Implementing a health information technology intervention in primary care is complex, and outcomes likely varied due to buy-in and integration of hearing loss screening into everyday workflows.21,37 Some physicians reported inconsistent expectations about how staff and clinicians should interact with the prompt. In one health system, MAs were responsible for addressing prompts related to other screenings (eg, colorectal and breast cancer) and workflows were already established. Systematically leveraging MAs may be an effective way to increase prompt utilization in primary care.38 Moreover, expanding MA responsibilities may lead to innovations for panel management, health coaching, or patient navigation that reduce patient-level barriers to hearing health care.39,40
Integration of qualitative findings with referral rates revealed differences between sites with low and high normalization ratings. For example, the site with the lowest overall rating had the smallest relative increase in referral rates, while the site with the highest overall rating had the largest increase during the intervention. This finding supports the theory that practice normalization is related to success, though the pattern was not consistent across all constructs and sites. Understanding any differential impact of individual constructs on practice normalization would improve future implementation.
Limitations
Only clinicians were included in the interviews, which excluded the perspectives of other team members and factors that may have influenced variation. Some MAs were informally interviewed during the clinic observations, however, which added depth to our findings. Observations reflected 1 time point, early in the intervention. We may have missed some complexity of clinics and health systems, which change frequently. As team members became accustomed to the prompt, their perspectives may have changed. Longitudinal data and data on the perspectives of multiple team members would supplement our understanding of factors that influenced implementation. Moreover, patients play a key role in whether a referral is ultimately accepted and used; additional research is needed to understand patient perspectives on HL screening and their pursuit of hearing health care. Finally, our study did not measure the influence of demographic variables on the EAR-PC intervention. Clinics were in communities of varying sizes with diverse household incomes, but were predominantly White communities. Additional evaluation that considers the potential impact of patient, clinic population, and clinician demographics would augment our conclusions.
CONCLUSION
Integration of an effective EHR prompt for hearing loss screening into clinical practice varied by clinician buy-in and beliefs about the impact on patient outcomes, involvement of MAs, and prioritization during clinical visits. Further research is needed to understand how to leverage clinician and staff buy-in and what constructs have sustained impact on hearing loss screening and patient outcomes.
Footnotes
Conflicts of interest: authors report none.
To read or post commentaries in response to this article, see it online at https://www.AnnFamMed.org/content/19/5/388.
Funding support: This work was supported in part by a grant (1R33DC013678-01) from the NIH National Institute on Deafness and Other Communication Disorders.
Supplemental materials: Available at https://www.AnnFamMed.org/lookup/suppl/doi:10.1370/afm.2695/-/DC1/.
- Received for publication June 29, 2020.
- Revision received November 16, 2020.
- Accepted for publication December 3, 2020.
- © 2021 Annals of Family Medicine, Inc.