Abstract
Card studies—short surveys about the circumstances within which patients receive care—are traditionally completed on physical cards. We report on the development of an electronic health record (EHR)–embedded card study intended to decrease the logistical challenges inherent to paper-based approaches (distributing, tracking, and transferring physical cards; entering data; and prompting respondents) while simultaneously decreasing complexity for participants and facilitating rich analyses by linking survey responses to clinical and demographic data in the EHR. Developing the EHR-based programming and data extraction was time consuming, required specialized expertise, and necessitated iteration to rectify issues encountered during implementation. Nonetheless, future EHR-embedded card studies will be able to replicate many of these processes, informed by the results reported here. Once built, the EHR-embedded card study simplified survey implementation for both the research team and clinic staff, yielding research-quality data, the ability to link survey responses to relevant EHR data, and a 79% response rate. This detailed accounting of the development and implementation process, including issues encountered and addressed, might guide others in conducting EHR-embedded card studies.
INTRODUCTION
Card studies—short surveys focused on the circumstances within which patients receive care—have been used to collect descriptive data on a range of research questions for decades.1 Topics explored in primary care settings include the effect of missing clinical information,2 clinical decision making in diabetes and hypertension care,3-5 perceptions of patient-centered medical homes,6 and perceptions of clinician ability to address patients’ social risks,7 among others.
Traditionally, survey questions are printed on a physical card (hence the name). We designed and implemented an electronic health record (EHR)–embedded card study in the context of a study assessing how to support community health center (CHC) adoption of social risk screening. Here we report on the process, programming, and data extraction challenges and successes we experienced to facilitate future uses of this method. We know of no other publications describing EHR-embedded card studies.
Rationale
While card studies are useful for collecting data at the point of care, traditional paper-based methods can be costly (eg, printing, mailing, travel, and staff time). They can also be logistically complicated for researchers and clinic staff because they require distribution, tracking, collection, storage, and transfer of physical cards as well as procedures to ensure that respondents complete the cards as directed. Because card studies are meant to be brief so that they do not interfere with clinic workflow, it can be difficult to obtain demographic and contextual data needed for rich analyses. Conditional branching—using skip patterns to tailor survey questions to the individual respondent or encounter8—can also be challenging because respondents must follow written skip instructions. This can lead to errors that affect data reliability and validity.9 Finally, data entry can be time consuming and another potential source of error.
We hypothesized that an EHR-embedded card study might address known limitations of paper-based card studies and yield high response rates by (1) easing the cost and logistics of paper-based distribution, tracking, and data entry, (2) incorporating automated prompts for clinicians, (3) automating conditional branching, and (4) enabling linkage of completed surveys (cards) with EHR data on the encounter, clinician, and patient while minimizing the amount of documentation asked of respondents.
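As a simple illustration of automated conditional branching, the following minimal Python sketch shows the kind of skip logic an electronic survey can enforce without relying on written instructions. The question names and response values are hypothetical and are not the actual ASCEND card items.

def next_question(responses):
    """Return the next question to display, given the answers so far.
    Question names and values are illustrative only."""
    if "screening_data_reviewed" not in responses:
        return "screening_data_reviewed"       # opening item shown to everyone
    if responses["screening_data_reviewed"] == "yes":
        return "how_screening_informed_care"   # follow-up shown only when relevant
    return None                                # branch skipped automatically

# Example: a respondent who answered "no" skips the follow-up entirely.
print(next_question({"screening_data_reviewed": "no"}))   # -> None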
Context
This card study was part of a 5-year trial titled Approaches to Community Health Center Implementation of Social Determinants of Health Data Collection and Action (ASCEND; 1R18DK114701) designed to test the effect of technical assistance on CHC documentation of patient-reported social risk data in the EHR.10 Participating clinics were randomized to sequential wedges. In each wedge, each clinic’s appointed champion and interested staff received 6 months of coaching and instruction on EHR-based documentation tools. The trial’s process evaluation relied primarily on data from interactions between the implementation support team and study clinics. The card study was intended to supplement these data by assessing how patients’ social risk data were used in point-of-care decision making (Supplemental Appendix). The study was approved by the Kaiser Permanente Northwest Institutional Review Board, which granted a waiver for obtaining patient consent. All OCHIN members also sign an agreement that their EHR data may be used for research.
Programming the Card Study in the EHR
Study clinics were members of OCHIN, Inc, a nonprofit health information technology provider that hosts a shared instance of the Epic (Epic Systems Corp) EHR tailored for CHCs. The EHR-embedded card study was built within this EHR and was designed to enable data extraction from Epic’s database.
Initial Programming
An in-house application developer programmed the card study into the EHR as follows: a secure message, called an In Basket message in Epic, was automatically sent to selected clinicians (described below) for 2 office visit encounters per day during the data collection period. The message appeared at the first 2 encounters during which a user (1) clicked on the visit navigator “Wrap-up” tab (where clinicians typically document follow-up information) or (2) closed the encounter.
The card study messages displayed to clinicians in 3 places in the EHR as described below (see also Supplemental Appendix). Once the survey was completed, the remaining prompts for that encounter were suppressed (ie, no longer appeared). All requests to complete a card for a given patient referenced the visit record, allowing clinicians to easily view the full encounter report.
1. In the “Wrap-up” tab, a hyperlink appeared that could be clicked to complete the questions or ignored.
2. In the “Close Encounter” checklist, a recommended item alerted the user to the card request, which could be clicked to complete or ignored.
3. All card requests appeared in a new “Research Request” folder in the clinician’s In Basket until completed, allowing clinicians to complete them after the visit, if preferred. Once completed, these requests disappeared. Requests that remained incomplete at the end of the card study timeframe were deleted manually.
Regardless of entry point, clicking on the card request took the clinician to the survey questions, which were built using the same Epic tools (SmartText with SmartLists) that clinicians use in standard note templates. Clinicians could enter free text wherever they wished. On completing the survey, the clinician clicked “Accept,” at which point they were no longer able to edit or view the survey. The “In Basket” request message automatically changed to a status of “Done,” which switched the navigator section to a “Thank you” message, suppressed the close encounter validation, and removed the request from the clinician’s “In Basket.”
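The prompt-and-suppression behavior described above can be summarized in the following Python sketch. It is a simplified, hypothetical rendering for illustration; the actual logic was implemented with Epic build tools, and the field and event names below are not Epic's.

from dataclasses import dataclass, field

@dataclass
class CardState:
    completed: set = field(default_factory=set)   # encounters with a completed card
    requests: dict = field(default_factory=dict)  # (clinician_id, date) -> requests issued

def should_prompt(state, clinician_id, encounter_id, date, event):
    """Return True if a card request should be surfaced for this event."""
    if encounter_id in state.completed:
        return False                              # survey done: suppress remaining prompts
    if state.requests.get((clinician_id, date), 0) >= 2:
        return False                              # at most 2 card requests per clinician per day
    # prompts fire when the Wrap-up tab is opened or the encounter is closed
    return event in ("wrap_up_opened", "encounter_closed")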
The survey requests and answers were filed as “In Basket” messages within Epic and did not become part of the legal medical record. Surveys were marked as “Done” (no further action in Epic expected) after data collection was complete, allowing them to be deleted by the standard maintenance process that removes old “In Basket” messages once they are no longer needed.
Issues Encountered and Addressed
Additions and revisions were made to the card study programming as challenges arose, as follows:
Issue: Encounters for which a card was completed were occasionally addended and reclosed later to address follow-up orders, triggering new prompts months after the card study was complete. The EHR did not detect the previously completed cards because completed In Basket messages had been purged by the standard maintenance process.
Fix: We modified the build to disallow additional requests for reopened encounters by removing clinicians from the participant list after their participation was completed.
Issue: Some clinicians worked at affiliated or multiple clinics within the organization. Initially, cards were assigned based only on clinician identification number, leading to card requests for participating clinicians when seeing patients at nonstudy clinics.
Fix: We added clinic department identification numbers as an additional constraint on card assignment (illustrated in the sketch below).
Issue: Owing to the coronavirus disease 2019 (COVID-19) pandemic, study clinics transitioned to a largely telemedicine model, but the card study was originally programmed to only trigger for office-based encounters.
Fix: We added card study triggers to telemedicine visits.
Issue: In telemedicine encounters, charting before the patient arrived triggered the card study prompt. Occasionally, a patient did not attend the appointment, rendering the prompt inappropriate. During in-person encounters, such errors were prevented by requiring front desk staff to complete patient check-in documentation before the clinician could start charting, but these guardrails were removed to support varied virtual workflows during the pandemic.
Fix: None; it was impractical to change these decisions for survey purposes.
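A consolidated view of the first two fixes (restricting card assignment to participating clinician-and-department pairs and removing clinicians once their participation window closed) might look like the following Python sketch; the identifiers are hypothetical.

# Hypothetical participant list: (clinician_id, department_id) pairs still
# active in the card study; clinicians are removed when their window closes,
# so reopened encounters no longer generate requests.
active_participants = {
    ("clin_001", "dept_A"),
    ("clin_002", "dept_A"),
}

def eligible_for_card(clinician_id, department_id):
    """Card requests fire only for active clinician-department pairs."""
    return (clinician_id, department_id) in active_participants

# The same clinician seeing patients at a nonstudy clinic is not prompted.
print(eligible_for_card("clin_001", "dept_A"))   # True
print(eligible_for_card("clin_001", "dept_Z"))   # False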
Implementation
To minimize clinic and clinician workload, 2 clinicians per clinic received a maximum of 2 card prompts per day for a period of 3 weeks. Each card took <1 minute to complete, as tested by the study’s EHR trainer and confirmed by participating clinicians. The prompts were designed to trigger for the first 2 completed encounters of the day. Randomized encounter selection was considered but would have introduced complexity and the potential for missing data, owing to shifting schedules and patient no-shows.
Timeline
The card study took place approximately 5 months into each 6-month intervention period. Clinics could opt to have clinicians receive a $50 gift card for participation or $100 for a general clinic fund. Key steps included the following:
Six weeks before the card study start date, a verbal and written card study overview, a checklist of information needed from the clinics, workflow details with screen shots, a timeline of card study activities (Supplemental Appendix), and a 3-minute video that walked viewers through the card study workflow were e-mailed to clinic champions. Champions were asked to introduce the card study to selected clinicians (described below) and confirm clinician participation.
Shortly before the card study start date, the research team prompted the champion to ensure that participating clinicians were familiar with the card study workflow.
One week after the start of data collection, researchers e-mailed the champion with the number of cards completed vs requested. Champions used this information when checking in with participating clinicians.
Three weeks after the start of data collection, new card study requests ended.
Four weeks after the start of data collection, outstanding card requests closed.
Clinician Selection
Eligible clinicians were those with doctor of medicine (MD), doctor of osteopathic medicine (DO), nurse practitioner (NP), or physician assistant (PA) degrees working in primary care; behavioral health clinicians were not eligible. We planned for clinician selection to be based on the number of social risk screenings associated with each clinician’s patient panel, identified via EHR data; clinicians with the most documented screenings would be prioritized for recruitment. Multiple challenges complicated this process. Some clinics had not yet begun social risk screening; in these cases, the champion was asked to recruit the 2 clinicians with the most patients in the patient group(s) targeted for screening, as determined by each study site. The clinicians identified in this way were sometimes no longer clinic employees, worked limited hours, split time between multiple clinics, or had other commitments. Champions were sometimes reluctant to recruit selected clinicians because they felt that the individual clinician was overwhelmed (particularly during the COVID-19 pandemic) or struggled with EHR workflows. Some clinics conducted social risk screening only in behavioral health encounters, and the champion felt uncomfortable asking primary care clinicians to participate. Although we attempted to remain consistent in selection criteria, recruitment was ultimately the clinic champion’s decision.
Data Extraction and Analysis
Card response data were extracted from the EHR. We used SQL Server Management Studio (Microsoft) and SAS (SAS Institute) to extract, clean, and reformat the data for analyses. Several challenges were encountered and remedied via these processes. In many cases, issues identified during postwedge data extraction were immediately addressed by the Epic developer and thus only affected data extraction for that wedge.
Issues Encountered and Addressed
Issue: Multiple responses for the same patient. Cards were triggered for the first 2 closed encounters of the day for each clinician. When a given patient had multiple visits that fell within this window during the 3-week card study, >1 request was triggered.
Fix: We modified the build to prevent >1 card request for the same patient within a 30-day timeframe.
Issue: Duplicate responses for the same encounter were recorded in the database. As described above, pre- or post-encounter EHR documentation could trigger additional card study prompts. In some instances, clinicians responded to >1 survey for the same encounter.
Fix: Data from the same encounter were manually merged for analysis.
Issue: Assigning responses to appropriate clinicians and wedges. Issues previously described led to the following data anomalies: prompts delivered after a given clinic’s card study was completed, and clinician responses to a card study prompt while logged into a clinic not participating in the study.
Fix: Analysts examined data by clinician, clinic, and date ranges to attribute responses to the appropriate wedge. If it was clear that the clinician was part of a participating clinic but used the wrong login to complete the card, responses to that card were manually reassigned to the correct site. A small number of records that were inappropriately collected were removed before analysis.
Issue: The structure of the card surveys, based on Epic SmartText and SmartLists, led to data quality challenges. Clinicians could enter >1 response option for a given question and provide free-text responses. They could also overwrite the prepopulated SmartList response with their own text. Some clinicians made extensive use of the “Other: free text” options and provided more detailed information than expected; although these responses offered a more nuanced view of the decision-making process, they were difficult and time consuming to aggregate and summarize in a meaningful way.
Fix: Analysts conducted substantial data cleaning and recoding to concatenate relevant multiresponse answers and also cross-checked responses with other available data, as appropriate. In some instances, it was not clear what clinicians meant by their free-text responses or we could not reconcile their responses with other EHR data; in these cases, the responses were removed from quantitative analyses but were noted for qualitative review.
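A condensed sketch of this analysis-side cleaning is shown below in Python (pandas). The file name, column names, and wedge date windows are hypothetical placeholders; the actual attribution also cross-checked clinician and clinic identifiers against EHR data, and duplicate encounters were merged manually rather than dropped.

import pandas as pd

# Hypothetical long-format extract: one row per encounter, question, and
# selected response option.
raw = pd.read_csv("card_responses.csv", parse_dates=["response_date"])

# 1. Concatenate multiple selected options for the same question on the
#    same card into a single delimited answer.
cards = (raw.groupby(["encounter_id", "clinic_id", "clinician_id",
                      "response_date", "question"], as_index=False)
            .agg(answer=("response", lambda s: "; ".join(s.dropna().astype(str)))))

# 2. If a clinician answered more than one survey for the same encounter,
#    keep the earliest set of answers (a simplification of the manual merge).
cards = (cards.sort_values("response_date")
              .drop_duplicates(subset=["encounter_id", "question"], keep="first"))

# 3. Attribute each response to a study wedge by date (placeholder windows).
wedge_windows = [(1, "2019-09-01", "2019-10-31"),
                 (2, "2020-04-01", "2020-05-31")]

def assign_wedge(date):
    for wedge, start, end in wedge_windows:
        if pd.Timestamp(start) <= date <= pd.Timestamp(end):
            return wedge
    return None   # outside every window: flag for manual review or exclusion

cards["wedge"] = cards["response_date"].map(assign_wedge)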
Response Rates
A total of 26 clinics participated in the parent study. Five of those clinics chose not to participate in the card study, citing clinician burden, and 2 were able to recruit only 1 clinician. Ultimately, 40 clinicians at 21 clinics participated in the card study. Some clinicians worked part-time or took time off during the 3-week block; card requests were not sent on the days clinicians were not in the clinic. The final response rate was 79% (600/760 card requests).
DISCUSSION
We created the present EHR-embedded card study to decrease the financial and logistical challenges inherent to paper-based approaches, while simultaneously decreasing complexity for clinicians and clinics and enhancing our ability to conduct rich analyses. We met these goals, and the substantial logistic and analytic benefits outweighed the initial start-up costs, as described below.
Unlike paper card studies, in which physical materials and related postage and mileage can incur considerable expense, the only costs for the EHR-embedded card study were staff time spent on design, programming, and data extraction and analysis, plus participant incentives. In this case, full-time equivalent costs were considerable because the programmer and data analysts were learning and iterating as the card study progressed. The programmer spent approximately 25 hours conceptualizing and programming the card requests, and analysts spent approximately 30 hours on data extraction, formatting, and linking survey results to EHR data.
These full-time equivalent costs were financially offset by study time not spent managing the logistics of survey printing, distribution, tracking, collection, and data entry, and the electronic approach also streamlined the process for busy clinic staff. Future EHR-embedded card studies could replicate many of the same processes and automate some of the data extraction and cleaning, which should substantially decrease labor costs. EHR-embedded card studies might also be cost effective at scale because programming costs remain comparable regardless of the number of card requests.
Embedding the card study in the EHR also simplified the process for respondents. Clinicians were not required to remember when they were expected to complete a card because the EHR provided prompts. The electronic format facilitated skip patterns tailored to the individual respondent. In addition, the ability to combine brief survey responses with relevant clinical and demographic data from the EHR allowed for deeper analysis. The response rate of 79% is toward the high end of health care clinician survey response rates in the United States, which range from 60% to 83%,11,12 even though one-half of the data collection occurred during a global pandemic.
We note some limitations to this approach. Recruiting clinicians on the basis of number of (expected) completed social risk screenings on their patient panel might have led to a disproportionate level of highly engaged clinicians and thus possibly a response rate greater than would be expected in a more representative sample. In addition, the success of our approach was facilitated by the OCHIN environment; we had access to in-house expertise in EHR programming and data extraction, and all participating clinics used the same instance of the Epic EHR. Settings that lack this technical expertise or that require card study customization across multiple EHRs might have a different cost/benefit ratio.
Despite mention of the promise of EHR-embedded card studies a decade ago,1 most card studies continue to be conducted on paper.6,7,13 Paper-based card studies are usually anonymous,1,2,7 making it difficult to link clinician and patient information to the answers on the physical cards. Card studies that do collect demographic data take longer,3,14 increasing respondent burden. The use of EHR-embedded card studies addresses many of the challenges inherent to paper-based data collection and can yield quality data, rich analytic data sets, and relatively high response rates, presenting new opportunities to conduct effective point-of-care research.
Acknowledgments
The authors deeply appreciate the contributions of the participating study clinics and the individual clinicians who took part in the ASCEND card study. We would also like to thank Julianne Bava, Meg Bowen, Molly Krancari, Nadia Yosuf, and Christina Sheppler for their promotion and support of the card study, and Inga Gruß for her collaboration on all aspects of the card study and broader evaluation.
Footnotes
Conflicts of interest: authors report none.
Funding support: This work was supported by a grant from the National Institute of Diabetes and Digestive and Kidney Diseases (1R18DK114701).
Previous presentations: Portions of the content of this article were presented at the 47th North American Primary Care Research Group (NAPCRG) Annual Meeting, November 16-20, 2019, Toronto, Canada, and at the 12th Annual Conference on the Science of Dissemination and Implementation in Health, December 4-6, 2019, Arlington, Virginia.
- Received for publication July 7, 2021.
- Revision received December 6, 2021.
- Accepted for publication January 31, 2022.
- © 2022 Annals of Family Medicine, Inc.