Abstract
PURPOSE Information about social determinants of health (SDOH) is essential for primary care clinicians in the delivery of equitable, comprehensive care, as well as for program planning and resource allocation. SDOH are rarely captured consistently in clinical settings, however. Artificial intelligence (AI) could potentially fill these data gaps, but it needs to be designed collaboratively and thoughtfully. We report on a codesign process with primary care clinicians to understand how an AI tool could be developed, implemented, and used in practice.
METHODS We conducted semistructured, 50-minute workshops with a large urban family health team in Toronto, Ontario, Canada, soliciting their feedback on a proposed AI-based tool to derive patient SDOH from electronic health record data. An inductive thematic analysis was used to describe participants’ perspectives regarding the implementation and use of the proposed tool.
RESULTS Fifteen participants contributed across 4 workshops. Most patient SDOH information was not available or was difficult to find in their electronic health record. Discussions focused on 3 areas related to the implementation and use of an AI tool to derive social data: people, process, and technology. Participants recommended starting with 1 or 2 social determinants (income and housing were suggested as priorities) and emphasized the need for adequate resources, staff, and training materials. They noted many challenges, including how to discuss the use of AI with patients and how to confirm AI-identified social needs with patients.
CONCLUSIONS Our codesign experience provides guidance from end users on the appropriate and meaningful design and implementation of an AI-based tool for social data in primary care.
- primary care
- artificial intelligence
- codesign
- qualitative methods
- social factors in health and health care
- practice-based research
- vulnerable populations
INTRODUCTION
Artificial intelligence (AI) has increasingly become part of our society, including in health care. The use of AI in primary care, in particular, has the potential for widespread impact on patient care and clinician workload, as primary care is where the majority of patient visits occur within the Canadian health care system.1,2 A high-priority task for AI identified by primary care clinicians and patients during recent pan-Canadian consultations was to support automated charting, including the collection and verification of patient information in their electronic health record (EHR).3 This practice would help alleviate clinician burnout and free up both time and cognitive capacity for direct patient care.3 Despite technologic advances, however, successful implementation of AI-based tools into primary care practice remains challenging because of a variety of factors such as lack of system readiness, bias in data and AI algorithms, and the need for a better understanding of people as “technology enablers.”4 Codesigning AI tools with end users is an important strategy that leads to better acceptability and adoption of the tools in clinical settings.5 This process also ensures that AI is addressing an important and timely problem identified by primary care teams, while building trust and ensuring effective integration into clinical workflow.
This article describes a real-world example from the St Michael’s Hospital Academic Family Health Team (FHT), an urban interprofessional primary health care team who recently initiated a project to explore the development of an AI-based tool to derive information about their patients’ social determinants of health (SDOH). Patient SDOH data are necessary for the provision of comprehensive, personalized medicine, yet this information is often not available to health care teams.6 As part of a codesign process, the objective of this study was to understand the FHT’s preferences on the integration and presentation of a machine learning–based tool that could generate SDOH information about their patients from existing EHR data. This work is novel in Canada, as very few primary care practices have access to a reference set of SDOH data for their patients, as well as the technical infrastructure and expertise to conduct AI research.
METHODS
The FHT has distributed an SDOH questionnaire to their patients since 2013; however, information is still missing for nearly two-thirds of patients. The FHT wanted to explore whether AI could fill information gaps by deriving SDOH data from its EHR data. The Design Council’s Framework for Innovation7 guided this process to ensure the AI tool was relevant and useful for their needs. The first phase (discovery) included interviews with primary care clinicians and health system leaders in Ontario, Canada.8 Those findings informed the second phase (define) to develop a strategy for using machine learning to derive SDOH data from EHR data (work in progress). The third phase (develop), described in this article, consisted of codesign workshops with the FHT to gain a better understanding of their preferences for the AI tool and how it might work best in practice.
Study Sample
Participants were recruited through e-mail invitations circulated to all FHT clinicians, allied health care professionals, clinic staff, and medical residents. The FHT provides care across 5 practice sites in the downtown core of Toronto, Ontario, Canada’s largest city with an estimated population of more than 3 million people.9 It is part of a teaching network within the University of Toronto and serves approximately 50,000 patients, many of whom experience social and economic challenges.
Data Collection
We offered 4 workshops, each lasting 50 minutes, at the convenience of participants from May to June 2023; we held 3 in person at practice sites and conducted 1 virtually (Supplemental Appendix). The workshops were facilitated by a postdoctoral fellow embedded in the FHT with experience in primary care EHR data research (S.G.) and an FHT family medicine resident (S.L.).
The workshops began with an overview of the project, including a short tutorial on machine learning and details about ongoing work to test the use of AI to derive SDOH from EHR data. Participants briefly discussed what SDOH data were available and used for patient care. Next, a short, animated example was shown, demonstrating how the AI tool might function in a mock patient visit. This led into a brainstorming exercise that was guided by a sociotechnical systems framework for the application of AI in health care delivery10 and centered on 3 areas: (1) who are the people who should access and/or act on AI-derived social data, (2) what is the optimal process for integrating the tool into clinical workflow, and (3) what should the technology look like or achieve. Lastly, participants were asked about perceived barriers and facilitators when implementing this AI tool. We used a deliberative dialog technique to facilitate meaningful feedback that could be incorporated into the development of the AI tool.11,12 A demographic survey captured participants’ personal and practice characteristics (gender, age, language, role, years in practice). Written consent was obtained from each participant before the workshop and a $25 gift card was provided as appreciation for participants’ time.
Data Analysis
A voice-recording device was used to capture audio from in-person workshops and Zoom (Zoom Video Communications, Inc) was used to record the online workshop. The audio data were transcribed using Otter.ai software (Otter.ai Inc), and the transcript was reviewed alongside the audio by one of the workshop facilitators (S.G.) to verify the accuracy and edit as needed. An inductive thematic analysis13 was conducted to classify the discussions and ideas generated from the workshops. The data were categorized independently by 2 study team members (S.G., S.L.), then an iterative process was used to ensure alignment, where the analysts would meet to discuss, compare, and reach consensus on codes and themes.
Coding and analysis of the transcribed text was conducted using NVivo 12 version 12.6.0.959 (QSR International). Qualtrics (Silver Lake Technology Management, LLC) was used for the electronic demographic survey.
This study was reviewed and approved by the Unity Health Toronto Research Ethics Board (REB No. 22-036).
RESULTS
In total, 15 members of the St Michael’s Hospital Academic FHT participated across the 4 workshops. Of the 12 FHT members who completed the demographic survey, the large majority were female (75%) and aged from 30 to 44 years (67%) (Table 1).
How Do Participants Currently Access and Use SDOH Information?
Participants reported having access to some SDOH information in the personal history and/or Cumulative Patient Profile sections of the EHR, which is obtained by asking patients directly. SDOH data are also captured through the Health Equity Questionnaire6 and become part of the patient record. Participants reported, however, that patient social data were often inconsistent, difficult to search for, and incomplete for most patients. During patient intake interviews, clinicians generally asked about employment, education, social supports, immigration, and other determinants as appropriate. Most participants felt confident that gender identity and sexual orientation were kept up to date, but there was uncertainty about how frequently other SDOH were updated.
Participants spoke of the importance of having access to SDOH information for providing tailored and respectful patient care. As one clinician articulated, “You might not assume that they have a lot of, maybe, resourcefulness in terms of like … what kind of instructions you give them, how to proceed with the treatment or how to access other resources.”
There was a clear desire to have information about patients’ income, employment, and health insurance to inform prescribing decisions for those who may not have medication benefits (eg, choosing a lower-cost drug) or to provide free supplies, such as dressing changes for wound care. Knowledge of sexual orientation helped provide appropriate education, screening, and resources for sexually transmitted infections to their patients. Information on preferred spoken language was also seen as valuable to determine if interpreter services were required during visits, to prompt clinicians into giving shorter, simpler instructions to patients, or to allocate extra appointment time.
Implementation and Use of AI-Derived SDOH
As previously noted, our brainstorming exercise elicited participants’ preferences in 3 areas related to the implementation and use of an AI tool to derive social data: people, process, and technology. Preferences in these domains, which are based on a sociotechnical systems framework for the application of AI in health care delivery,10 are detailed below and summarized in Table 2.
People
Participants expressed various preferences on who should have access to AI-derived social data and who is responsible for acting on these data (Table 2). There was consensus that anyone in the patient’s direct “circle of care” or who already had access to the patient record should be able to view the AI-derived social data, including physicians, nurse practitioners, nurses, and social workers. There was uncertainty whether patients and/or clerical staff should have access; some participants worried the information could bias staff when communicating with patients, although others noted these data are necessary to address people by preferred pronouns and support them with transportation needs or logistics.
The most responsible provider (usually the family physician) was identified as the person to lead any action required if a major social need was flagged. Several participants argued that anyone involved with patient care should be responsible for acting but in varying capacities according to different clinical roles (eg, physician, nurse, social worker). The difficulties of deciding when and how to act on identified social needs were expressed by many participants. Further, participants were unclear if they had a legal obligation to act on a social need flagged by the tool, as this was viewed as “extra information.” Participants were open to providing access to AI-derived social data to personnel outside the care team for specific uses that would benefit patients (eg, quality improvement, research), but strongly opposed other private/third-party entities such as insurance companies, housing agencies, or other commercial organizations having access of any kind.
Process
Participants recommended a pilot implementation of the AI tool using a slow, scaled-back approach focused on 1 or 2 social determinants initially, accuracy testing by confirming the AI output with patients, anticipating and preparing guidance according to the AI output, and gathering feedback from clinicians (Table 2). All participating sites suggested starting with income and housing as 2 important and actionable determinants.
Several participants questioned why AI would be used rather than relying on self-report surveys, which would be easier to discuss with patients. It was agreed, however, that it was not feasible to collect SDOH information from all patients in the practice and a balance was needed between using AI and relying on patient self-report. There was recognition that because the AI output was unlikely to be 100% accurate, a validation process with patients was required. It was left undecided who would be responsible for this; one suggestion was to use the AI output as a “check-in” prompt, which could initiate a telephone call or follow-up with patients to ask about their current situation. Participants also felt that they would need to explain to patients why they were asking and what information in the record led them to this determination, although some participants were uneasy about how their patients would respond.
There was universal concern about the additional staff time and resources that this tool could impose, especially to confirm accuracy with patients and with respect to the new workflow created by the tool (eg, extra appointment time, additional visits). A tension existed between implementing a tool that would place a larger burden on busy clinicians and staff, and recognizing that this tool would likely improve patient care. Nevertheless, participants expressed an interest in trialing the tool and offered a variety of suggestions that would support its integration (Table 2).
Technology
As the workshop discussion turned to the technology itself (Table 2), participants agreed it was important to have the AI-derived social data presented simply and made available in the patient record to view and use as needed, either in the Cumulative Patient Profile (main summary of conditions) or with a custom SDOH button to display a summary of the AI output.
There were many examples provided of how the AI tool could be used in practice, such as producing simple summaries of pertinent patient information so that clinicians could act quickly instead of spending time scrolling through notes or potentially overlooking important information. Another suggestion was to have the tool monitor social circumstances over time to make note of improvements or declines. Participants also requested the ability to manually correct the AI output if it was found to be inaccurate after confirming with patients. All participants desired that the AI tool assist with actionable tasks, ideally matched with evidence-based, local solutions; however, this aim was acknowledged as being challenging and time consuming. For instance, participants described an ideal use case where the tool could automatically prompt a referral to or follow-up appointment with social work or income support, especially if this could free up time to manage other pressing medical concerns during the visit.
Participants also described several relevant non–patient-facing uses of the AI tool that would be desirable, as long as it would benefit patients; these uses included quality improvement, program planning, resource allocation, designing patient education sessions, streamlining workflow, prioritizing patients for initiatives such as wellness telephone calls, providing tailored information on how to access social supports (eg, tax assistance clinics), performing research, and conducting advocacy.
Barriers and Facilitators
The barriers and facilitators for implementing and using the AI tool reported by participants are summarized in Table 3. Many concerns centered around biases in the data and AI, as well as maintaining patient confidentiality and trust. There was considerable worry that the AI tool could perpetuate discrimination or create racial biases, which might affect clinical decisions or communication with patients. Participants also mentioned the selection bias that could occur if patients were required to provide explicit consent for use of the AI tool—for instance, patients who might benefit from additional social resources may be less willing to permit the use of AI on their data.
DISCUSSION
Findings in Context
The codesign workshops described here provided a guiding path for our continued development of an AI tool for social data and eventual integration into the FHT’s practice. This “last mile of implementation” into real-world clinical settings is arguably the most difficult and remains largely unsolved, even with AI systems that are highly accurate.14,15 Hesitation for implementing AI tools may stem from one of the predominant tensions that emerged from the workshop discussions, where AI was perceived as a time-saving tool that could alleviate clinician workload, but it was also anticipated to create additional burden on the practice. A number of questions still remain and will require further investigation, including workflow specifics, staffing considerations, and the patient verification process.
Participants in this study all mentioned concerns with data quality and the potential for bias; clinicians, as both data creators and users, are acutely aware of the possibilities and limitations of EHR data when used for secondary purposes. Given the current challenges with data quality and algorithms that are unlikely to perform perfectly, the ideal balance may require limiting the AI-derived SDOH data for aggregate, clinic-level purposes (eg, planning, resourcing) rather than using it for direct patient care. Patient self-report of SDOH should ideally continue, not only to support personalized care with individual-based solutions to social needs but also as a true reference standard for any secondary analyses in the future.
The challenges reported by participants are consistent with those in other studies on AI use in primary care, such as system readiness; ethical, legal, and social implications; and balancing the value of adopting AI against the large time and staff commitment.4,5 Broad strategies to ameliorate these concerns include codesign/cocreation, iterative implementation, and continual evaluation.4,5 More specifically, participants indicated their preference to start slowly with a small pilot project; to ensure regular communication between staff and researchers; to better define the workflow once a social need was identified; and to understand and address all resourcing implications.
Finally, participants expressed uncertainty about how to talk to patients about the use and outcomes of AI applied to their data and what a feasible consent model might look like. A recent review emphasized the necessity of moving machine learning models from ones merely being interpretable to ones that provide justifiability.16 For instance, rather than using AI to indicate only whether a patient is experiencing a particular issue (eg, poverty), the tool should also provide the reasoning (eg, unemployment record found, key words identified in progress notes). This also highlights the importance of patient and clinician educational resources for AI and digital health, which could facilitate more informed conversations around the use of AI in clinical settings and possibly allay concerns about the technology itself.
Limitations
This study reflects the perspectives of one primary care group in Toronto, Ontario, Canada and may not be applicable to all primary care teams, health care settings, or regions. The patient-related barriers and facilitators were reported by participants based on their experiences and conversations with patients, and may not reflect concerns from patients themselves. Lastly, we likely had underrepresentation of social workers and other staff/allied health care professionals who are primarily responsible for addressing aspects of SDOH (eg, housing, food insecurity, transportation), which could affect the design of workflow processes.
Conclusions
The use of AI in health care settings is growing, with many possible applications and purposes. It is critical to engage and codesign with end users throughout the entire process of AI ideation and development. Our study highlights the preferences of one large urban academic family health team on an AI tool to derive social data for their patients. A future study is needed to formally evaluate the implementation of the AI tool once it is ready for deployment.
Footnotes
Conflicts of interest: authors report none.
Funding support: Financial support for this work was provided through the Canadian Institutes of Health Research (CIHR) Health System Impact Fellowship Program, in partnership with Unity Health Toronto.
Disclaimer: The views expressed are solely those of the authors and do not necessarily represent official views of the authors’ affiliated institutions or funders.
Previous presentations: Poster presentation, T-CAIREM AI in Medicine Conference; Oct 12-13, 2023; Toronto, Ontario, Canada; Poster presentation, Canadian Association for Health Services & Policy Research Annual Conference; May 29-31, 2023; Montreal, Québec, Canada; Oral presentation, University of Toronto Department of Family & Community Medicine Conference; May 11, 2023; Toronto, Ontario, Canada.
- Received for publication September 14, 2023.
- Revision received March 19, 2024.
- Accepted for publication March 28, 2024.
- © 2024 Annals of Family Medicine, Inc.