The Aresty student(s) will join an ongoing project in my “lab” to document known areas of algorithmic bias, which are likely to grow as Artificial Intelligence (AI)-enabled tools continue to proliferate across healthcare settings. AI and predictive algorithms are transforming healthcare by enabling earlier disease detection, improved risk assessment, and better decision-making. The ability to analyze vast amounts of patient data allows healthcare providers to anticipate potential health issues and intervene proactively, improving patient outcomes and promoting more efficient resource allocation. However, the increasing use of AI and unregulated predictive algorithms can also harm certain patients.

In this ongoing project, my team and I acknowledge the substantial benefits of AI-enabled tools while highlighting the already recognized need for patient safeguards, including protection from algorithmic bias (Adams et al., 2024). We are exploring the details and feasibility of oversight and monitoring, especially because algorithmic bias in several health-related applications was discovered only long after the technology had been implemented (Gichoya et al., 2022; Park et al., 2021; Valdez, 2021; Vyas et al., 2020). Promoting “fair and equitable” AI therefore requires awareness of the potential for bias in AI-enabled tools (Baumgartner et al., 2023; Chen et al., 2023; El-Azab & Nong, 2023).

My team is currently exploring the feasibility of leveraging the experience and authority that already reside in Institutional Review Boards (IRBs) to help monitor AI bias. IRBs are already in place at academic medical centers to protect research participants, and their current role offers useful insights. Because IRBs are responsible for ensuring ethical appropriateness and legal compliance, and require researchers to describe and justify any risk in light of anticipated benefits, they can serve as a model for protecting patients. We are examining whether patients would be better protected from the harm of algorithmic bias if institutions required that AI-enabled tools used in their healthcare settings meet similar ethical and legal standards, with potential risks described and justified against anticipated benefits. Since IRBs exist to protect research participants, similar oversight should be implemented to protect patients, especially vulnerable patients. After all, in academic medical centers research participants are recruited from patient panels, so these same individuals deserve protection as patients comparable to the protection they are afforded as research participants.