Abstract
Generative AI models, such as GPT-4o, have become increasingly prevalent in domains ranging from creative content generation to decision-making support. However, these models are susceptible to biases present in their training data, which can lead to biased or unfair outcomes. While various methods exist for detecting and mitigating bias in AI, standardized, accessible tools for systematically evaluating AI-generated content are lacking. Because biases in generative AI input, both implicit and explicit, produce biased generative AI output, it is important to analyze input as well. This poster presents a decision tree as a systematic tool to help users identify and evaluate potential biases in generative AI input and output.
Notes
References:
Babaei, G., Banks, D., Bosone, C., Giudici, P., & Shan, Y. (2024). Is ChatGPT More Biased Than You? Harvard Data Science Review, 6(3). https://doi.org/10.1162/99608f92.2781452d
FitzGerald, C., & Hurst, S. (2017). Implicit bias in healthcare professionals: A systematic review. BMC Medical Ethics, 18(1), 19. https://doi.org/10.1186/s12910-017-0179-8
Gross, N. (2023). What ChatGPT Tells Us about Gender: A Cautionary Tale about Performativity and Gender Biases in AI. Social Sciences, 12(8), 435. https://doi.org/10.3390/socsci12080435
Sigma Membership
Phi Nu
Type
Poster
Format Type
Text-based Document
Study Design/Type
Other
Research Approach
Other
Keywords:
Health Equity, Social Determinants of Health, Virtual Learning, Faculty Development, Emerging Technologies
Recommended Citation
Sullivan, Debra Henline and Frazer, Christine, "A Decision Tree for Evaluating Generative AI Output and Input for Bias" (2025). Biennial Convention (CONV). 12.
https://www.sigmarepository.org/convention/2025/posters_2025/12
Conference Name
48th Biennial Convention
Conference Host
Sigma Theta Tau International
Conference Location
Indianapolis, Indiana, USA
Conference Year
2025
Rights Holder
All rights reserved by the author(s) and/or publisher(s) listed in this item record unless relinquished in whole or part by a rights notation or a Creative Commons License present in this item record. All permission requests should be directed accordingly and not to the Sigma Repository. All submitting authors or publishers have affirmed that when using material in their work where they do not own copyright, they have obtained permission of the copyright holder prior to submission and the rights holder has been acknowledged as necessary.
Review Type
Abstract Review Only: Reviewed by Event Host
Acquisition
Proxy-submission
Date of Issue
2025-11-18
A Decision Tree for Evaluating Generative AI Output and Input for Bias
Description
Generative AI models, such as GPT-4, are prone to perpetuating and amplifying biases present in their training data, raising ethical concerns about fairness and equity in AI-generated output. This poster presents a comprehensive decision tree that serves as a systematic tool for evaluating generative AI input and output for bias. The decision tree guides users through the identification and assessment of various types of bias, including gender, racial, and cultural bias.
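This record does not publish the tree's actual nodes, so the following is a minimal, hypothetical Python sketch of how such a bias-evaluation decision tree could be represented and traversed. The Node class, the walk helper, and all question wording are illustrative assumptions, not the authors' published tool.

    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class Node:
        """One decision point (branch) or piece of guidance (leaf) in the tree."""
        text: str                     # question at a branch; guidance at a leaf
        yes: Optional["Node"] = None  # subtree followed on a "yes" answer
        no: Optional["Node"] = None   # subtree followed on a "no" answer


    # Hypothetical input-side fragment; the question wording and branch
    # order are invented for illustration only.
    INPUT_TREE = Node(
        text="Does the prompt assume a particular gender, race, or culture?",
        yes=Node(
            text="Is that assumption necessary for the task?",
            yes=Node(text="State the assumption explicitly and note its limits."),
            no=Node(text="Rewrite the prompt in neutral terms, then regenerate."),
        ),
        no=Node(text="Proceed, then apply the output-side checks to the result."),
    )


    def walk(node: Node) -> str:
        """Ask each question in turn until a leaf's guidance is reached."""
        while node.yes is not None and node.no is not None:
            answer = input(f"{node.text} [y/n] ").strip().lower()
            node = node.yes if answer.startswith("y") else node.no
        return node.text


    if __name__ == "__main__":
        print(walk(INPUT_TREE))

Representing each decision point as a small data object keeps the question text, branches, and leaf guidance in one place, so a tree like the poster describes could be extended with additional bias categories (for example, output-side checks) without changing the traversal logic.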