This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Geographic prompting and content fidelity in generative Artificial Intelligence: A multi-model study of demographics and imaging equipment in AI-generated videos and images of Canadian medical radiation technologists
Citations: 1
Authors: 6
Year: 2025
Abstract
BACKGROUND: As generative AI tools increasingly produce medical imagery and videos for education, marketing, and communication, concerns have arisen about the accuracy and equity of these representations. Existing research has identified demographic biases in AI-generated depictions of healthcare professionals, but little is known about their portrayal of Medical Radiation Technologists (MRTs), particularly in the Canadian context.
METHODS: This study evaluated 690 AI-generated outputs (600 images and 90 videos) created by eight leading text-to-image and text-to-video models using the prompt "Image [or video] of a Canadian Medical Radiation Technologist." Each image and video was assessed for demographic characteristics (gender, race/ethnicity, age, religious representation, visible disabilities) and the presence and accuracy of imaging equipment. These were compared to real-world demographic data on Canadian MRTs (n = 20,755).
RESULTS: Significant demographic discrepancies were observed between AI-generated content and real-world data. AI depictions included a higher proportion of visible minorities (as defined by Statistics Canada) (39% vs. 20.8%, p < 0.001) and males (41.4% vs. 21.2%, p < 0.001), while underrepresenting women (58.5% vs. 78.8%, p < 0.001). Age representation skewed younger than actual workforce demographics (p < 0.001). Equipment representation was inconsistent, with 66% of outputs showing CT/MRI and only 4.3% showing X-rays; 26% included inaccurate or fictional equipment.
CONCLUSION: Generative AI models frequently produce demographically and contextually inaccurate depictions of MRTs, misrepresenting workforce diversity and clinical tools. These inconsistencies pose risks for educational accuracy, public perception, and equity in professional representation. Improved model training and prompt sensitivity are needed to ensure reliable and inclusive AI-generated medical content.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,652 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,567 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,083 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,856 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations