This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Eastern England SDE Safe Models, Safe AI: Governance and Disclosure Testing
Citations: 0
Authors: 4
Year: 2026
Abstract
This report outlines how the Eastern England Secure Data Environment (EE‑SDE) has been preparing to support safe and trustworthy use of artificial intelligence (AI) and machine‑learning (ML) models trained on sensitive health data. As part of VISTA, one of DARE UK’s Early Adopter projects, the EE‑SDE deployed and evaluated new tools designed to help Trusted Research Environments (TREs) assess and manage privacy risks linked to AI projects. A key outcome of this work is the VISTA AI Risk Assessment Toolkit, which provides a structured way for TREs to review proposed AI projects, understand their data needs, and ensure that appropriate safeguards are in place from the outset. The toolkit works alongside SACRO‑ML, a new disclosure‑control technology that checks trained ML models for signs that they might reveal information about individuals in the training data. Together, these tools help reviewers make clearer, evidence‑based decisions about whether a model can safely be released. The project shows that AI safety checks can be integrated into real research workflows without disrupting analysis. It also highlights areas for future improvement, including clearer guidance, better onboarding materials, and continued collaboration across the TRE community to build consistent and trusted approaches to responsible AI research.
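The disclosure check described above, testing whether a trained model "might reveal information about individuals in the training data", is commonly operationalized as a membership-inference test. The following is a minimal illustrative sketch of that idea only; it does not show SACRO-ML's actual interface, and the toy 1-nearest-neighbour "model" and threshold are assumptions for demonstration. A memorizing model scores its own training records very differently from unseen records, and an attacker's ability to exploit that gap is a crude disclosure-risk signal.

```python
import random

random.seed(0)

# Toy data: two overlapping clusters of 2-D records.
def sample(n, centre):
    return [[random.gauss(c, 1.0) for c in centre] for _ in range(n)]

train = sample(50, [0.0, 0.0]) + sample(50, [2.0, 2.0])
holdout = sample(50, [0.0, 0.0]) + sample(50, [2.0, 2.0])

def nn_distance(memory, row):
    # 1-NN "model" that memorizes its training rows: the score is the
    # squared distance from a record to the nearest memorized row.
    return min(sum((a - b) ** 2 for a, b in zip(m, row)) for m in memory)

# Membership-inference test: records scoring at (near) zero are guessed
# to have been in the training set.
train_scores = [nn_distance(train, r) for r in train]      # exactly 0.0
holdout_scores = [nn_distance(train, r) for r in holdout]  # positive

threshold = 1e-9
guesses = [s <= threshold for s in train_scores + holdout_scores]
truth = [True] * len(train_scores) + [False] * len(holdout_scores)
accuracy = sum(g == t for g, t in zip(guesses, truth)) / len(truth)
print(f"membership-inference accuracy: {accuracy:.2f}")
# Accuracy near 1.0 flags a memorizing model; near 0.5 (random guessing)
# suggests the model does not single out its training records.
```

In a real TRE review the same principle is applied to the researcher's actual trained model, with the attack accuracy reported as evidence for or against safe release.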
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,436 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,311 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,753 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,523 citations