OpenAlex

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Eastern England SDE Safe Models, Safe AI: Governance and Disclosure Testing

2026 · 0 Citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at publisher

0 Citations · 4 Authors · Year: 2026

Abstract

This report outlines how the Eastern England Secure Data Environment (EE‑SDE) has been preparing to support safe and trustworthy use of artificial intelligence (AI) and machine‑learning (ML) models trained on sensitive health data. As part of VISTA, one of DARE UK’s Early Adopter projects, the EE‑SDE deployed and evaluated new tools designed to help Trusted Research Environments (TREs) assess and manage privacy risks linked to AI projects. A key outcome of this work is the VISTA AI Risk Assessment Toolkit, which provides a structured way for TREs to review proposed AI projects, understand their data needs, and ensure that appropriate safeguards are in place from the outset. The toolkit works alongside SACRO‑ML, a new disclosure‑control technology that checks trained ML models for signs that they might reveal information about individuals in the training data. Together, these tools help reviewers make clearer, evidence‑based decisions about whether a model can safely be released. The project shows that AI safety checks can be integrated into real research workflows without disrupting analysis. It also highlights areas for future improvement, including clearer guidance, better onboarding materials, and continued collaboration across the TRE community to build consistent and trusted approaches to responsible AI research.
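To make the disclosure-testing idea concrete, the sketch below shows one plausible form of the check the abstract describes: a membership-inference style test that asks whether a trained model behaves differently on its training records than on unseen records. This is a minimal illustration written with scikit-learn, not SACRO-ML's actual API or method; the dataset, the true-label-confidence signal, and the 0.6 release threshold are assumptions chosen for demonstration.

# Illustrative sketch only: a simple membership-inference style check of the
# kind a disclosure-control tool such as SACRO-ML automates. The metric and
# threshold below are assumptions, not the toolkit's actual procedure.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Split records into "members" (used for training) and "non-members" (held out).
X, y = load_breast_cancer(return_X_y=True)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_mem, y_mem)

def true_label_confidence(clf, X, y):
    # The model's predicted probability for each record's true class.
    # Overfitted models tend to be systematically more confident on
    # records they were trained on, which is the leakage signal here.
    proba = clf.predict_proba(X)
    return proba[np.arange(len(y)), y]

conf_mem = true_label_confidence(model, X_mem, y_mem)
conf_non = true_label_confidence(model, X_non, y_non)

# AUC of distinguishing members from non-members using confidence alone:
# 0.5 means no signal; values well above 0.5 suggest disclosure risk.
labels = np.concatenate([np.ones_like(conf_mem), np.zeros_like(conf_non)])
scores = np.concatenate([conf_mem, conf_non])
mia_auc = roc_auc_score(labels, scores)

print(f"Membership-inference AUC: {mia_auc:.3f}")
if mia_auc > 0.6:  # hypothetical example threshold, not a SACRO-ML default
    print("Potential disclosure risk: confidence separates members from non-members.")

A check like this gives a TRE reviewer a quantitative signal to weigh before model release, which is the kind of evidence-based decision support the abstract attributes to the toolkit.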

Topics

Artificial Intelligence in Healthcare and Education · Privacy-Preserving Technologies in Data · Scientific Computing and Data Management