This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Weaponising Generative AI Through Data Poisoning: Analysing Various Data Poisoning Attacks on Large Language Models (LLMs) and Their Countermeasures
Citations: 0
Authors: 3
Year: 2025
Abstract
Large Language Models (LLMs) and most modern AI models rely profoundly on the quantity, quality, and integrity of their training data, which ultimately determine the overall success of these LLMs or AI models. This enormous amount of training data is collec…