- Overview of OWASP Top 10 ML & LLM Security Checklist
- Understanding Attack Surfaces in AI Systems
- Adversarial Attacks
- ML01:2023 - Input Manipulation Attack
- ML08:2023 - Model Skewing
- ML07:2023 - Transfer Learning Attack
- ML09:2023 - Output Integrity Attack
Github Link: https://github.com/RihaMaheshwari/AIML-LLM-Security/
- Adversarial Attacks
- ML01:2023 - Input Manipulation Attack
- ML08:2023 - Model Skewing
- ML07:2023 - Transfer Learning Attack
- ML09:2023 - Output Integrity Attack
OWASP has categorized security risks into two distinct areas: Machine Learning (ML) models and Large Language Models (LLMs). While both fall under the broader AI umbrella, the attacks and vulnerabilities differ due to their respective technologies and applications. In this blog, we’ll break down the difference between ML and LLM attacks, go over the OWASP Top 10 for ML and LLMs.
Difference Between ML and LLM Attacks (OWASP)
Key Takeaways:
ML attacks primarily target the model’s training data and decision-making process.
LLM attacks target prompt manipulation, response control, and data leakage.
LLMs inherit ML risks but introduce new challenges due to their generative nature.
ML attacks primarily target the model’s training data and decision-making process.
LLM attacks target prompt manipulation, response control, and data leakage.
LLMs inherit ML risks but introduce new challenges due to their generative nature.
🔹 OWASP Top 10 Machine Learning (ML:2023) Security Risks
🔹 OWASP Top 10 Large Language Model (LLM:2023) Security Risks
🔹 OWASP Top 10 Large Language Model (LLM:2025) Security Risks
📌 Bonus: Attack Categories Mapped to OWASP
Data Attacks: ML02, ML03, ML04, LLM10
Model Attacks: ML05, ML06, ML07, ML08, ML09, ML10
Prompt Exploits: LLM01, LLM02, LLM07
API & Web Attacks: LLM05
Data Attacks: ML02, ML03, ML04, LLM10
Model Attacks: ML05, ML06, ML07, ML08, ML09, ML10
Prompt Exploits: LLM01, LLM02, LLM07
API & Web Attacks: LLM05
This list serves as an essential resource for security professionals to map threats quickly and effectively while conducting AI/ML security assessments. 🚀
With this guide, you now have a comprehensive understanding of the security risks facing ML and LLM models.
No comments:
Post a Comment