Use MITRE ATLAS to threat model your AI systems
Frameworks & ComplianceApr 04, 2026 • by Marcus Lenngren
If your organization is deploying LLMs, ML pipelines, or agentic AI, you need a threat model built for AI. MITRE ATLAS is the ATT&CK equivalent for AI systems.
What it covers:
- 16 tactics mapping the full AI attack lifecycle
- 85+ techniques specific to AI/ML systems
- 57 real-world case studies
Two tactics are unique to ATLAS (not in ATT&CK):
- AI Model Access: how attackers reach your model (API probing, direct inference, physical access)
- AI Attack Staging: preparation for AI-specific attacks (crafting adversarial inputs, poisoning training data)
Key techniques to know:
- Data Poisoning: injecting malicious data into training sets
- Prompt Injection: biasing LLMs to produce harmful outputs
- Model Inversion: extracting training data from a model
- AI Supply Chain Compromise: tampering with models or datasets before deployment
- LLM Jailbreaking: bypassing safety guardrails
Practical steps:
- Map your AI assets against ATLAS tactics
- Identify which techniques apply to your deployment model (API, on-prem, fine-tuned)
- Run AI-focused tabletop exercises using ATLAS case studies
- Integrate ATLAS into existing threat modeling alongside ATT&CK
Only about 50 techniques have been observed in the wild so far. The attack surface is growing faster than the threats. Get ahead of it.
Start here: atlas.mitre.org