Docs
Guides
Github
Blog

> the open-source LLM red teaming framework_

Get Started
Delivered by
Confident AI
Detect 40+ LLM Vulnerabilities

Automatically scan for vulnerabilities such as bias, PII leakage, toxicity, etc.

SOTA Adersarial Attacks

Prompt injections, gray box, etc. to jailbreak your LLM

OWASP Top 10, NIST AI, etc.

OWASP Top 10 for LLMs, NIST AI, and so much more out of the box

DeepTeam

Open-source LLM red teaming framework. Apache 2.0 licensed.

Star us on GitHub

Product

  • Getting Started
  • Vulnerabilities
  • Adversarial Attacks
  • Guardrails
  • Frameworks

Very Useful Reads

  • Red Teaming AI Agents
  • Red Teaming RAG
  • Safety Frameworks
  • Building Custom Attacks
  • Deploying Guardrails

Ecosystem

  • Confident AI
  • DeepEval
© 2026 Confident AI Inc. Made with 🖤 and confidence.