arrow_back Red-Teaming
school
Core Knowledge
open_in_newThe OWASP LLM Top 10 (v1.1, 2024). The standard attack taxonomy: LLM01 Prompt Injection (direct and indirect), LLM02 Insecure Output Handling (XSS via LLM output), LLM03 Training Data Poisoning,...
build
Expected Practical Skills
open_in_newRun an automated red-team scan. Configure and run Garak or Promptfoo redteam against an LLM application. Interpret results: which attacks succeeded, what's the success rate per category, which...
quiz
open_in_new
Read full fundamentals Interview-Ready Explanations
open_in_new"Walk me through how you'd red-team an LLM application." Start with scope: what system, what threats are in-scope (data exfiltration, harmful content, unauthorized actions), what's the risk context...