Compliance & Governance Framing — Competence
What an interviewer or hiring manager expects you to know.
Core Knowledge
- The regulatory landscape. EU AI Act (Regulation 2024/1689 — risk-based classification: unacceptable/high-risk/limited-risk/minimal-risk; high-risk systems need conformity assessments, transparency, human oversight; GPAI rules for foundation model providers; penalties up to EUR 35M or 7% of global annual turnover; phased enforcement through Aug 2027). NIST AI RMF 1.0 (Govern/Map/Measure/Manage — voluntary but increasingly a procurement requirement; NIST AI 600-1 adds GenAI-specific guidance). Colorado AI Act SB 24-205 (first US state comprehensive AI law, effective Feb 2026 — requires “reasonable care” to protect against algorithmic discrimination). Know these well enough to explain to a non-technical stakeholder which applies to their AI system and what it requires.
- AI governance frameworks. ISO/IEC 42001:2023 (AI Management Systems — the ISO 27001 equivalent for AI, increasingly required in enterprise procurement). OECD AI Principles (inform policy worldwide — transparency, accountability, robustness, fairness). IEEE 7000-2021 (addressing ethical concerns in system design). Singapore’s Model AI Governance Framework (influential in APAC). Know that governance frameworks are converging around common themes: risk management, transparency, accountability, human oversight, and fairness — but implementations differ by jurisdiction.
- AI risk classification. The EU AI Act defines four risk levels with different obligations. A staff-level skill is classifying your own AI system: Is your chatbot “limited risk” (transparency obligation only — tell users they’re talking to AI) or “high risk” (full conformity assessment)? High-risk status triggers when AI is used in: employment/HR decisions, credit scoring, insurance, healthcare, education, law enforcement, or critical infrastructure. This classification determines the compliance burden — commonly estimated at $50K-$200K in additional cost for a high-risk designation.
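That classification exercise can be sketched as a simple decision helper. The domain list below mirrors the high-risk areas named above; the function name, tier labels, and logic are illustrative only, not a legal determination:

```python
# Sketch: map an AI use case to an EU AI Act risk tier.
# HIGH_RISK_DOMAINS mirrors the Annex III areas listed above;
# names and logic are illustrative, not legal advice.

HIGH_RISK_DOMAINS = {
    "employment", "credit_scoring", "insurance", "healthcare",
    "education", "law_enforcement", "critical_infrastructure",
}

def classify_eu_ai_act(domain: str, interacts_with_humans: bool) -> str:
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"          # full conformity assessment
    if interacts_with_humans:
        return "limited-risk"       # transparency obligation only
    return "minimal-risk"           # no specific obligations

print(classify_eu_ai_act("employment", True))   # high-risk
print(classify_eu_ai_act("marketing", True))    # limited-risk
```

In practice the real determination needs legal review, but a helper like this forces teams to record the inputs (domain, user interaction) that drive the answer.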
- Enterprise AI governance in practice. What enterprises actually implement: AI inventory/registry (catalog every AI system, its purpose, risk level, data sources, and owner), AI ethics review board (cross-functional committee that reviews high-risk deployments), model cards/system cards (documentation of model capabilities, limitations, intended use, and evaluation results), incident management (AI-specific incident response for model failures, bias discoveries, safety issues), and audit trail (immutable logs of AI decisions for regulatory review). Tools: Credo AI (governance platform), IBM AI FactSheets, custom governance wikis.
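The inventory/registry idea reduces to a record per system. The field and value names below are illustrative assumptions, not the schema of Credo AI or any other platform:

```python
# Sketch: one entry in an enterprise AI inventory/registry.
# All fields and values are illustrative.
from dataclasses import dataclass

@dataclass
class AIRegistryEntry:
    system_name: str
    purpose: str
    risk_level: str            # e.g. "high-risk" per EU AI Act classification
    data_sources: list[str]
    owner: str                 # accountable team or individual
    last_review: str           # ISO date of last governance review

entry = AIRegistryEntry(
    system_name="resume-screener",
    purpose="Rank inbound job applications",
    risk_level="high-risk",            # employment decisions
    data_sources=["ATS database"],
    owner="talent-platform-team",
    last_review="2025-01-15",
)
print(entry.risk_level)
```

The useful discipline is that every system must name an owner and a risk level; an entry that cannot fill those fields is itself a governance finding.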
- Data governance for AI. GDPR implications for training data and inference (Article 22 limits on solely automated decisions with legal or similarly significant effects, plus associated rights to explanation and human review; data minimization; purpose limitation). CCPA/CPRA (California — opt-out rights for automated decision-making). HIPAA (PHI in LLM prompts/outputs requires BAAs, encryption, access controls). Know the difference between model training (consent/licensing issues) and inference (real-time data handling). Know that “we don’t train on customer data” is necessary but not sufficient — inference-time data handling also has compliance obligations.
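One concrete piece of inference-time handling is redacting PII before prompts are logged. The regexes below are illustrative stand-ins for a real PII-detection tool, and redaction alone does not satisfy HIPAA (encryption, access controls, and BAAs still apply):

```python
# Sketch: inference-time PII redaction before prompts are logged.
# Patterns are illustrative; production systems use dedicated
# PII-detection tooling plus encryption and access controls.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Patient john.doe@example.com, SSN 123-45-6789, reports chest pain."
print(redact(prompt))
```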
Expected Practical Skills
- Classify an AI system under the EU AI Act. Given an AI product description, determine its risk level, applicable obligations, and compliance timeline. Produce a one-page classification memo.
- Build a NIST AI RMF mapping. Map your AI system’s existing controls to the NIST AI RMF categories (Govern/Map/Measure/Manage). Identify gaps. Produce a compliance matrix that procurement teams can review.
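The RMF mapping exercise boils down to a matrix keyed by the four functions. The function names are from the RMF itself; the controls and the gap shown are hypothetical:

```python
# Sketch: map existing controls to NIST AI RMF functions and flag gaps.
# Function names come from the RMF; controls listed are hypothetical.

controls = {
    "Govern":  ["AI policy approved", "ethics review board chartered"],
    "Map":     ["AI inventory maintained", "impact assessment template"],
    "Measure": [],   # gap: no bias/robustness evaluation in place
    "Manage":  ["incident response runbook"],
}

for function, implemented in controls.items():
    status = ", ".join(implemented) if implemented else "GAP: no controls"
    print(f"{function:8} | {status}")
```

Even this crude form is useful in procurement reviews: the empty list makes the remediation backlog explicit.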
- Write a model card / system card. Document an AI system’s: intended use, out-of-scope uses, training data description, evaluation results, ethical considerations, limitations, and recommended monitoring. Follow the format from Mitchell et al. (2019) “Model Cards for Model Reporting.”
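A minimal skeleton along those section headings might look like the following; every value is made up for illustration:

```python
# Sketch: minimal model card following the section structure
# described above (per Mitchell et al. 2019). All values are
# illustrative placeholders.

model_card = {
    "Intended Use": "Triage inbound support tickets by topic.",
    "Out-of-Scope Uses": "Employment, credit, or medical decisions.",
    "Training Data": "500K de-identified historical tickets (2020-2024).",
    "Evaluation Results": "Macro-F1 0.87 on a held-out 2024 ticket set.",
    "Ethical Considerations": "Topic labels reviewed for demographic skew.",
    "Limitations": "English only; degrades on tickets under 10 words.",
    "Recommended Monitoring": "Weekly drift check on topic distribution.",
}

for section, text in model_card.items():
    print(f"## {section}\n{text}\n")
```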
- Conduct an AI impact assessment. Evaluate potential harms: who could be affected by this AI system? What are the failure modes? What are the fairness implications? What data is used and how? What human oversight exists? Produce a risk register with mitigation strategies.
- Present compliance requirements to engineering teams. Translate regulatory requirements into engineering tasks: “EU AI Act Article 14 requires human oversight” becomes “build a human review queue for decisions below a confidence threshold, with override capability and audit logging.”
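That translation can be sketched directly. The threshold value, record fields, and function name here are assumptions for illustration, not a prescribed Article 14 implementation:

```python
# Sketch: route low-confidence decisions to a human review queue
# with an audit log, as one way to implement the oversight task
# above. Threshold and record fields are illustrative.
from datetime import datetime, timezone

REVIEW_THRESHOLD = 0.85
audit_log: list[dict] = []
review_queue: list[dict] = []

def decide(item_id: str, prediction: str, confidence: float) -> str:
    record = {
        "item": item_id, "prediction": prediction,
        "confidence": confidence,
        "ts": datetime.now(timezone.utc).isoformat(),
    }
    if confidence < REVIEW_THRESHOLD:
        record["route"] = "human_review"   # reviewer can override here
        review_queue.append(record)
    else:
        record["route"] = "auto"
    audit_log.append(record)               # append-only; immutable store in production
    return record["route"]

print(decide("loan-123", "approve", 0.97))  # auto
print(decide("loan-124", "deny", 0.62))     # human_review
```

Note that every decision is logged regardless of route; the audit trail requirement applies to automated decisions too, not only the ones a human reviewed.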
Interview-Ready Explanations
- “Walk me through how you’d assess the compliance requirements for a new AI product.” Identify jurisdictions (where are users? EU triggers the AI Act, California triggers CPRA, healthcare triggers HIPAA). Classify the risk level under the EU AI Act. Map to the NIST AI RMF. Check sector-specific requirements (financial services: Fed SR 11-7 model risk guidance, adopted by the OCC as Bulletin 2011-12; healthcare: FDA AI/ML guidance; employment: EEOC AI guidance). Produce a compliance matrix: requirement → current state → gap → remediation plan → cost estimate. Present to leadership with a prioritized roadmap.
- “How do you balance compliance requirements with shipping speed?” Compliance is not a blocker — it’s a design constraint, like security or performance. Build it in from the start: an AI impact assessment during design (2 hours, not 2 months) and compliance-by-design patterns (audit logging, human oversight hooks, documentation templates) that are part of the standard development workflow. The mistake is treating compliance as a late-stage gate review, which creates bottlenecks; embed compliance checkpoints throughout the development process instead.
- “What are the biggest compliance risks for companies deploying LLMs?” Unauthorized data in training (copyright, PII — see NYT v. OpenAI). Uncontrolled outputs in regulated domains (medical advice without disclaimers, financial recommendations without disclosures). Lack of audit trail (regulated industries need to explain why the AI said what it said). Algorithmic discrimination (EU AI Act and Colorado AI Act both target this). The “we didn’t know it was high-risk” problem — failure to classify AI systems against risk frameworks before deployment.
Related
- Guardrails & Safety — guardrails implement compliance requirements
- Use Case Qualification — qualification determines compliance burden
- Eval Frameworks — documented eval satisfies compliance evidence