AI Use Case Qualification — Competence
What an interviewer or hiring manager expects you to know.
Core Knowledge
- Decomposition before technology. The fundamental insight from Agrawal, Gans & Goldfarb (“Prediction Machines”): AI reduces the cost of prediction. Qualification starts by breaking a business process into atomic prediction/decision tasks and asking which specific sub-task benefits from ML — not “can AI do this?” but “which prediction, if made cheaper, changes the economics of this process?” Use their AI Canvas: prediction task, judgment required, action taken, outcome, training data, input data, feedback loop.
- Feasibility-value scoring. Know McKinsey’s two-axis framework (feasibility × business value) and Deloitte’s Value-Complexity Matrix. Feasibility covers: data availability and quality, algorithm complexity, compute requirements, last-mile integration. Value covers: revenue uplift, cost reduction, risk mitigation, strategic alignment. A qualified use case scores well on both axes — high value but low feasibility is R&D, high feasibility but low value is waste.
- Data readiness assessment. Not “do we have data?” but: What’s the label quality? Distribution shift risk? How quickly does it go stale? Feedback loop latency? Survivorship bias? Apply the FAIR principles (Findable, Accessible, Interoperable, Reusable) and assess data quality dimensions: completeness, accuracy, consistency, timeliness, validity, uniqueness. Use Google’s MLOps maturity levels (0-2) to gauge operational readiness.
- Baseline economics. Calculate the cost of the current process without AI: labor hours, error rates, throughput, customer impact. Then model the AI-augmented process: build cost, run cost (compute + human-in-the-loop + monitoring + retraining), risk cost (model failure impact, compliance), and opportunity cost. If a 10% accuracy improvement saves $50K/year but the system costs $200K to build and $80K/year to maintain, it fails qualification.
- Regulatory context. Know the EU AI Act classification system (unacceptable, high-risk, limited-risk, minimal-risk) and its implications for use case viability. Know NIST AI RMF 1.0’s “Map” function for use case qualification. Know that certain use cases (hiring decisions, credit scoring, biometric surveillance) carry regulatory burdens that change the ROI calculation entirely. ISO/IEC 42001:2023 for AI management systems is becoming a procurement requirement.
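The feasibility-value screen above can be sketched in code. The 1-5 scoring scale and the 3.0 cutoff are illustrative assumptions, not part of McKinsey’s or Deloitte’s published frameworks; the dimension names follow the lists given above.

```python
# Sketch of a feasibility x value screen. Scale (1-5 per dimension)
# and the 3.0 cutoff are illustrative assumptions.

def score(dimensions: dict[str, int]) -> float:
    """Average a set of 1-5 dimension scores into one axis score."""
    return sum(dimensions.values()) / len(dimensions)

def qualify(feasibility: dict[str, int], value: dict[str, int],
            cutoff: float = 3.0) -> str:
    f, v = score(feasibility), score(value)
    if f >= cutoff and v >= cutoff:
        return "qualified"   # high value, high feasibility
    if v >= cutoff:
        return "R&D"         # valuable but not yet feasible
    if f >= cutoff:
        return "waste"       # feasible but not worth doing
    return "reject"

verdict = qualify(
    feasibility={"data": 4, "algorithm": 3, "compute": 4, "integration": 2},
    value={"revenue": 2, "cost": 3, "risk": 2, "strategy": 3},
)
# feasibility averages 3.25, value averages 2.5 -> "waste" quadrant
```

In practice the per-dimension scores would come from the data audit and the baseline-economics model rather than gut feel.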
Expected Practical Skills
- Run a structured qualification. Given a proposed AI use case, produce a one-page assessment: problem decomposition, feasibility score, value estimate, data readiness verdict, regulatory risk, and go/no-go recommendation with rationale.
- Calculate Total Cost of AI Ownership (TCAO). Build costs + run costs + risk costs + opportunity costs, modeled over 3 years with realistic assumptions about retraining frequency and model degradation.
- Conduct a data audit. Assess an organization’s data for a proposed use case: volume, quality, labeling, access constraints, privacy implications, and gap analysis.
- Present a disqualification. Saying “no” is harder than saying “yes.” Explain why a use case fails — business stakeholders need to understand the reasoning, not just the verdict. Frame alternatives (“not LLM, but rule-based automation would work here”).
- Compare AI approaches. For a qualified use case, evaluate: LLM vs. traditional ML vs. rule-based automation vs. human process improvement. Not every good use case needs an LLM.
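The TCAO calculation can be sketched using the illustrative numbers from the baseline-economics example above ($200K build, $80K/year run, $50K/year benefit). Risk and opportunity costs are zeroed here for brevity; a real model would estimate both, along with retraining frequency and degradation.

```python
# Minimal 3-year TCAO sketch. Numbers are the illustrative figures
# from the baseline-economics example; risk and opportunity costs
# default to zero for brevity.

def tcao(build: float, run_per_year: float, years: int = 3,
         risk_per_year: float = 0.0, opportunity: float = 0.0) -> float:
    """Build + run + risk + opportunity costs over the horizon."""
    return build + years * (run_per_year + risk_per_year) + opportunity

def net_benefit(annual_saving: float, years: int = 3, **costs) -> float:
    return years * annual_saving - tcao(years=years, **costs)

net = net_benefit(annual_saving=50_000, build=200_000, run_per_year=80_000)
# 3 * 50,000 - (200,000 + 3 * 80,000) = -290,000 -> fails qualification
print(f"3-year net: ${net:,.0f}")
```

A negative net over the payback horizon is exactly the kind of number that anchors a disqualification conversation.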
Interview-Ready Explanations
- “Walk me through how you’d evaluate whether an AI/LLM use case is viable.” Start with problem decomposition — what prediction or decision is being made? Establish baseline economics — what does it cost now? Assess data readiness (quality, volume, labels, feedback loops). Score feasibility and value. Model TCAO over 3 years. Check regulatory classification (EU AI Act risk level). Evaluate organizational readiness (sponsor, SME availability, change management). Close with a recommendation, a stated confidence level, and the key assumptions behind it.
- “How do you say no to a use case that an executive is excited about?” Lead with the business case, not the tech. “The current process costs $X. This AI solution would cost $Y to build and $Z/year to maintain, with a quality threshold that requires human review for 40% of outputs. The breakeven is 3.5 years, which exceeds our typical payback requirements. Here’s what I’d recommend instead…” Never say “it can’t be done” — say “the economics don’t support it at this scale/maturity.”
- “What are the most common reasons AI use cases fail in production?” The Impressive Demo Trap (works in controlled settings, fails on real data). The Data Doesn’t Exist Discovery (assumed labeled data, found PDFs). The Nobody Uses It Problem (70% of AI transformations fail at adoption per McKinsey). Maintenance Bankruptcy (model degrades, no retraining budget). Regulation Arrives (EU AI Act recategorizes the use case as high-risk post-deployment).
Related
- Compliance & Governance — regulatory context shapes qualification
- Cost Estimation — TCAO feeds use case ROI calculations
- Eval Frameworks — eval proves use case viability