Gittielabs · Research · 2026
AI Workforce Diligence Checklist
Based on "Augmentation, Not Replacement: Reading the Evidence on AI and White-Collar Work." Use before announcing, communicating, or approving AI-cited workforce decisions.
Before the announcement
- Have you mapped specific tasks — not job titles — to current AI capabilities, and identified which tasks require irreplaceable human judgment?
- Do you have task-level performance data from an operational pilot, not a vendor demo or isolated proof of concept?
- Has your AI ROI projection been validated against measured pilot results rather than vendor claims? Base rate: 95% of enterprise GenAI initiatives show zero P&L return (MIT NANDA).
- Do fewer than 80% of your current AI projects show zero P&L return? If not, fix the portfolio before adjusting headcount.
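The portfolio question above reduces to a simple share calculation. A minimal sketch, assuming a hypothetical pilot list with measured P&L impact per project (project names and figures are illustrative, not from the checklist):

```python
# Hypothetical pilot portfolio: measured P&L impact per project, in dollars.
projects = {
    "invoice-triage": 0,
    "support-copilot": 120_000,
    "contract-review": 0,
    "code-assist": 0,
    "forecast-bot": 0,
}

# Count projects with no positive P&L impact and compute their share.
zero_return = sum(1 for pnl in projects.values() if pnl <= 0)
share = zero_return / len(projects)

# 4 of 5 projects show zero return: at the 80% threshold, and close
# to the 95% enterprise base rate cited above (MIT NANDA).
print(f"{share:.0%} of projects return zero P&L")  # -> 80% of projects return zero P&L
```

If the share sits at or above the threshold, the checklist's advice is to fix returns first, not headcount.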
Before the cut
- Can you identify which specific roles have more than 50% of tasks genuinely automatable at current AI maturity? (Only 19% of US workers meet this threshold — Eloundou et al.)
- Have you modeled the survivor cost? In the typical aftermath, 74% of remaining employees report declining productivity, 77% see more errors, and 69% say quality has declined (Leadership IQ).
- Have you modeled the trust cost? Trust in company-provided GenAI fell 31% in two months following AI-cited layoffs (Edelman).
- Have you run the quit-multiplier math? Each layoff triggers 2.2 voluntary departures; one high-performer exit triggers ~18% cumulative peer attrition (Cornell ILR, LSE).
- Have you protected entry-level hiring — the pipeline that develops your future senior talent? (Stanford HAI already measures a ~13% relative decline in hiring of 22–25-year-olds in AI-exposed occupations.)
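The quit-multiplier math above can be run as back-of-envelope arithmetic. A minimal sketch: the 2.2 contagion multiplier and ~18% peer-attrition rate are the Cornell ILR and LSE figures cited in the checklist; the function names and team sizes are illustrative assumptions:

```python
def total_departures(planned_layoffs: int,
                     contagion_multiplier: float = 2.2) -> float:
    """Total expected headcount loss: the planned cuts plus the
    voluntary departures each layoff tends to trigger (the 2.2x
    contagion figure from Cornell ILR)."""
    return planned_layoffs * (1 + contagion_multiplier)

def peer_attrition(team_size: int, peer_rate: float = 0.18) -> float:
    """Expected peer departures after one high-performer exit
    (~18% cumulative peer attrition, per the LSE finding)."""
    return team_size * peer_rate

# A 100-person reduction implies roughly 220 additional voluntary exits:
print(round(total_departures(100)))  # -> 320
# Losing one high performer from a 40-person team costs ~7 more people:
print(round(peer_attrition(40)))     # -> 7
```

The point of the exercise: the modeled loss is roughly three times the planned cut, which is the number the headcount plan should actually be built around.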
Before you communicate
- Have you described specifically what AI is doing — rather than citing AI as a generic reason for cuts?
- Does your communication plan include a concrete, named training commitment for survivors?
- Is your board prepared for reversal risk? Gartner projects that 50% of AI-attributed layoffs will be quietly reversed by 2027, and the rehiring and backfill costs of a reversal will land in your operating model.
After deployment
- Are you measuring human + AI output quality — errors, time-to-complete, quality ratings — not just headcount and cost?
- Do you have a trust recovery plan if survivor productivity declines after the reduction?
- Have you set a 12-month review gate for all AI-attributed staffing decisions, with documented criteria for what counts as success?
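One way to make the 12-month review gate concrete is to write the success criteria down as an explicit pass/fail check over the output-quality metrics named above. A hypothetical sketch (the metric names, thresholds, and sample values are assumptions, not part of the checklist):

```python
from dataclasses import dataclass

@dataclass
class TeamMetrics:
    error_rate: float      # errors per 100 completed tasks
    hours_per_task: float  # mean time-to-complete
    quality_rating: float  # 1-5 reviewer score

def review_gate(before: TeamMetrics, after: TeamMetrics) -> bool:
    """12-month review: the AI-attributed staffing change counts as a
    success only if errors and cycle time did not worsen and quality
    held. Documented criteria, evaluated mechanically."""
    return (after.error_rate <= before.error_rate
            and after.hours_per_task <= before.hours_per_task
            and after.quality_rating >= before.quality_rating)

# Illustrative figures: faster cycle time but more errors and lower quality.
baseline = TeamMetrics(error_rate=4.0, hours_per_task=6.5, quality_rating=4.2)
year_one = TeamMetrics(error_rate=5.1, hours_per_task=5.8, quality_rating=3.9)
print(review_gate(baseline, year_one))  # -> False
```

Writing the gate down before the decision, rather than after, is what keeps the review from being graded on headcount savings alone.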