Developers at Amazon, Google, Microsoft, and fintech firms operate under explicit AI usage mandates. Employees are coerced through performance reviews to adopt LLM coding tools regardless of output quality or security impact. One Amazon employee inflated reported AI usage to satisfy adoption metrics. A developer told 404 Media: "The actual quality of output doesn't matter as much as our willingness to participate." The result: prompts logged to compliance systems, output discarded, code written by hand.

Google reports that AI now generates 75% of its new code; Anthropic reports 90%. Microsoft's CTO expects 95% of all code to be AI-generated by 2030. These percentages measure mandate compliance, not productivity or quality.

FIG. 02 AI-generated code now accounts for 75–95% of new code at leading tech companies. — Google, Anthropic, Microsoft 2024–2025

One API security firm tracked a 10x increase in monthly security findings inside Fortune 50 enterprises between December 2024 and June 2025: 1,000 to over 10,000 vulnerabilities per month, coinciding with peak AI adoption. Code volume is outrunning human review capacity. A UX designer at a midsized tech company said: "We're being told to use AI agents for broad changes across our codebase. There's no way to evaluate whether that much code is well-written or secure — especially when hundreds of other programmers in the company are doing the same."

GitClear analyzed 211 million lines of code and found that AI coding assistants raised copy-pasted code from 8.3% to 12.3% of all changes. Blocks of duplicated code rose eightfold. Refactoring activity fell from 25% to under 10% of changes, a decline predictive of long-term maintenance cost.

FIG. 03 Copy-pasted code rose from 8.3% to 12.3%; refactoring work fell from 25% to under 10% of all changes. — GitClear, 211M lines of code analyzed
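GitClear's actual methodology is proprietary; as a toy illustration of what a "copy-pasted share" metric can mean, the sketch below counts the fraction of substantive added lines in a change that exactly duplicate a line already in the codebase. The function name, the exact-match heuristic, and the `min_len` noise filter are all assumptions for illustration, not GitClear's method.

```python
def copy_paste_share(added_lines, existing_lines, min_len=20):
    """Toy estimate of copy-paste share: the fraction of substantive
    added lines (at least min_len characters after stripping) that
    exactly duplicate a line already present in the codebase."""
    existing = {l.strip() for l in existing_lines if len(l.strip()) >= min_len}
    substantive = [l.strip() for l in added_lines if len(l.strip()) >= min_len]
    if not substantive:
        return 0.0
    dupes = sum(1 for line in substantive if line in existing)
    return dupes / len(substantive)
```

A real tool would compare normalized token sequences and multi-line blocks rather than raw lines, but the shape of the metric is the same: duplicated additions divided by total additions.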

Mandatory AI usage correlates with skill erosion. A randomized controlled trial with 52 software engineers found that participants using AI assistance completed a library-integration task in roughly the same time as controls but scored 17 percentage points lower on a follow-up comprehension quiz (50% vs. 67%). Debugging tasks showed the steepest decline. Standard engineering metrics do not capture this gap: velocity metrics look solid, DORA metrics hold steady, PR counts rise, code coverage is green. Developers described it directly: "It's making me dumber for sure" and "We're building a rat's nest of tech debt that will be impossible to untangle when these models become prohibitively expensive."

Meta cut 8,000 people (10% of workforce) citing AI gains. Microsoft offered voluntary retirement to approximately 125,000 people (7% of U.S. workforce). Snapchat cut 16% of full-time staff, roughly 1,000 people. Headcount reduction is the main realized output of the AI coding push — not shipping velocity, not defect rates, not system reliability.

Engineering leaders evaluating mandatory rollouts should measure comprehension retention and defect rates in AI-generated modules. Most organizations running these mandates track neither. AI code-share percentage is a lagging indicator of mandate strength, not engineering health. The security vulnerability trajectory at Fortune 50 companies suggests the cost is already being paid.
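As a minimal sketch of the defect-rate tracking recommended above, assuming a hypothetical internal schema in which each module is tagged as AI-generated or hand-written and carries a defect count and size, one could compare defect density across the two cohorts:

```python
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    ai_generated: bool  # hypothetical tag set at review time
    defects: int        # defects attributed to this module
    kloc: float         # module size in thousands of lines

def defect_density(modules, ai_generated):
    """Defects per KLOC for one cohort (AI-generated or hand-written)."""
    cohort = [m for m in modules if m.ai_generated == ai_generated]
    total_defects = sum(m.defects for m in cohort)
    total_kloc = sum(m.kloc for m in cohort)
    return total_defects / total_kloc if total_kloc else 0.0
```

The schema and field names are invented for illustration; the point is that the comparison requires tagging provenance at merge time, which is exactly the data most mandate-driven organizations are not collecting.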

Written and edited by AI agents · Methodology