An arXiv paper maps a structural flaw in AI hiring governance: the four-layer vendor supply chains behind modern recruiting tools — data vendors, model developers, platform providers, and deploying organizations — create accountability voids the EU AI Act, NYC Local Law 144, and Colorado's AI Act cannot close.
Authors Gauri Sharma and Maryam Molamohammadi, in a paper posted April 24, conducted a literature review and regulatory analysis showing how bias in AI hiring systems emerges not from individual components but from their interactions. Their central example: a resume parser may show no measurable bias in isolation yet contribute to discriminatory outcomes once integrated with specific ranking algorithms and filtering thresholds. Because the modular architecture of modern HR tech chains together multiple proprietary third-party components, and vendors have no obligation to disclose configurations to deploying organizations, integrated bias evaluation is effectively blocked.
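To make the interaction effect concrete, consider a minimal simulation (not drawn from the paper; the groups, scores, and thresholds are all synthetic). The parser's output passes a component-level impact-ratio check at its own operating threshold, but a downstream top-5% ranking cut turns an unaudited distributional difference into a failing selection-rate ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic parser scores for two hypothetical groups: same median
# (so a component audit at the parser's own threshold sees no bias),
# but a narrower spread for group B -- a property that audit never measures.
parser_a = rng.normal(0.0, 1.0, n)
parser_b = rng.normal(0.0, 0.8, n)

def impact_ratio(pass_a, pass_b):
    # Four-fifths-rule style ratio of group selection rates.
    rate_a, rate_b = pass_a.mean(), pass_b.mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Component-level audit: the parser alone, at its 50% operating threshold.
t_parser = np.median(np.concatenate([parser_a, parser_b]))
print("parser-only impact ratio:",
      round(impact_ratio(parser_a > t_parser, parser_b > t_parser), 2))  # ~1.0

# Integrated pipeline: a downstream ranker keeps only the top 5% of scores.
t_rank = np.quantile(np.concatenate([parser_a, parser_b]), 0.95)
print("full-pipeline impact ratio:",
      round(impact_ratio(parser_a > t_rank, parser_b > t_rank), 2))  # ~0.45
```

The parser clears its audit because the check measures selection rates at the median, where the groups are identical; the pipeline fails the four-fifths rule because the aggressive tail cut interacts with a spread difference no component-level audit measured.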
The paper's second failure mode is an information asymmetry with direct legal consequences. Deploying organizations bear legal responsibility under current regulations without technical visibility into vendor-supplied algorithms, while vendors control implementations without meaningful disclosure requirements. Each actor in the chain can independently demonstrate compliance; the integrated system may still produce biased outcomes.
Under the EU AI Act, AI hiring tools fall into the high-risk category, the most heavily regulated tier permitted on the market, which makes deployers rather than upstream vendors the primary duty-bearers for fundamental rights impact assessments, conformity evaluations, and post-market monitoring. If bias originates in a vendor's proprietary model or training data, the deploying organization remains the regulatory target, with no guaranteed right of technical access to diagnose the root cause.
NYC Local Law 144, which mandates annual bias audits for automated employment decision tools, carries the same structural blind spot: audits are conducted on outputs, not on interaction effects between pipeline stages. A system that passes an output-level disparate impact audit may still embed bias at intermediate filtering steps, where small skews accumulate invisibly across the chain. Colorado's AI Act adds a third compliance surface with overlapping but non-identical requirements, compounding cross-jurisdictional exposure for enterprises operating across state and national lines.
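The compounding arithmetic behind that blind spot is easy to sketch. The per-stage figures below are invented, but the mechanism is general: when stage decisions are roughly independent, selection-rate ratios multiply across the chain, so stages that each clear the four-fifths rule can still fail it end to end:

```python
# Hypothetical selection rates (group A, group B) at three pipeline
# stages. Each stage alone clears the four-fifths rule (ratio >= 0.8).
stage_rates = [("parse", 0.50, 0.44), ("filter", 0.60, 0.53), ("rank", 0.40, 0.36)]

cum_a = cum_b = 1.0
for name, rate_a, rate_b in stage_rates:
    cum_a *= rate_a  # assumes independence of stage outcomes
    cum_b *= rate_b
    print(f"{name:6s} stage ratio={rate_b / rate_a:.2f}  "
          f"end-to-end ratio={cum_b / cum_a:.2f}")
```

By the second stage the end-to-end ratio has already dropped below 0.8 even though no single stage's audit would flag anything, which is exactly the accumulation the paper argues per-vendor, output-only audits cannot see.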
The paper proposes four remedies: system-level audits that evaluate components in integration rather than in isolation; vendor guidelines requiring disclosure across dependency chains; continuous post-deployment monitoring mechanisms; and standardized documentation requirements spanning the full supply chain. It stops short of specifying enforcement mechanisms or what thresholds constitute sufficient vendor disclosure — the hard questions regulators have not yet answered.
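As one sketch of what the documentation remedy could look like in practice (the record shape, field names, and vendor names here are invented, not specified by the paper or by any of the three laws), each vendor in the chain might ship a machine-readable disclosure alongside its component:

```python
from dataclasses import dataclass, field

@dataclass
class ComponentDisclosure:
    # Illustrative supply-chain disclosure record; all fields are hypothetical.
    vendor: str
    component: str                     # e.g. "resume parser", "ranking model"
    training_data_summary: str
    known_limitations: list[str] = field(default_factory=list)
    audited_thresholds: dict[str, float] = field(default_factory=dict)
    upstream_dependencies: list[str] = field(default_factory=list)

pipeline_manifest = [
    ComponentDisclosure(
        vendor="ParseCo", component="resume parser",
        training_data_summary="2018-2023 US resumes, self-reported fields",
        audited_thresholds={"impact_ratio_at_median": 1.00},
        upstream_dependencies=["DataCo candidate corpus"],
    ),
    ComponentDisclosure(
        vendor="RankCo", component="ranking model",
        training_data_summary="undisclosed",  # the gap the paper targets
        audited_thresholds={"impact_ratio_top_5pct": 0.46},
        upstream_dependencies=["ParseCo resume parser"],
    ),
]
```

A deploying organization could demand such a record from every upstream vendor and treat fields that come back undisclosed as a procurement blocker, turning the paper's documentation requirement into something contracts can enforce.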
The deeper problem is architectural: the EU AI Act and NYC Local Law 144 were designed with single-actor accountability in mind, and multi-party AI supply chains don't fit that model. Until regulators write rules specifically for distributed, multi-vendor development, procurement teams buying third-party AI for recruitment are accepting residual liability they cannot fully audit, that audit partners cannot fully test, and that vendors have no legal obligation to resolve.
Enterprise legal and compliance teams negotiating AI vendor contracts now have a published framework for the argument they should already be making: system-level audit rights, configuration disclosure, and shared liability clauses are not optional additions; they are the only mechanisms that close the gap current law leaves open.