The Human Supply Chain Behind AI and Why Agent Governance Is the New Safety Valve

I tested this: I asked three AI coding assistants to build a logging function. All three imported deprecated libraries, and one of those libraries had a known RCE flaw. None of the assistants flagged it.
November 2025: Coinbase Ireland was fined €21.5M for AML monitoring failures. As AI-generated code floods production, security teams can't keep up.
Here's what I learned testing AI governance tools for a week:
Models are getting cheaper. Code generation is getting faster. But the software supply chain underneath? Getting riskier.
Two problems collide:
- Security teams can't keep up with AI-generated code volume
- AI systems depend on supply chains with due-diligence and resilience risks
Bottom line: The model is a commodity. Governance isn't.
Models as commodities, code as floodwater
Foundation models are trending toward commodity economics — as costs collapse and performance converges, the differentiator shifts from "which model?" to "what do you build on top?"
If it is cheap to ask models for code, you will get a lot of code. Copilots, internal agents and code‑generation tools make it trivial to produce new services, scripts and glue logic across the organisation. Every generated function can pull in new dependencies, touch new APIs and extend your blast radius.
The economics guarantee an explosion of AI‑generated code and agentic behaviour; the question is how to keep it safe once it is running.
A software supply chain no human can audit
At the same time, the software supply chain is already beyond human scale. A review of 2024 CVE data shows 40,009 vulnerabilities published in a single year, up nearly 39% from 2023 — roughly 108 new CVEs every day. Many are in open‑source dependencies and transitive packages, precisely the libraries AI tools are happy to import without context.
Even if your AI assistant writes "perfect" code, the libraries it pulls in may hide serious vulnerabilities. Security and platform teams are already struggling to:
- Track which microservice is using which version of which library.
- Understand how AI‑generated changes alter runtime behaviour across complex systems.
Static scanning alone is not enough. What matters is how the AI‑generated code and agents behave in production: what they call, what data they touch, what loops they get stuck in.
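To make "behaviour in production" concrete, here is a minimal sketch of one runtime check: spotting an agent stuck repeating the same call. The trace format (a list of tool-name/argument tuples) and the threshold are illustrative assumptions, not any real gateway's schema:

```python
from collections import Counter

def detect_repeated_calls(trace, threshold=5):
    """Flag tool calls an agent repeats more than `threshold` times.

    `trace` is a list of (tool_name, arguments) tuples -- a simplified
    stand-in for whatever event format a production gateway emits.
    """
    counts = Counter(trace)
    return [call for call, n in counts.items() if n > threshold]

# An agent stuck retrying the same failing API call:
trace = [("http_get", "/orders/42")] * 7 + [("db_query", "SELECT 1")]
print(detect_repeated_calls(trace))  # → [('http_get', '/orders/42')]
```

A static scanner would see nothing wrong with either call; only the runtime repetition pattern reveals the stuck loop.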
Supply-chain due diligence: the human layer
Beyond software dependencies, AI systems rely on a less visible supply chain: data workers who label, annotate and moderate training data. This creates supply-chain resilience and due-diligence risks that regulators and investors now scrutinise.
The compliance and investor shift
The EU Corporate Sustainability Due Diligence Directive (CSDDD) requires large companies to address human‑rights impacts across their value chains, including AI training and data‑labelling subcontractors. ESG‑focused investors are asking about labour practices during due diligence.
Example: A 2023 investigation found data labellers paid $1.32–$2 per hour to review graphic content for a major AI system — creating both ethical and reputational risk for enterprises using those models.
Most enterprises lack the visibility to connect "we use this model" to "here is the supply-chain context behind it". The challenge is governance at scale: connecting technical usage to procurement policies and due-diligence standards in a way that satisfies regulators and investors.
Where Aqta fits: runtime governance as safety valve
Aqta provides a governance gateway that sits in front of your AI agents and model calls, making their behaviour observable, controllable and auditable.
Runtime visibility: Centralised traces showing which models, tools and APIs an agent uses — with latency, cost and policy outcomes. Detection of risky patterns like loops or calls into sensitive systems.
Policy enforcement: Guardrails that block, suppress or flag requests crossing risk thresholds. A single control plane across multiple model vendors.
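A guardrail of this kind can be sketched in a few lines. Everything below is illustrative — the thresholds, model names and verdict labels are my assumptions, not Aqta's actual configuration or API:

```python
from dataclasses import dataclass

@dataclass
class Request:
    model: str
    estimated_cost: float  # USD per call (illustrative)
    touches_pii: bool

# Hypothetical thresholds a platform team might configure.
MAX_COST = 0.50
ALLOWED_PII_MODELS = {"internal-llm"}

def evaluate(request: Request) -> str:
    """Return 'allow', 'flag', or 'block' for a single model call."""
    if request.touches_pii and request.model not in ALLOWED_PII_MODELS:
        return "block"  # PII may only go to approved models
    if request.estimated_cost > MAX_COST:
        return "flag"   # expensive calls get routed to human review
    return "allow"

print(evaluate(Request("gpt-x", 0.10, touches_pii=True)))        # → block
print(evaluate(Request("internal-llm", 0.90, touches_pii=True)))  # → flag
```

The point of a single control plane is that this policy runs once, at the gateway, regardless of which vendor sits behind the endpoint.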
This is how Aqta becomes a safety valve: as model prices fall and AI‑generated volume explodes, runtime controls scale where human review cannot.
From technical risk to ethical supply chains
The same governance layer that manages technical risk can also help organisations align AI use with their ethical and procurement standards.
Concretely, you can:
1. Tag traffic by provider, region and risk profile
- Associate each model or endpoint with metadata such as hosting region, data‑retention policy and an internal rating on labour / human‑rights due diligence.
- Surface that metadata alongside traces so compliance teams see not just what the agent did, but whose infrastructure and data work it relied on.
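The enrichment step above can be sketched with a hypothetical provider registry — all endpoint names, metadata fields and ratings here are invented for illustration:

```python
# Hypothetical registry mapping provider endpoints to supply-chain metadata.
PROVIDERS = {
    "api.vendor-a.example": {
        "hosting_region": "eu-west-1",
        "data_retention": "30d",
        "labour_rating": "A",  # internal due-diligence grade
    },
    "api.vendor-b.example": {
        "hosting_region": "us-east-1",
        "data_retention": "indefinite",
        "labour_rating": "C",
    },
}

def enrich_trace(event: dict) -> dict:
    """Attach provider metadata to a raw trace event."""
    meta = PROVIDERS.get(event["endpoint"], {})
    return {**event, "provider_meta": meta}

event = {"agent": "billing-bot", "endpoint": "api.vendor-b.example"}
print(enrich_trace(event)["provider_meta"]["labour_rating"])  # → C
```

With metadata attached at trace time, compliance teams query one enriched record instead of cross-referencing logs against procurement spreadsheets.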
2. Enforce policy on model and vendor choices
- Configure rules like "do not send production traffic to providers below our labour‑standards rating" or "flag usage of models that fail our data‑provenance thresholds".
- Generate audit trails that demonstrate CSDDD‑aligned due diligence and satisfy ESG requests from investors and customers.
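A rule like "do not send production traffic to providers below our labour-standards rating" might look like the following sketch, assuming an internal A–C rating where A is best (the field name and threshold are hypothetical):

```python
MIN_LABOUR_RATING = "B"  # illustrative bar; ratings ordered A > B > C

def allowed(provider_meta: dict) -> bool:
    """Permit production traffic only above the labour-standards bar."""
    # 'A' sorts before 'B' alphabetically, so a lower letter is a
    # better rating; unknown providers default to 'Z' and are blocked.
    return provider_meta.get("labour_rating", "Z") <= MIN_LABOUR_RATING

print(allowed({"labour_rating": "A"}))  # → True
print(allowed({"labour_rating": "C"}))  # → False
```

Every decision this function makes can be logged with its inputs, which is exactly the kind of audit trail a due-diligence request asks for.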
Aqta does not rewrite global labour markets. It does give enterprises the observability and control to apply their own standards consistently, tying AI system behaviour to both technical security and supply‑chain ethics.
My verdict
What works: Aqta catches the supply chain risks that traditional security tools miss. It's practical, not theoretical.
What doesn't: You still need to care about governance. Aqta makes it easier, not automatic.
Who should use it: Any company running AI agents in production. Especially fintech, healthcare, and regulated industries.
Bottom line: Models are commodities. Governance isn't. Aqta is building the infrastructure layer you'll need.
References
- Bank and equity research commentary. "Foundation Models Trending Towards Commodity Economics". 2024-2025.
- Central Bank of Ireland. "Coinbase Ireland €21.5M AML Penalty". November 2025.
- Jerry Gamblin. "2024 CVE Data Review". 5 January 2025.
- SOCRadar, Bitsight and others. "2024 Vulnerability Record: 40,000+ CVEs Published". 2024.
- TIME Magazine. "OpenAI Used Kenyan Workers on Less Than $2 Per Hour". January 2023.
- Business & Human Rights Resource Centre. "Labour Conditions for Data-Labelling Workers Supporting Major AI Systems". 2023-2024.
- European Union. "Corporate Sustainability Due Diligence Directive (Directive (EU) 2024/1760)". Official Journal of the European Union, 2024.

Anya Chueayen
Technical founder with full-stack AI infrastructure experience. Previously worked on integrity and governance at social media platforms, solving the messy edge cases between human behaviour and AI ethics at scale. Based in Dublin, Ireland — global perspective on AI regulation.
Related Articles
Enterprise AI Governance and The 2026 Compliance Deadline
With the EU AI Act high-risk obligations taking effect in August 2026, enterprises deploying AI agents need governance infrastructure today.
The 2026 AI Bracing and Why Governance Is the New Growth Metric
As AI valuations wobble and the EU AI Act bites, governance, auditability and compliance are becoming core growth metrics for AI teams.