The Human Supply Chain Behind AI – And Why Agent Governance Is the New Safety Valve
TL;DR
As AI models become cheap commodities and generated code explodes, enterprises face dual supply‑chain risks: unmanageable software vulnerabilities (over 40,000 CVEs published in 2024 alone) and invisible labour conditions in AI training pipelines. Runtime governance layers like Aqta act as a safety valve — giving teams visibility and control over agent behaviour before it breaches policy, regulation or ethical standards.

In November 2025, a European fintech's AI coding assistant imported a deprecated logging library with a known remote‑code‑execution flaw. The code review passed. The code shipped to production. Three weeks later, the breach made the front page of the Financial Times.
AI is getting cheaper, faster and more capable — but the human and software supply chains underneath it are getting riskier. As models become commoditised and enterprises flood their stacks with AI‑generated code and agents, two realities collide:
- No security team can keep up with the volume of new vulnerabilities in the software supply chain.
- AI still depends on vast amounts of invisible human labour — often low‑paid, precarious data work that raises real human‑rights questions.
This is the world Aqta is being built for: one where the model is a commodity, but governance is not.
Models as commodities, code as floodwater
In late 2025, bank and equity analysts started saying the quiet part out loud: foundation models are on their way to becoming commodities as competition intensifies and prices fall. As their costs collapse and performance converges, the differentiator shifts up‑stack — from "which model?" to "what do you build on top?"
For software teams, that has one clear implication:
If it is cheap to ask models for code, you will get a lot of code. Copilots, internal agents and code‑generation tools make it trivial to produce new services, scripts and glue logic across the organisation.
But every generated function can pull in new dependencies, touch new APIs and extend your blast radius, even if the model writes "perfect" code by static‑analysis standards.
The economics guarantee an explosion of AI‑generated code and agentic behaviour; the question is how to keep it safe once it is running.
A software supply chain no human can audit
At the same time, the software supply chain is already beyond human scale. A 2024 CVE review shows 40,009 published vulnerabilities in a single year, up nearly 39% from 2023 — roughly 108 new CVEs every day. Many are in open‑source dependencies and transitive packages, precisely the libraries AI tools are happy to import without context.
Even if your AI assistant writes "perfect" code, the libraries it pulls in may hide serious vulnerabilities. Security and platform teams are already struggling to:
- Track which microservice is using which version of which library.
- Understand how AI‑generated changes alter runtime behaviour across complex systems.
Static scanning alone is not enough. What matters is how the AI‑generated code and agents behave in production: what they call, what data they touch, what loops they get stuck in.
The invisible human labour behind AI
There is a second, less visible supply chain to consider: the humans behind the algorithm. Investigations have documented how AI systems depend on large numbers of under‑recognised data workers — labellers and moderators who clean and annotate data, often in the Global South, for low pay and with limited protections.
Reports highlight risks such as:
- Low wages and precarious contracts for data‑enrichment work that underpins commercial AI systems.
- Psychological harm from repeated exposure to toxic content in moderation and reinforcement‑learning pipelines.
The compliance and investor shift
This is no longer just an ethics issue — it is becoming a compliance and investor risk. The EU Corporate Sustainability Due Diligence Directive (CSDDD) requires large companies to identify and address human‑rights and environmental impacts across their value chains, including AI training and data‑labelling subcontractors. ESG‑focused investors are increasingly asking about labour practices embedded in AI supply chains during due diligence.
By the numbers: a 2023 investigation found that outsourced data labellers in Kenya, working on toxicity filtering for a major AI system, were paid roughly $1.32–$2 per hour to review highly graphic content, with workers reporting lasting psychological effects.
Most enterprises do not have a clear line of sight from "we use this model" to "here is the labour and sourcing context behind it". Once again, the problem is one of governance at scale: how to connect technical usage of models and agents to ethical standards and procurement policies in a way that satisfies both regulators and investors.
Where Aqta fits: runtime governance as safety valve
Aqta is not trying to replace static security tooling or to fix labour conditions directly. Instead, it provides a governance gateway that sits in front of your AI agents and model calls, and makes their behaviour observable, controllable and auditable.
In practice, that means:
Runtime visibility over agents and AI‑generated code paths
- Centralised traces showing which models, tools and APIs an agent uses — with latency, cost and policy outcomes.
- Detection of risky patterns such as loops, abnormal tool usage or calls into sensitive systems that violate policy (a minimal sketch of a trace record and loop check follows below).
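To make that concrete, here is a minimal sketch of what a trace record and a crude loop check could look like. The field names, the 20-step window and the repetition threshold are illustrative assumptions, not Aqta's actual schema or detection logic.

```python
# Illustrative sketch only: field names and the loop heuristic are assumptions,
# not Aqta's actual trace schema or detection logic.
from collections import Counter
from dataclasses import dataclass


@dataclass
class AgentTraceEvent:
    agent_id: str        # which agent produced this step
    model: str           # model / provider used for the call
    tool: str            # tool or API the agent invoked
    latency_ms: float    # observed latency for the call
    cost_usd: float      # metered cost of the call
    policy_outcome: str  # e.g. "allowed", "flagged", "blocked"


def looks_like_a_loop(events: list[AgentTraceEvent], threshold: int = 5) -> bool:
    """Flag a run when the same tool is invoked repeatedly in a short window."""
    recent = events[-20:]  # only consider the most recent steps
    tool_counts = Counter(event.tool for event in recent)
    return any(count >= threshold for count in tool_counts.values())
```

Even a heuristic this crude catches the stuck-agent pattern we describe in our loop-drift article; a production detector would also compare tool arguments and outputs, not just tool names.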
Policy enforcement across models and providers
- Guardrails that can block, suppress or flag requests when they cross defined risk thresholds (for example certain tools, cost ceilings or PII access); an example rule is sketched after this list.
- A single control plane, even when you mix "commodity" models from multiple vendors.
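As an illustration of the kind of guardrail this implies, here is a rough sketch; the blocked tools, cost ceiling and PII fields are made-up examples, not a real Aqta policy configuration.

```python
# Hypothetical guardrail sketch: the blocked tools, cost ceiling and PII fields
# are illustrative assumptions, not a real policy configuration.
BLOCKED_TOOLS = {"shell_exec", "payments_api"}   # tools agents may never call
COST_CEILING_USD = 5.00                          # per-run spend limit
PII_FIELDS = {"ssn", "iban", "passport_number"}  # fields that trigger review


def evaluate_request(tool: str, run_cost_usd: float, payload_fields: set[str]) -> str:
    """Return 'block', 'flag' or 'allow' for a single agent request."""
    if tool in BLOCKED_TOOLS:
        return "block"                   # hard stop: disallowed tool
    if run_cost_usd > COST_CEILING_USD:
        return "block"                   # hard stop: run has exceeded its budget
    if payload_fields & PII_FIELDS:
        return "flag"                    # allow, but surface for human review
    return "allow"
```

The specific rules matter less than where they run: at the gateway, before a request reaches any provider, regardless of whose model sits behind it.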
This is how Aqta becomes a safety valve: as model prices fall and AI‑generated volume explodes, you can still keep risk within tolerable bounds because runtime controls scale where human review cannot.
From technical risk to ethical supply chains
The same governance layer that manages technical risk can also help organisations align AI use with their ethical and procurement standards.
Concretely, you can:
Tag traffic by provider, region and risk profile
- Associate each model or endpoint with metadata such as hosting region, data‑retention policy and an internal rating on labour / human‑rights due diligence.
- Surface that metadata alongside traces so compliance teams see not just what the agent did, but whose infrastructure and data work it relied on (a sketch of such tagging follows below).
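Here is a rough sketch of what that tagging could look like; the endpoint names, regions, retention values and ratings are invented for illustration, not taken from Aqta's data model.

```python
# Hypothetical endpoint metadata: names, regions, retention values and the
# labour-due-diligence ratings are invented for illustration.
MODEL_ENDPOINT_METADATA = {
    "frontier-model-eu": {
        "hosting_region": "eu-west-1",
        "data_retention_days": 0,
        "labour_due_diligence_rating": "A",  # internal rating on an A-D scale
    },
    "budget-model-apac": {
        "hosting_region": "ap-southeast-2",
        "data_retention_days": 30,
        "labour_due_diligence_rating": "C",
    },
}


def enrich_trace(trace: dict, endpoint: str) -> dict:
    """Attach sourcing metadata to a trace so it appears next to the agent's behaviour."""
    return {**trace, "endpoint_metadata": MODEL_ENDPOINT_METADATA.get(endpoint, {})}
```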
Enforce policy on model and vendor choices
- Configure rules like "do not send production traffic to providers below our labour‑standards rating" or "flag usage of models that fail our data‑provenance thresholds"; one such rule is sketched after this list.
- Generate audit trails that demonstrate CSDDD‑aligned due diligence and satisfy ESG requests from investors and customers.
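Building on the metadata sketch above, a vendor rule can be as simple as a rating comparison at routing time; the A-D scale and the production threshold are again assumptions for illustration, not a documented policy format.

```python
# Hypothetical vendor rule: the A-D scale and the production threshold are
# assumptions for illustration, not a documented policy format.
RATING_ORDER = {"A": 0, "B": 1, "C": 2, "D": 3}  # lower number = better rating
MINIMUM_PRODUCTION_RATING = "B"                  # production traffic needs B or better


def vendor_allowed_for_production(endpoint_metadata: dict) -> bool:
    """Check an endpoint's labour-standards rating before routing production traffic."""
    rating = endpoint_metadata.get("labour_due_diligence_rating", "D")  # unknown vendors = worst case
    return RATING_ORDER.get(rating, RATING_ORDER["D"]) <= RATING_ORDER[MINIMUM_PRODUCTION_RATING]
```

Every allow or deny decision from a rule like this can be written to the audit trail, which is what turns an internal standard into evidence for due diligence and ESG reviews.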
Aqta does not rewrite global labour markets. It does give enterprises the observability and control to apply their own standards consistently, tying AI system behaviour to both technical security and supply‑chain ethics.
Building essential infrastructure for the AI era
As AI moves from experimentation to infrastructure, the question is no longer whether you will use models, but how you will govern their behaviour at scale. Model costs are collapsing, vulnerabilities are multiplying, and the human‑labour implications are too significant to ignore.
In that world, agent governance is not a "nice‑to‑have" add‑on. It is essential infrastructure: a runtime safety valve that keeps the AI supply chain — technical and human — within the bounds your organisation can accept.
That is the layer Aqta is building.
Sources & References
- Bank and equity‑research commentary in 2024–25 highlighting how foundation models are trending towards commodity economics as competition increases and prices fall.
- 2024 CVE data review showing 40,009 published CVEs (up 38.8% from 2023) and an average of 108 new vulnerabilities per day, with a large share in open‑source components.
- Coverage from SOCRadar, Bitsight and others noting that 2024 set a new record for published vulnerabilities, surpassing 40,000 CVEs and stressing security‑team capacity.
- TIME investigation on Kenyan data labellers working on toxicity filtering for OpenAI, reporting take‑home wages between roughly $1.32 and $2 per hour and significant psychological strain from exposure to graphic content.
- Business & Human Rights Resource Centre and follow‑up reporting summarising labour conditions for data‑labelling workers supporting major AI systems.
- EU Corporate Sustainability Due Diligence Directive (Directive (EU) 2024/1760), requiring large companies to conduct human‑rights and environmental due diligence across their value chains, including certain AI‑related services.
For more on Aqta's approach to AI governance, visit aqta.ai or reach out at hello@aqta.ai
About the Author
Anya Chueayen is the founder of Aqta, an AI governance platform for enterprise agents. Previously at TikTok, she scaled trust & safety systems and worked on monetisation integrity and AI infrastructure for global platforms.
Anya is based in Dublin, where she is building AI governance infrastructure with early design partners in fintech and healthcare, preparing for the EU AI Act's August 2026 deadline.
Published 5 January 2026
Related Articles
Why Enterprise AI Needs Governance Now: The 2026 Compliance Deadline
With the EU AI Act high‑risk obligations taking effect in August 2026, enterprises deploying AI agents need governance infrastructure today.
The 2026 AI Bracing: Why Governance is the New Growth Metric
As AI valuations wobble and the EU AI Act bites, governance, auditability and compliance are becoming core growth metrics for AI teams.
Loop Drift: Why Production AI Agents Get Stuck and How to Stop Them
Loop drift is a failure mode where AI agents transition from efficient behaviour into repetitive, self-reinforcing loops that burn money and quietly degrade reliability.
Coming Soon
EU AI Act Checklist for Agent Deployments
A practical guide to mapping AI Act requirements to runtime governance controls.