The Human Supply Chain Behind AI and Why Runtime Enforcement Is the Missing Layer

TL;DR: Three AI coding assistants, same task, all three imported deprecated libraries, one pulled in a known RCE vulnerability, none flagged it. The model is the easy part. The real risks in your AI stack live in the software and human supply chains beneath it. Runtime governance is the safety valve.

Three AI coding assistants, same task: build a logging function. All three imported deprecated libraries. One pulled in a dependency with a known RCE flaw. None flagged it.
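The pattern is easy to reconstruct. Here is a hypothetical Python version of that logging function, not the actual assistant output; the deprecated alias and the unsafe loader are exactly the class of thing that went unflagged (yaml.load() without a loader argument in PyYAML before 5.1 is a known code-execution flaw, CVE-2017-18342):

```python
# Hypothetical reconstruction of the AI-generated logging helper.
# Both problems below are real, documented issues; the function
# itself is illustrative.

import logging
import yaml  # pinned below 5.1, yaml.load() can execute arbitrary
             # code from untrusted input (CVE-2017-18342)

def setup_logger(config_path: str) -> logging.Logger:
    with open(config_path) as f:
        config = yaml.load(f)          # unsafe: no Loader argument
    logger = logging.getLogger(config.get("name", "app"))
    logger.setLevel(config.get("level", "INFO"))
    logger.warn("logger initialised")  # deprecated alias of .warning()
    return logger
```

The fix is one line each, yaml.safe_load(f) and logger.warning(...), plus a patched PyYAML pin. Nothing here is exotic; it is simply faster to generate than to review.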

November 2025: Coinbase Ireland fined €21.5M for AML monitoring failures, automated systems scaling past the humans meant to supervise them. AI-generated code is moving even faster than the humans responsible for it.

Models are getting cheaper. Code generation is getting faster. The software and human infrastructure underneath is getting riskier.

The model is the easy part. What runs beneath it is not.

Models as commodities, code as floodwater

Foundation models are trending toward commodity economics: as costs collapse and performance converges, the differentiator shifts from "which model?" to "what do you build on top?"

If it is cheap to ask models for code, you will get a lot of code. Copilots, internal agents and code‑generation tools make it trivial to produce new services, scripts and glue logic across the organisation. Every generated function can pull in new dependencies, touch new APIs and extend your blast radius.

The economics point to a rapid expansion of AI‑generated code and agentic behaviour; the question is how to keep it safe once it is running.

A software supply chain that outpaces manual review

At the same time, the software supply chain is already beyond human scale. A 2024 CVE review shows 40,009 published vulnerabilities in a single year, up nearly 39% from 2023, roughly 109 new CVEs every day. Many are in open‑source dependencies and transitive packages, precisely the libraries AI tools are happy to import without context.

Even if your AI assistant writes "perfect" code, the libraries it pulls in may hide serious vulnerabilities. Security and platform teams are already struggling to:

  • Track which microservice is using which version of which library (see the inventory sketch after this list).
  • Understand how AI‑generated changes alter runtime behaviour across complex systems.
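A first step is making that inventory machine-readable, service by service. A minimal sketch for Python services, using the standard-library importlib.metadata; the known-bad table is illustrative, a real deployment would consume an advisory feed:

```python
# Per-service dependency inventory, suitable for shipping to a central
# store. importlib.metadata is standard library (Python 3.8+); the
# KNOWN_BAD table is illustrative, not a real advisory feed.

from importlib.metadata import distributions

KNOWN_BAD = {
    ("pyyaml", "3.13"),  # example: a pre-5.1 release with unsafe yaml.load()
}

def inventory() -> dict[str, str]:
    """Return {package_name: version} for everything installed here."""
    return {d.metadata["Name"].lower(): d.version for d in distributions()}

def flag_known_bad(inv: dict[str, str]) -> list[str]:
    return [f"{name}=={ver}" for name, ver in inv.items()
            if (name, ver) in KNOWN_BAD]

if __name__ == "__main__":
    for finding in flag_known_bad(inventory()):
        print("VULNERABLE:", finding)
```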

Static scanning alone is not enough. What matters is how the AI‑generated code and agents behave in production: what they call, what data they touch, what loops they get stuck in.
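Concretely, a runtime check can be small. A sketch of one such guard, cutting off an agent that keeps repeating the same tool call; the class name and threshold are invented for the example:

```python
# Runtime loop guard for an agent: if the same (tool, args) pair
# repeats too often in one run, halt the run instead of letting it
# burn tokens. Threshold and names are illustrative.

from collections import Counter

class LoopGuard:
    def __init__(self, max_repeats: int = 5):
        self.max_repeats = max_repeats
        self.seen: Counter = Counter()

    def check(self, tool: str, args: tuple) -> None:
        """Call before each tool invocation; raises when a loop is detected."""
        self.seen[(tool, args)] += 1
        if self.seen[(tool, args)] > self.max_repeats:
            raise RuntimeError(
                f"loop guard tripped: {tool}{args} called "
                f"{self.seen[(tool, args)]} times")
```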

The people behind the model

The software supply chain is one problem. The human supply chain is another. Every major model was shaped by workers who labelled, annotated and moderated training data, often for poverty wages with no safety net.

The EU Corporate Sustainability Due Diligence Directive (CSDDD) requires large companies to address human‑rights impacts across their value chains, including AI training subcontractors.

A 2023 investigation found data labellers paid $1.32–$2 per hour to review graphic content for a major AI system. The companies deploying those models carry that risk whether they know it or not.

Most companies cannot trace from "we use this model" to "here is who built it and under what conditions." That gap is an ethical and reputational exposure, not just a procurement footnote.

Where runtime intelligence fits

AqtaCore sits in front of your AI agents and model calls. Every call is traced: which model, which tools, which APIs, at what cost, with what outcome. Risky patterns get flagged or cut before they compound.
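In code, that shape is a wrapper around every model call. A generic sketch with hypothetical names (traced_call, TraceRecord, emit), not AqtaCore's actual API:

```python
# Generic call-level tracing around an LLM call. All names here are
# hypothetical; the point is the shape: one structured record per call.

import time
from dataclasses import dataclass, asdict

@dataclass
class TraceRecord:
    model: str
    tool_calls: list          # which tools the response invoked
    latency_s: float
    cost_usd: float
    outcome: str              # "ok", "flagged", "blocked"

def traced_call(model: str, prompt: str, call_model, price_per_1k: float):
    start = time.monotonic()
    response = call_model(model=model, prompt=prompt)   # your client here
    record = TraceRecord(
        model=model,
        tool_calls=[t["name"] for t in response.get("tool_calls", [])],
        latency_s=time.monotonic() - start,
        cost_usd=response.get("total_tokens", 0) / 1000 * price_per_1k,
        outcome="ok",
    )
    emit(asdict(record))      # ship to your trace store
    return response

def emit(record: dict) -> None:
    print(record)             # stand-in for a real sink
```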

You can tag each model endpoint with its hosting region, data-retention policy and an internal rating on labour practices. Policy runs on every call, not in a quarterly audit. As AI-generated volume grows, that is the only thing that actually scales.
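What those tags and the per-call check could look like; the schema, ratings and thresholds are invented for the sketch:

```python
# Illustrative endpoint metadata and a per-call policy check.
# Schema, ratings and thresholds are made up for the example.

ENDPOINTS = {
    "model-eu":       {"region": "eu-west-1", "retention_days": 0,
                       "labour_rating": "A"},   # internal supply-chain rating
    "cheap-model-us": {"region": "us-east-1", "retention_days": 30,
                       "labour_rating": "C"},
}

def policy_allows(endpoint: str, data_class: str) -> bool:
    meta = ENDPOINTS[endpoint]
    if data_class == "personal" and not meta["region"].startswith("eu"):
        return False                    # keep personal data in-region
    if meta["labour_rating"] > "B":     # "C" sorts after "B"
        return False                    # below the labour-practice bar
    return True

assert policy_allows("model-eu", "personal")
assert not policy_allows("cheap-model-us", "internal")
```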

The model is a commodity. The infrastructure that makes it trustworthy at scale is not. That is what we are building.

References

  1. Bank and equity research commentary. "Foundation Models Trending Towards Commodity Economics". 2024–2025.
  2. Central Bank of Ireland. "Coinbase Ireland €21.5M AML Penalty". November 2025.
  3. Jerry Gamblin. "2024 CVE Data Review". 5 January 2025.
  4. SOCRadar, Bitsight and others. "2024 Vulnerability Record: 40,000+ CVEs Published". 2024.
  5. TIME Magazine. "OpenAI Used Kenyan Workers on Less Than $2 Per Hour". January 2023.
  6. Business & Human Rights Resource Centre. "Labour Conditions for Data-Labelling Workers Supporting Major AI Systems". 2023–2024.
  7. European Union. "Corporate Sustainability Due Diligence Directive (Directive (EU) 2024/1760)". Official Journal of the European Union, 2024.
Anya Chueayen

Founder of Aqta. Before this, I worked on integrity at social media platforms, the unglamorous side of AI where human behaviour, edge cases, and ethics collide at scale. That work convinced me that responsible AI needs infrastructure, not just good intentions. Based in Dublin, closely watching how regulation is reshaping what we build and how.
