Your organisation has just deployed a state-of-the-art large language model (LLM) to handle customer queries. It works beautifully. It's fast, accurate, and your team thinks the security box is ticked. But here's the uncomfortable truth: you have almost zero visibility into how that model was built, what data it was trained on, or whether it's been compromised at any point in its supply chain.
This is the AI supply chain problem, and it's silently becoming the biggest blind spot in enterprise cybersecurity—especially in Australia, where organisations are racing to adopt AI without the governance structures to understand their dependencies.
The AI Supply Chain Problem: More Complex Than You Think
Traditional software supply chain security is already challenging. But AI introduces a fundamentally new layer of complexity because models aren't just code—they're probabilistic systems trained on massive datasets, often using open-source components, third-party training infrastructure, and data pipelines you didn't build and may not fully understand.
When you use a third-party model, you're not just running someone else's code. You're running a statistical model that has learned patterns from training data you've never seen. That model could have learned harmful patterns. It could have been poisoned during training. Or worse, it could have been maliciously fine-tuned or tampered with after release.
Consider the attack surface:
- Model poisoning at source: An attacker gains access to the training data or training process and injects malicious patterns that cause the model to misbehave in specific scenarios
- Compromised open-source dependencies: Your model uses open-source libraries that are themselves targets for supply chain attacks
- Weights tampering: The model weights (the learned parameters) are intercepted or modified during distribution
- Infrastructure compromise: The third-party hosting platform, model registry, or fine-tuning service is compromised
- Dependency chain vulnerabilities: Supporting libraries, tokenizers, and frameworks contain exploitable vulnerabilities
> "The model supply chain is the new software supply chain—except you can't inspect the code. You can only observe the outputs. And by the time you notice something's wrong, the damage is done."
Open-Source Model Risks: The Double-Edged Sword
Open-source AI models like Llama, Mistral, and others have democratised access to sophisticated AI capabilities. For Australian organisations with limited budgets, they're an attractive alternative to proprietary cloud APIs.
But open-source brings distinct supply chain risks:
Model Registry Compromise
Platforms like Hugging Face host thousands of models. While the platform has security controls, a sophisticated attacker could potentially upload a poisoned model that mimics a legitimate one (think: a model named `meta-llama-7b-official` that's actually compromised). Users downloading what they believe is an official Llama model might actually be installing a backdoor.
Unmaintained Dependencies
Many open-source models depend on libraries that are no longer actively maintained. If a vulnerability is discovered in a dependency years after the model was published, that model becomes a liability. Few organisations track which open-source libraries are embedded in their models.
Community Modifications
The beauty of open-source is also its weakness: anyone can fine-tune and redistribute a model. Without cryptographic verification, you can't prove a model's provenance. Is this fine-tuned Llama variant from a trusted research team or someone's garage project?
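One practical provenance check you can apply today is verifying a downloaded model file against a checksum published out-of-band by the original maintainer. A minimal sketch using only the Python standard library (the file path and expected digest are placeholders you would supply):

```python
import hashlib

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so multi-gigabyte weight files
    never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_file(path: str, expected_sha256: str) -> bool:
    """Compare the local digest against a checksum obtained from a
    trusted channel (e.g. the maintainer's signed release notes)."""
    return sha256_file(path) == expected_sha256.lower()
```

A matching hash proves the bytes are the ones the publisher released; it says nothing about whether the original training run was clean, so it complements rather than replaces the other controls discussed in this article.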
Model Poisoning: A Silent Attack
Model poisoning is arguably the most insidious supply chain attack because it's nearly invisible to traditional security scans.
In a data poisoning attack, an attacker introduces malicious examples into the training data. These examples are designed to teach the model to misbehave in specific, hard-to-detect ways. For example:
- A model trained on financial data with poisoned examples learns to subtly over-estimate certain asset values
- A customer service chatbot trained on poisoned dialogue learns to comply with specific social engineering prompts
- A medical imaging model learns to misclassify certain conditions in ways that favour particular diagnoses
The attacker might inject just 1-2% poisoned examples into millions of training records. The model still performs well on normal tasks. But in specific scenarios, it behaves exactly as the attacker intended.
For organisations using pre-trained models, there's no way to know if poisoning occurred during the original training. You can't inspect the training data. You can't re-run the training process. You're trusting the model provider completely.
Dependency Risk: The Long Tail of Vulnerabilities
Modern AI systems don't run in isolation. They depend on frameworks (PyTorch, TensorFlow), inference runtimes (ONNX Runtime, vLLM), tokenizers, and dozens of other components.
A recent analysis found that the average ML model has 50-100+ transitive dependencies. Each dependency represents a potential vulnerability vector. A vulnerability discovered in a tokenizer library used by thousands of models across the industry could compromise all of them simultaneously.
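To make that long tail concrete, here is a toy sketch that walks a dependency graph and collects everything a single model service pulls in transitively. The graph here is hypothetical and heavily simplified; in practice it would come from your package manager's metadata, not a hard-coded dict:

```python
def transitive_deps(graph: dict[str, list[str]], root: str) -> set[str]:
    """Depth-first walk over a name -> direct-dependencies mapping,
    returning every package reachable from the root."""
    seen: set[str] = set()
    stack = [root]
    while stack:
        pkg = stack.pop()
        for dep in graph.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical, heavily simplified graph for one LLM deployment.
graph = {
    "my-llm-service": ["torch", "transformers", "fastapi"],
    "transformers": ["tokenizers", "numpy", "requests"],
    "torch": ["numpy", "filelock"],
    "requests": ["urllib3", "certifi", "idna"],
    "fastapi": ["pydantic", "starlette"],
}

deps = transitive_deps(graph, "my-llm-service")
```

Even this cartoon graph fans out from one top-level service to a dozen packages; real environments routinely reach the ranges cited above, and a flaw in any leaf (say, the tokenizer) is a flaw in the model service.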
Australian financial services and healthcare organisations are particularly at risk because they often run older model versions with unpatched dependencies due to operational stability requirements.
Implementing AI Supply Chain Security: A Framework
So how do you protect yourself? AI supply chain security requires a layered approach:
1. Model Provenance and Verification
- Cryptographic signing: Demand that model providers sign their releases, and verify those signatures and code-signing certificates before deployment to confirm authenticity
- SBOM for models: Request a Software Bill of Materials (SBOM) for every model, listing all training data sources, dependencies, and infrastructure used
- Provenance tracking: Maintain a registry of every model in use, its source, version, and the date it was deployed
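A provenance registry doesn't need heavy tooling to get started. Here is a sketch of a minimal record; the field names and values are illustrative, not a standard:

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ModelRecord:
    """One row in a model provenance registry: enough to answer
    'what are we running, where did it come from, can we re-verify it?'"""
    name: str
    version: str
    source_url: str
    weights_sha256: str
    deployed_on: str  # ISO date
    sbom_ref: str     # pointer to the model's SBOM document

record = ModelRecord(
    name="example-llm-7b",   # hypothetical model name
    version="1.2.0",
    source_url="https://example.com/models/example-llm-7b",
    weights_sha256="<fill in from your verification step>",
    deployed_on=date(2024, 7, 1).isoformat(),
    sbom_ref="sboms/example-llm-7b-1.2.0.json",
)

# Append one JSON line per model to an append-only registry file.
registry_line = json.dumps(asdict(record))
```

An append-only JSON-lines file is enough to answer audit questions on day one; you can migrate to a proper registry later without losing history.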
2. Dependency Management
- Lock all versions: Use dependency pinning in production. Never auto-update ML framework versions
- Inventory and monitoring: Use tools like `pip-audit` or `pip-check` to scan for known vulnerabilities in dependencies
- Vulnerability response plan: Define a process for rapidly assessing and patching vulnerable dependencies
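Version pinning is only useful if something enforces it. A small sketch that compares installed package versions against a pinned lockfile mapping, using the standard library's `importlib.metadata` (the pinned names and versions would come from your own lockfile):

```python
from importlib import metadata

def check_pins(pins: dict[str, str]) -> list[str]:
    """Return human-readable violations: packages that are missing or
    have drifted from the exact versions pinned for production."""
    problems = []
    for name, wanted in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            problems.append(f"{name}: not installed (pinned {wanted})")
            continue
        if installed != wanted:
            problems.append(f"{name}: installed {installed}, pinned {wanted}")
    return problems
```

Run as a CI gate or a startup check, this catches silent drift (an auto-updated framework, a hotfix applied by hand) before it reaches production.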
3. Model Integrity Verification
- Baseline behaviour testing: Create comprehensive test suites that verify the model's behaviour on a representative sample of inputs
- Continuous monitoring: In production, monitor model outputs for statistical anomalies that might indicate poisoning or tampering
- Cryptographic hashing: Hash model weights and store the hash in a secure location. Periodically verify that production models match the expected hash
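The continuous-monitoring idea can be sketched with simple statistics: record a baseline for some cheap output signal during validation (here, the model's refusal rate is used as an illustrative metric), then alert when a production window drifts beyond a few standard errors. The metric choice and threshold are assumptions to tune for your own system:

```python
import math

def drift_alert(baseline_rate: float, window_hits: int, window_size: int,
                z_threshold: float = 3.0) -> bool:
    """Two-sided z-test for a proportion: does the observed rate in the
    current window differ from baseline by more than chance allows?"""
    if window_size == 0:
        return False
    observed = window_hits / window_size
    stderr = math.sqrt(baseline_rate * (1 - baseline_rate) / window_size)
    if stderr == 0:
        return observed != baseline_rate
    return abs(observed - baseline_rate) / stderr > z_threshold

# Baseline: the model refused ~5% of requests during validation.
# A sudden jump (or collapse) in that rate is worth a human look.
```

This won't catch a carefully targeted backdoor on its own, but it is cheap to run on every window and pairs well with the baseline behaviour tests above.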
4. Third-Party Risk Assessment
- Vendor questionnaires: Ask model providers about their security practices, training methodology, and vulnerability disclosure process
- Evaluate model transparency: Prefer models where training data, architecture, and dependencies are openly documented
- Monitor security advisories: Subscribe to security alerts from model providers and the open-source community
5. Isolation and Containment
- Sandboxed deployment: Run third-party models in isolated containers with limited access to sensitive data
- Principle of least privilege: Give models only the data access they actually need for their function
- Network segmentation: Restrict model endpoints from directly accessing internal systems or databases
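Least privilege can be enforced right at the prompt boundary: rather than handing the model a whole customer record, pass it through an allowlist filter so only the fields the use case needs ever reach the model. The field names below are illustrative:

```python
# Fields this particular support chatbot actually needs; everything
# else (account numbers, balances) never reaches the model.
ALLOWED_FIELDS = {"first_name", "ticket_subject", "ticket_body"}

def minimise_record(record: dict) -> dict:
    """Allowlist filter applied before any data is placed in a prompt."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

customer = {
    "first_name": "Dana",
    "account_number": "123-456",
    "ticket_subject": "Billing question",
    "ticket_body": "Why was I charged twice?",
    "balance": 4210.55,
}
safe_view = minimise_record(customer)
```

An allowlist is deliberately chosen over a blocklist: new sensitive fields added to the record later are excluded by default instead of leaking until someone notices.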
Australian Compliance Context
For Australian organisations subject to financial system regulation or critical infrastructure laws, AI supply chain security is becoming a governance requirement, not just best practice.
The Australian Prudential Regulation Authority (APRA) has signalled that responsible AI deployment includes understanding and managing the risks introduced by third-party models. The Security of Critical Infrastructure Act (SOCI Act) already obliges critical infrastructure operators to manage supply chain risk, an obligation that extends to the AI systems they depend on: operators need visibility and control over those systems.
Building supply chain security now positions your organisation ahead of regulatory requirements and reduces the risk of catastrophic model failures down the line.
The Path Forward
AI supply chain security is still an immature discipline. Standards and best practices are evolving rapidly. But one thing is clear: you can't afford to treat third-party models as a black box.
Start with inventory. Know what models you're using, where they came from, and what they depend on. Establish provenance tracking and dependency management. Test model behaviour before and after deployment. And build containment so that if a model is compromised, the blast radius is limited.
Your third-party models aren't just a technical dependency. They're a supply chain dependency. Treat them accordingly.
Key Takeaways
- AI models represent a new supply chain attack surface that's invisible to traditional security scanning
- Open-source models offer cost benefits but require active verification and maintenance of dependencies
- Model poisoning can introduce attacks that are nearly impossible to detect through normal testing
- Dependency vulnerabilities in ML frameworks affect entire classes of deployed models simultaneously
- Effective AI supply chain security requires provenance tracking, integrity verification, isolation, and continuous monitoring
- Australian regulators are beginning to mandate AI governance, including supply chain controls