Australia's Security of Critical Infrastructure Act (SOCI Act) is reshaping how Australian organisations in energy, transport, water, and communications think about security. But the act's obligations were framed before AI deployment exploded. Now, critical infrastructure operators face an urgent question: How do we apply SOCI Act requirements to AI systems?
The answer: You can't treat AI systems like traditional IT infrastructure. The act's 13 essential security outcomes don't directly map to AI governance. You need a translation layer—and that's what we're going to build here.
Understanding the SOCI Act Security Outcomes
The SOCI Act defines 13 security outcomes that critical infrastructure operators must achieve. At their core, they demand visibility, control, and resilience across systems and supply chains.
For traditional infrastructure (SCADA systems, network switches, databases), this is achievable through vulnerability management, access control, and incident response. But AI systems introduce new complexities:
- You can't patch a neural network the way you patch software
- Model behaviour is probabilistic, not deterministic
- Supply chain risks are hidden in training data and model weights
- Compromise might be silent—the model still produces outputs, just slightly wrong ones
"The SOCI Act assumes you can inventory your systems, know their vulnerabilities, and patch them. AI challenges every one of these assumptions."
The 12 AI-Specific Governance Controls
Domain 1: Model Governance (Outcomes 1-2)
You must maintain a complete inventory of all AI models in use, including their source, version, training date, and the data they were trained on. This is foundational. You cannot manage what you do not see.
SOCI Outcome 1 (Systems Governance) demands that you know your systems. For AI, this means:
- A central registry of all models (proprietary, open-source, third-party, internally built)
- Documentation of data lineage: where training data came from, quality controls applied, consent obtained
- Change history: who deployed it, when, under what conditions
- Dependency tracking: which other systems, APIs, and data sources it depends on
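As a sketch of what such a registry entry might capture, here is a minimal schema in Python. The field names and the `ModelRecord` type are illustrative assumptions, not anything the act prescribes:

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """One entry in a central AI model registry (illustrative schema)."""
    name: str                  # e.g. "demand-forecaster"
    version: str               # model version, distinct from code version
    source: str                # proprietary | open-source | third-party | internal
    training_date: str         # ISO date the weights were produced
    training_data: list[str]   # data lineage: dataset identifiers used in training
    dependencies: list[str]    # downstream systems, APIs, and data feeds
    deployed_by: str = ""      # change history: who approved and deployed it

registry: dict[str, ModelRecord] = {}

def register(record: ModelRecord) -> None:
    """Key each model by name and version so every deployed variant is visible."""
    registry[f"{record.name}:{record.version}"] = record

register(ModelRecord("demand-forecaster", "2.1", "internal", "2024-11-02",
                     ["scada-telemetry-2024"], ["dispatch-api"]))
```

In practice this would live in a database or MLOps platform rather than a dict, but even a flat file of records like this is enough to answer "what models are we running?" during an audit.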
Before an AI model reaches production, it must pass a formal approval process that includes security, compliance, and risk assessment.
This maps to SOCI Outcome 2 (Systems Hardening). You need:
- Pre-deployment security review checklist for all models
- Formal approval from CISO and relevant business stakeholders
- Staged rollout: canary deployments to catch issues early
- Rollback procedures: ability to remove or replace a model quickly if it behaves unexpectedly
Domain 2: Model Integrity and Data Protection (Outcomes 3-5)
Models must be cryptographically verified before execution to ensure they haven't been tampered with during storage or transmission.
SOCI Outcome 3 (Systems Data Protection) extends to model weights:
- Compute cryptographic hashes of model weights and store securely (separate from the model itself)
- Verify hashes at load time before a model is instantiated
- Log all model access and modifications
- Encrypt models both at rest and in transit
Training data is an asset. It must be protected, inventoried, and governed with the same rigour as any other sensitive data.
This is critical for SOCI Outcome 4:
- Data sensitivity classification: which training data contains PII, health information, or operational secrets
- Access controls: who can access training data, when, and for what purpose
- Retention policies: when training data is deleted (especially important if it contains PII)
- Audit trails: all access to training data must be logged
In production, monitor model outputs for anomalies that might indicate poisoning, drift, or compromise.
This supports SOCI Outcome 5:
- Statistical baselines: establish expected output distributions in normal operation
- Anomaly detection: flag outputs that deviate significantly from baseline
- Manual review workflows: for critical decisions (medical diagnosis, infrastructure control), require human validation
- Continuous validation: periodically test models against known-good test cases to detect silent degradation
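The statistical-baseline idea can be sketched as follows, assuming the model emits a scalar signal (such as a confidence score) that is roughly stable in normal operation. The z-score threshold and the class name are illustrative choices, not requirements:

```python
import statistics

class OutputBaseline:
    """Flags model outputs that deviate far from a recorded baseline.

    A simple z-score check: record scores during known-good operation,
    then flag live scores more than z_threshold standard deviations away.
    """

    def __init__(self, baseline_scores: list[float], z_threshold: float = 3.0):
        self.mean = statistics.fmean(baseline_scores)
        self.stdev = statistics.stdev(baseline_scores)
        self.z_threshold = z_threshold

    def is_anomalous(self, score: float) -> bool:
        if self.stdev == 0:
            return score != self.mean
        return abs(score - self.mean) / self.stdev > self.z_threshold
```

Real deployments would track full output distributions per feature, not a single scalar, but even this level of monitoring catches the "silent compromise" failure mode where a model keeps producing outputs that are plausible individually yet wrong in aggregate.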
Domain 3: Security Operations and Supply Chain (Outcomes 6-8)
Before using a third-party model or AI service, conduct a formal security assessment of the vendor.
SOCI Outcome 6 (External Dependency Management) is critical for AI:
- Vendor security questionnaires: training methodology, data handling, vulnerability disclosure process
- Model transparency requirements: documented architecture, dependencies, known limitations
- SLA and security commitments: response times for vulnerability fixes, transparency commitments
- Regular audits: periodic assessment of vendor compliance with commitments
AI models depend on frameworks, libraries, and tools. These must be continuously scanned and patched.
SOCI Outcome 7 (Systems Monitoring) requires:
- Software Bill of Materials (SBOM) for all dependencies in ML pipelines
- Automated vulnerability scanning: CVE monitoring for all dependencies
- Patch management: prioritise and apply patches for critical vulnerabilities
- Testing before deployment: validate that patches don't break model functionality
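As a starting point for the SBOM requirement, Python's own packaging metadata can enumerate every installed distribution in an ML pipeline's environment. This is a minimal inventory, not a full CycloneDX or SPDX document; the function name is illustrative:

```python
import json
from importlib.metadata import distributions

def python_sbom() -> list[dict]:
    """Minimal SBOM: name and version of every installed distribution.

    Feed this list to a vulnerability scanner or diff it between
    deployments to spot unexpected dependency changes.
    """
    entries = ({"name": d.metadata["Name"], "version": d.version}
               for d in distributions())
    return sorted(entries, key=lambda e: (e["name"] or "").lower())

if __name__ == "__main__":
    print(json.dumps(python_sbom(), indent=2))
```

Dedicated tools (e.g. pip-audit for CVE matching) build on exactly this kind of inventory; the value of generating it yourself is that it covers the environment the model actually runs in, not the one declared in a requirements file.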
Traditional incident response doesn't work for AI. You need AI-specific detection, containment, and forensics.
SOCI Outcome 8 (Incident Response) for AI includes:
- Detection: identify when a model is producing anomalous outputs or behaving unexpectedly
- Containment: isolate the model, revert to the previous version, or disable the features that depend on it
- Forensics: review model logs, test data, and confidence scores to determine what went wrong
- Communication: rapid notification to stakeholders when an AI system is compromised or acting unexpectedly
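The containment step can be sketched as a deployment wrapper that keeps the prior model version warm so reverting is a single operation. All names here are illustrative assumptions:

```python
class ModelSlot:
    """Holds the active model plus the previous version for fast rollback."""

    def __init__(self, model, version: str):
        self.active, self.version = model, version
        self.previous, self.prev_version = None, None

    def deploy(self, model, version: str) -> None:
        """Promote a new model, retaining the old one as the rollback target."""
        self.previous, self.prev_version = self.active, self.version
        self.active, self.version = model, version

    def rollback(self) -> str:
        """Containment action: revert to the previous version in one step."""
        if self.previous is None:
            raise RuntimeError("no previous version to revert to")
        self.active, self.version = self.previous, self.prev_version
        self.previous, self.prev_version = None, None
        return self.version
```

Usage: `slot.deploy(new_model, "2.0")`, then `slot.rollback()` during an incident. The design choice worth noting is that rollback requires no retraining, no artifact download, and no approval chain, which is what makes it a viable containment action under time pressure.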
Domain 4: Accountability and Compliance (Outcomes 9-12)
Document how AI systems comply with relevant regulations (privacy, anti-discrimination, financial services).
SOCI Outcome 9 (Risk Management Framework) requires:
- Privacy impact assessments: does the model expose or memorise PII? What's the compliance risk?
- Bias and fairness assessments: does the model discriminate against protected groups?
- Regulatory mapping: how does this AI system comply with APRA, ASIC, Privacy Act, etc.?
- Annual reviews: refresh these assessments as models evolve or regulatory requirements change
For critical infrastructure decisions, explain why the model decided what it decided.
This supports SOCI Outcome 10 (Audit and Assurance):
- Explainability framework: for high-stakes decisions, require explainable AI or hybrid human-AI review
- Audit trails: log all inputs, outputs, and confidence scores for models making critical decisions
- Reproducibility: be able to replay a model's decision and understand which inputs drove it
- Independent audit: periodic third-party review of critical AI systems
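A minimal audit record for one model decision might look like the following sketch. The field names are illustrative; the key idea is that hashing a canonical form of the inputs lets a later replay confirm it saw exactly the inputs the original decision did:

```python
import hashlib
import json
import time

def audit_record(model_version: str, inputs: dict,
                 output, confidence: float) -> dict:
    """Build one append-only audit entry for a model decision.

    Inputs are serialised with sorted keys so the hash is deterministic
    regardless of dict ordering.
    """
    canonical = json.dumps(inputs, sort_keys=True)
    return {
        "ts": time.time(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(canonical.encode()).hexdigest(),
        "inputs": inputs,        # retained so the decision can be replayed
        "output": output,
        "confidence": confidence,
    }
```

Records like this, written to append-only storage, give an independent auditor both the raw material to replay a decision and a tamper-evidence check (the input hash) for each entry.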
Your team must understand AI security risks. Roles and responsibilities must be clear.
SOCI Outcome 11 (Security Governance) demands:
- Board/executive awareness: directors understand AI-specific risks and oversight mechanisms
- Team training: security, development, and operations teams understand AI security requirements
- Clear ownership: someone owns AI security outcomes; this cannot be siloed
- Metrics and reporting: track AI security maturity and report regularly
Proactively test AI system resilience to attacks, drift, and anomalies.
SOCI Outcome 12 (Continuous Improvement) for AI includes:
- Red teaming: test models against adversarial inputs, prompt injection, data poisoning scenarios
- Drift detection: models degrade over time as the real world changes. Monitor and retrain
- Stress testing: what happens if input data becomes corrupted or malicious?
- Disaster recovery: can you rapidly retrain or replace a compromised model?
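One common drift-detection technique is the Population Stability Index (PSI), which compares a live sample of a model's inputs or outputs against a baseline distribution. This is a standard monitoring heuristic, not anything the SOCI Act prescribes; the implementation below is a self-contained sketch:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.

    Rule of thumb: PSI below 0.1 means stable, above 0.2 suggests
    meaningful drift worth investigating (a heuristic threshold).
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def hist(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        n = len(xs)
        return [(c + 1e-6) / n for c in counts]   # smooth empty bins

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Run periodically against production traffic, a rising PSI is one concrete trigger for the retraining and disaster-recovery procedures listed above.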
Mapping These 12 Controls to SOCI Outcomes
Here's how the controls map back to the SOCI Act's 13 outcomes:
- Outcome 1 (Systems Governance): Control 1 (model inventory)
- Outcome 2 (Systems Hardening): Control 2 (approval and staged rollout)
- Outcome 3 (Data Protection): Control 3 (model integrity)
- Outcome 4: Control 4 (training data governance)
- Outcome 5: Control 5 (output validation)
- Outcome 6 (External Dependency Management): Control 6 (third-party risk assessment)
- Outcome 7 (Systems Monitoring): Control 7 (dependency scanning and patching)
- Outcome 8 (Incident Response): Control 8 (AI-specific incident response)
- Outcomes 9-13: Controls 9-12 (compliance, auditability, governance, resilience testing)
Implementation Roadmap for Australian CISOs
Implementing all 12 controls simultaneously is unrealistic. Here's a phased approach:
Phase 1: Foundation (Months 1-3)
- Control 1: Build AI model inventory
- Control 9: Assess regulatory compliance gaps
- Control 11: Build awareness across leadership
Phase 2: Protection (Months 4-6)
- Control 2: Implement model approval process
- Control 3: Set up cryptographic verification for models
- Control 4: Govern training data access
Phase 3: Detection (Months 7-9)
- Control 5: Deploy output monitoring for production models
- Control 8: Build AI-specific incident response procedures
- Control 10: Implement audit logging for critical decisions
Phase 4: Resilience (Months 10-12)
- Control 6: Formalise third-party risk assessment
- Control 7: Automate dependency vulnerability scanning
- Control 12: Execute red team exercises against AI systems
Key Takeaways
- The SOCI Act applies to AI systems, and traditional IT governance doesn't work for machine learning
- 12 AI-specific controls map to the SOCI outcomes and address model-specific risks
- Model inventory, integrity verification, and supply chain controls are foundational
- Output monitoring and incident response for AI require new capabilities
- Australian critical infrastructure operators should begin implementation now, before SOCI enforcement