Essential Eight Meets AI: Adapting Australia's Cyber Framework for Machine Learning Systems

How to extend traditional security controls to cover AI-specific workloads

Governance & Compliance

The Australian Signals Directorate's (ASD) Essential Eight is the baseline cybersecurity framework for Australian government and critical infrastructure. Most Australian organisations are either subject to it directly or use it as guidance.

But the Essential Eight was designed for traditional IT systems: servers, endpoints, networks, applications. It assumes that systems are deterministic, patchable, and auditable.

AI systems aren't. And that gap is a problem.

The Essential Eight and Their Limitations for AI

The Essential Eight are:

  1. Application whitelisting (now called application control)
  2. Patch applications
  3. Patch operating systems
  4. Configure Microsoft Office macro settings
  5. User application hardening
  6. Restrict administrative privileges
  7. Multi-factor authentication
  8. Regular backups

For traditional systems, these work. But for AI systems, they either don't apply or need extension.

Control 1: Application Whitelisting

Traditional: Approve a list of applications that can run on endpoints. Everything else is blocked.

AI Challenge: ML training frameworks (PyTorch, TensorFlow) are approved, but code running within them is dynamic. How do you whitelist model code that changes daily? How do you prevent malicious training scripts from running?

Extension: Whitelist approved model sources, training repositories, and model registries. Scan training data for malicious content. Verify model integrity before loading.
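As a minimal sketch of source allowlisting, the check below accepts a model download only if it comes over HTTPS from an approved registry. The hostnames are placeholders standing in for your own internal registries:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of approved model registries (replace with your own).
APPROVED_MODEL_SOURCES = {
    "models.internal.example.gov.au",
    "registry.internal.example.gov.au",
}

def is_approved_source(model_url: str) -> bool:
    """Return True only if the model comes from an allowlisted registry over HTTPS."""
    parsed = urlparse(model_url)
    # Require HTTPS and an exact hostname match; reject everything else.
    return parsed.scheme == "https" and parsed.hostname in APPROVED_MODEL_SOURCES
```

Exact hostname matching is deliberate: suffix or substring matching is a classic allowlist bypass.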

Control 2: Patch Applications

Traditional: Apply security patches to applications regularly.

AI Challenge: ML frameworks are updated frequently, but patching can break models. A PyTorch update might change model serialisation, invalidating every trained model you've saved. Because patching is risky, teams often keep running outdated frameworks with unpatched vulnerabilities.

Extension: Implement a patch management strategy specifically for ML frameworks that includes testing compatibility, creating reproducible environments (containers, virtual environments), and validating models after patching.
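One way to make "validating models after patching" concrete is a golden-output regression check: after a framework upgrade, re-run known inputs and compare against outputs recorded before the patch. The stand-in model and tolerance below are illustrative:

```python
import math

def validate_model_after_patch(model, golden_cases, tolerance=1e-5):
    """Re-run known inputs through the model after a framework upgrade and
    confirm the outputs still match the pre-patch 'golden' values."""
    failures = []
    for inputs, expected in golden_cases:
        actual = model(inputs)
        if not math.isclose(actual, expected, rel_tol=tolerance, abs_tol=tolerance):
            failures.append((inputs, expected, actual))
    return failures  # an empty list means the patched environment can be promoted

# Stand-in "model": a plain function, used here in place of a real network.
golden = [(2.0, 4.0), (3.0, 9.0)]
```

The golden cases should be captured in version control alongside the model so the check is reproducible.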

Control 3: Patch the Operating System

Traditional: Apply OS patches quickly.

AI Challenge: ML workloads are sensitive to OS-level changes. GPU drivers, CUDA toolkit versions, and kernel versions all affect model execution. An OS patch might break training or inference.

Extension: Use containerisation to isolate ML workloads from OS-level changes. Test OS patches in isolated environments before applying broadly. Maintain version pinning for critical dependencies.
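Version pinning can be enforced with a small pre-flight check that compares installed package versions against the pins the workload was validated with. The pinned versions below are examples only:

```python
from importlib import metadata

# Hypothetical pin file contents: the exact versions the workload was validated on.
PINNED = {"torch": "2.2.1", "numpy": "1.26.4"}

def find_version_drift(pins, installed=None):
    """Compare package versions against the validated pins.
    Returns {package: (pinned, installed)} for every mismatch.
    Pass `installed` explicitly for testing; omit it to query the live environment."""
    drift = {}
    for package, pinned_version in pins.items():
        if installed is not None:
            current = installed.get(package)
        else:
            try:
                current = metadata.version(package)
            except metadata.PackageNotFoundError:
                current = None
        if current != pinned_version:
            drift[package] = (pinned_version, current)
    return drift
```

Running this at container start-up (and failing fast on drift) catches a silently rebuilt image before it serves traffic.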

Control 4: Microsoft Office Macro Settings

Applies to AI? Not directly. But the principle extends: disable potentially dangerous code execution by default. For AI, this means disabling untrusted code execution in notebooks, sandboxing Jupyter environments, and restricting code injection in model training pipelines.
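One lightweight layer for restricting code execution in notebooks is static screening of cells before they run. The sketch below flags direct calls to a few dangerous builtins; the forbidden list is illustrative, not exhaustive, and static checks are easily bypassed, so treat this as defence in depth alongside sandboxing, not as a sandbox itself:

```python
import ast

# Builtins treated as unsafe in untrusted notebook cells (illustrative only).
FORBIDDEN_CALLS = {"eval", "exec", "compile", "__import__"}

def flag_unsafe_cells(cell_source: str) -> list[str]:
    """Statically flag direct calls to dangerous builtins in a notebook cell."""
    findings = []
    tree = ast.parse(cell_source)
    for node in ast.walk(tree):
        # Only direct name calls are caught; aliased or attribute calls slip past,
        # which is why this is a screening layer rather than a sandbox.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in FORBIDDEN_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}")
    return findings
```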

Control 5: User Application Hardening

Traditional: Harden applications to reduce attack surface (disable unnecessary features, apply security configurations).

AI Challenge: Models are inherently flexible. Hardening often means reducing model functionality or performance, creating tension between security and business needs.

Extension: Apply input validation and output filtering to harden model endpoints. Use model compression and quantisation to reduce attack surface. Implement ensemble methods where multiple models validate each other.
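Input validation at the endpoint can be as simple as rejecting anything that doesn't match the expected shape and value range before it reaches the model. The feature count and bounds below are placeholders:

```python
def validate_inference_input(features, expected_length=16, lo=-10.0, hi=10.0):
    """Reject malformed or out-of-range inputs before they reach the model.
    The expected length and value bounds here are illustrative."""
    if not isinstance(features, (list, tuple)):
        raise ValueError("features must be a list or tuple")
    if len(features) != expected_length:
        raise ValueError(f"expected {expected_length} features, got {len(features)}")
    for i, value in enumerate(features):
        if not isinstance(value, (int, float)) or not (lo <= value <= hi):
            raise ValueError(f"feature {i} out of range: {value!r}")
    return features
```

Bounds checks like this blunt the crudest adversarial inputs (extreme values, wrong shapes) and stop malformed payloads from reaching framework deserialisation code.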

Control 6: Restrict Administrative Privileges

Traditional: Limit administrative privileges to the people who need them, and revalidate that access regularly.

AI Challenge: ML infrastructure concentrates privilege. Whoever administers the training cluster, data store, or model registry can alter training data or swap model weights, and pipelines often run under service accounts with broad standing access. At the same time, monitoring, logging, API endpoints, and debugging facilities all need some level of access, and over-restricting them reduces visibility, making incidents harder to detect.

Extension: Apply least-privilege access to model registries, training data stores, and orchestration tools. Separate the roles that can modify training data from those that can deploy models. Use API gateways to control access, and enable comprehensive logging and monitoring of privileged actions.
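For comprehensive logging, emitting one structured record per inference request makes access decisions auditable after the fact. The field names here are illustrative; align them with whatever schema your SIEM expects:

```python
import json
import time
import uuid

def inference_audit_record(user, model_name, model_version, decision):
    """Build a structured audit log entry for one inference request.
    Field names are illustrative placeholders, not a standard schema."""
    return json.dumps({
        "event": "model_inference",
        "request_id": str(uuid.uuid4()),   # correlate with gateway and app logs
        "timestamp": time.time(),
        "user": user,
        "model": model_name,
        "model_version": model_version,
        "decision": decision,              # e.g. allow/deny from the API gateway
    })
```

Logging the model version on every request is what later lets you scope an incident to "everything served by the compromised version".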

Control 7: Multi-Factor Authentication

Traditional: Require MFA for all users.

AI Challenge: Who needs access to models, training data, and model registries? Sometimes it's automated systems (CI/CD pipelines, monitoring systems). How do you implement MFA for automated access?

Extension: Apply MFA to human users accessing models and data. For automated access, use API tokens with strict scope and rotation policies. Implement service-to-service authentication.
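A sketch of scoped, expiring tokens for automated access, using stdlib HMAC signing. A real deployment would more likely use a managed identity provider or standard JWTs, and the hard-coded secret below is a placeholder for a secrets-manager lookup:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"rotate-me-regularly"  # placeholder; load from a secrets manager

def issue_token(service: str, scope: str, ttl_seconds: int) -> str:
    """Issue a signed, scoped, expiring token for service-to-service calls."""
    claims = {"service": service, "scope": scope, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{sig}"

def check_token(token: str, required_scope: str) -> bool:
    """Verify signature, expiry, and scope before honouring a request."""
    payload_b64, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload_b64.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["exp"] > time.time() and claims["scope"] == required_scope
```

The scope check is the point: a CI pipeline that only needs `model:read` should not be able to push weights with the same credential.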

Control 8: Regular Backups

Traditional: Regularly back up data and systems.

AI Challenge: What's the "backup" of a model? Models are trained, not written. You can't just restore a model from a previous backup; you'd need to retrain from scratch. Training data must be backed up, but models themselves are derived artifacts.

Extension: Implement model versioning: maintain checkpoint versions of trained models. Back up training data and model metadata. For critical systems, maintain multiple model versions and test recovery procedures.
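Model versioning and recovery testing can be sketched as: write each checkpoint alongside a metadata sidecar recording its hash, then verify the artifact against that hash on restore. Directory layout and field names here are illustrative:

```python
import datetime
import hashlib
import json
import pathlib
import tempfile

def save_model_version(registry_dir, model_bytes, version, metadata):
    """Write a model artifact plus a metadata sidecar (hash, timestamp, version)
    so any version can be restored and verified later."""
    root = pathlib.Path(registry_dir) / version
    root.mkdir(parents=True, exist_ok=True)
    (root / "model.bin").write_bytes(model_bytes)
    sidecar = {
        "version": version,
        "sha256": hashlib.sha256(model_bytes).hexdigest(),
        "saved_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        **metadata,
    }
    (root / "metadata.json").write_text(json.dumps(sidecar, indent=2))
    return sidecar

def verify_model_version(registry_dir, version):
    """Recovery check: does the stored artifact still match its recorded hash?"""
    root = pathlib.Path(registry_dir) / version
    sidecar = json.loads((root / "metadata.json").read_text())
    actual = hashlib.sha256((root / "model.bin").read_bytes()).hexdigest()
    return actual == sidecar["sha256"]

# Example: round-trip into a temporary "registry" directory.
registry = tempfile.mkdtemp()
record = save_model_version(registry, b"\x00weights", "v1", {"trained_by": "pipeline-a"})
```

Running `verify_model_version` as part of a scheduled recovery drill is what turns "we keep checkpoints" into a tested backup procedure.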

AI-Specific Extensions to the Essential Eight

Beyond adapting the existing eight, you need new controls:

Control 9: Model Integrity Verification

Cryptographically verify model weights before deployment. Hash models and validate signatures.
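A minimal version of this control: record the SHA-256 digest of approved weights at release time and refuse to deploy anything that doesn't match. The model name and digest below are illustrative (the digest is that of the bytes `b"test"`, standing in for real weights):

```python
import hashlib

# Digests recorded at release time for each approved model (illustrative entry:
# this is the SHA-256 of b"test", standing in for a real weights file).
APPROVED_MODEL_HASHES = {
    "fraud-detector-v3": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_model_weights(model_name: str, weights: bytes) -> bool:
    """Refuse to deploy weights whose SHA-256 digest doesn't match the
    value recorded when the model was approved."""
    digest = hashlib.sha256(weights).hexdigest()
    return APPROVED_MODEL_HASHES.get(model_name) == digest
```

Hashing proves integrity; full signature validation additionally proves who approved the release, so production systems should layer signing on top of this check.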

Control 10: Training Data Protection

Treat training data as a critical asset. Encrypt at rest and in transit. Implement access controls and audit logging.

Control 11: Supply Chain Verification

Verify the provenance of third-party models. Use Software Bill of Materials (SBOM) for model dependencies.
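An SBOM for model dependencies can start as simply as recording name and version for every package the model was built with. This sketch emits a minimal JSON record; a real pipeline would emit a standard format such as CycloneDX or SPDX:

```python
import json
from importlib import metadata

def build_model_sbom(package_names, model_name, model_version):
    """Assemble a minimal SBOM-style record of the packages a model was
    built with. (A real deployment would emit CycloneDX or SPDX instead.)"""
    components = []
    for name in package_names:
        try:
            version = metadata.version(name)
        except metadata.PackageNotFoundError:
            version = "not-installed"  # recorded so gaps are visible, not silent
        components.append({"name": name, "version": version})
    return json.dumps({
        "model": model_name,
        "model_version": model_version,
        "components": components,
    }, indent=2)
```

Storing this record next to the model artifact lets you answer "which deployed models contain the vulnerable package?" when the next framework CVE lands.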

Control 12: Continuous Model Monitoring

Monitor model performance, confidence scores, and output distributions. Detect drift and anomalies.
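As a toy illustration of drift detection on confidence scores: compare the mean confidence of recent predictions against a baseline window and flag large shifts. A production monitor would apply proper distributional tests (e.g. Kolmogorov-Smirnov) rather than a single mean:

```python
import statistics

def confidence_drift(baseline_scores, recent_scores, max_shift=0.1):
    """Flag drift when the mean confidence of recent predictions moves more
    than `max_shift` away from the baseline mean. Deliberately simplistic:
    a real monitor would compare full distributions, not just means."""
    shift = abs(statistics.mean(recent_scores) - statistics.mean(baseline_scores))
    return shift > max_shift, shift
```

A sudden drop in mean confidence is often the first visible symptom of data drift or an input-manipulation attempt, which is why it's worth alerting on even before accuracy metrics catch up.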

Control 13: AI Incident Response

Develop AI-specific incident response procedures. Have playbooks for model compromise, data poisoning, and adversarial attacks.

Implementation Guidance for Australian Organisations

For government and critical infrastructure: Start with the Essential Eight as your baseline, then layer on AI-specific controls. Document how your AI systems map to each control; ASD and its Australian Cyber Security Centre (ACSC) will expect this mapping.

For financial services: APRA expects demonstrable governance of information security under CPS 234, and that expectation extends to AI systems. Use the Essential Eight plus AI extensions as your framework, and document it in your AI governance policy.

For all organisations: The Essential Eight remains foundational. You're not abandoning it; you're extending it. Treat the framework as a foundation to build on, not a ceiling.

Key Takeaways