AI Security for Engineers

Courses

AI Security for Engineers

Course Duration: 2.5 days
Level: Advanced
AI Security for Engineers

About

As AI systems become increasingly integrated into government operations, they present evolving security challenges that require dedicated attention from cybersecurity leadership. The rapid adoption of AI has created a critical gap: while these systems are fundamentally software that should follow well-established cybersecurity practices, their unique characteristics demand new approaches to risk assessment, threat modelling, and security controls.

AI systems possess distinctive properties that differentiate them from traditional software and create novel security vulnerabilities. They are dynamic and adaptive, learning and changing behavior based on data and interactions, making vulnerabilities harder to identify and contain. They perform complex tasks at unprecedented scale with reduced human oversight, meaning security failures can have amplified impacts across entire organizations. Most critically, LLM-based applications suffer from a fundamental design vulnerability – instructions and data are passed on the same channel – creating opportunities for prompt injection and model extraction. This, together with other AI-related attacks such as data poisoning and adversarial examples, creates threats that traditional security controls were not designed to address.

Meanwhile, adversaries are actively targeting AI systems as high-value assets, seeking to extract proprietary models, poison training data, manipulate outputs, and exploit the trust organizations place in AI-driven decisions. This 2.5-day course equips development and engineering professionals with the foundational knowledge needed to identify AI security risks, evaluate AI SaaS providers, and work effectively with cybersecurity teams.

Target Audience

    Typical participants include:

    • Software engineers
    • Cloud engineers
    • Network engineers
    • Product managers

    Learning Outcomes

    At the end of this course, participants will be able to:

    • Describe the key components of AI systems and how they relate to attack surfaces, attack vectors, and common security risks
    • Identify well-established AI security risks and the key controls to address them
    • Formulate the right questions to ask AI SaaS providers when evaluating security posture, expressing requirements, and managing ongoing engagements
    • Communicate effectively with cybersecurity professionals on AI security issues using appropriate vocabulary and foundational concepts

    Syllabus Summary

    Module 1: Anatomy of AI Systems

    Introduces how AI systems work and where risks emerge. Participants learn the differences between predictive, generative, and agentic AI; the AI supply chain (developers, deployers, operators, users); and core system components such as LLMs, prompting and reasoning methods, memory (e.g. RAG), tools, and guardrails. Concepts are grounded using hands-on exercise where participants build a simplified agentic AI system (no coding) to understand real-world architectures and attack surfaces.

    Module 2: Cybersecurity Attacks and Defenses for AI

    Focuses on AI security threats and controls within existing cybersecurity frameworks. Participants explore OWASP Top 10 risks for LLM and generative AI, real-world AI attack scenarios, and key threat families such as prompt injection, tool/RAG abuse, and AI supply-chain attacks. Using an attack–defend approach, learners red-team and secure an agentic AI system through practical exercises, supported by case studies and MITRE ATLAS coverage. No coding or deep security background required.

    Module 3: Evaluating AI SaaS Providers

    The last module will focus on assessing AI SaaS vendors. Participants learn why AI vendors pose unique risks beyond traditional SaaS assessments and how to evaluate them using frameworks like CSA STAR for AI and OWASP guidance. The focus is on data handling, model and inference security, supply-chain risks, transparency, monitoring, and how to ask the right security questions and spot red flags.

    Course Pricing & Payment Terms

    • The course will commence with a minimum subscription of 20 pax and is limited to 30 pax per cohort.
    • Corporate rates are available. Government subsidies/grants do not apply to this course.
    • For organisations seeking to enrol multiple employees in the course (i.e., more than 10 pax), please contact us at [email protected]
    Payment Terms
    • Payment must be made before the start of the course.
    • In the event of cancellation after acceptance into the course, you are entitled to a refund based on the following guidelines:
    • More than 30 days before the start date: 100% refund
    • Between 5-30 days before the start date: 50% refund
    • Less than 5 days before the start date: No refund

    To sign up or learn more about course dates, please contact us at [email protected]

    Category: Advanced Training

    Ready to Enroll?

    Take the next step in your cybersecurity journey with this comprehensive training program.

    Contact Us to Enroll

    📋 Course Information

    Duration: 2.5 days
    Level: Advanced
    Category: Advanced Training
    Format: On-site