We're running a 7-day intensive AI security program in Singapore for experienced security professionals who want to upskill on securing frontier AI systems. This is the second iteration of AISB - the first ran in London in August 2025. Accommodation and programme costs are covered, and limited travel support is available if you need financial assistance.
As AI systems become more capable and integrated into critical infrastructure, new attack surfaces and failure modes are emerging that traditional security training doesn't cover. Current AI security resources focus on application-layer vulnerabilities (prompt injection, jailbreaks) but the threat landscape for frontier AI systems is broader and more complex.
AISB is designed to fill that gap. We will focus on the real risks as advanced AI systems become more common and are targeted by increasingly motivated attackers. The curriculum will cover scenarios like misuse by sophisticated adversaries, loss of control risks from advanced AI systems, governance interventions, and more.
Curriculum
The program will cover:
How to develop threat models for frontier AI systems - including risks that scale with AI capability
Hands-on skills across the full attack surface: adversarial techniques, infrastructure exploitation, supply chain attacks, and model-level vulnerabilities
Security challenges that frontier AI organizations are actively working on, not yet covered in standard training curricula
How to position for high-impact roles at AI labs, government programs, and research institutions
Each day combines lectures, demos, guest speakers, and hands-on red/blue exercises. For the most up-to-date version of the curriculum, please see the website. The days will look roughly like this:
DAY 1: Introduction & Threat Modeling
Current threat landscape: frameworks, misuse (e.g., to assist cyberattacks), application security, infrastructure security
Future threat models: misalignment, model theft and tampering, integrity attacks (backdoors, trojans), governance guarantees
Mapping threat models to attacks, defenses, and follow-up pathways
Threat modeling exercise against an AI deployment, which we will attack and defend in future days
DAY 2: Adversarial Attacks, Watermarking & Data Security
Adversarial examples and attacks on image models
Trojans, backdoors, and fine-tuning attacks on open-source models
Model weight extraction attacks
Watermarking techniques and detection
Data security: weight security, training data protection, inference-time data handling
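To give a flavour of the Day 2 hands-on material: the classic Fast Gradient Sign Method (FGSM) perturbs an input in the direction that most increases the model's loss, with each pixel's change bounded by a budget eps. This is a minimal self-contained sketch on a toy linear classifier (not course material, and the toy model is our own assumption); for a linear score the gradient with respect to the input is just the weight vector, so the attack reduces to one line.

```python
import numpy as np

# Toy linear "classifier": score = w @ x, positive score => class "benign".
rng = np.random.default_rng(0)
w = rng.normal(size=16)
x = w / np.linalg.norm(w)  # an input the model scores confidently positive

def fgsm(x, w, eps):
    """Fast Gradient Sign Method for a linear score s = w @ x.
    The gradient of s w.r.t. x is just w, so to push the score down
    we step against sign(w), keeping the perturbation within eps
    per coordinate (an L-infinity bound)."""
    return x - eps * np.sign(w)

clean_score = w @ x
adv_score = w @ fgsm(x, w, eps=0.2)
print(f"clean: {clean_score:.2f}, adversarial: {adv_score:.2f}")
```

With a neural network the same recipe applies, except the gradient comes from backpropagation rather than a closed form; the striking empirical fact the course explores is how little eps is needed to flip real image classifiers.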
DAY 3: LLM Security
Jailbreaks, prompt injection, and RAG injection
Guardrails: Constitutional classifiers and linear probes for input and output monitoring
Abliteration and model editing techniques
Tokenization vulnerabilities
MCP (Model Context Protocol) security
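The linear-probe guardrails mentioned above work by training a small classifier on a model's internal activations to flag risky inputs or outputs. Here is a minimal sketch under toy assumptions (the "activations" are synthetic 2-D Gaussian clusters of our own invention, standing in for real hidden states):

```python
import numpy as np

# Toy stand-in for hidden activations: benign vs. harmful generations
# form two Gaussian clusters in activation space.
rng = np.random.default_rng(1)
benign = rng.normal([-2.0, 0.0], 1.0, size=(200, 2))
harmful = rng.normal([+2.0, 0.0], 1.0, size=(200, 2))
X = np.vstack([benign, harmful])
y = np.r_[np.zeros(200), np.ones(200)]  # 1 = harmful

# A linear probe is just logistic regression on the activations,
# trained here with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(harmful)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == y)
print(f"probe accuracy: {acc:.2f}")
```

In practice the probe reads activations from a chosen transformer layer, and the interesting security questions are which layers carry the signal and how easily an attacker can craft inputs that evade it.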
DAY 4: Infrastructure Security
NVIDIA Container Toolkit exploits and case studies
GPU isolation and confidential computing
Sandbox design: containment, escape vectors, and design considerations
DAY 5: Weight Security, Verification & Formal Methods
RAND report analysis and policy implications
Output verification using formal methods
Detecting and defending against rogue deployments
DAY 6: Data Center Security & ML Stack Threat Modeling
Data center infrastructure: power, networking, physical security
ML stack threat modeling end-to-end
Personnel security considerations for AI deployments
Potential site visit (TBD) to a local data center for a behind-the-scenes look at real-world deployments
DAY 7: AI Control & Hardware Governance
AI control mechanisms and policy
Hardware supply chains and governance frameworks
Securing against treaty violations and governance guarantees
Who should apply
The program is primarily designed for security professionals ready to secure frontier AI systems. Selection prioritizes candidates interested in frontier AI risk, high-consequence failure modes, or work involving sophisticated threat actors.
Experience with deep learning frameworks (e.g., PyTorch) is a plus but not required. We want to make this accessible to security professionals from a variety of backgrounds, so we provide comprehensive pre-work to get everyone up to speed on the AI fundamentals needed to engage with the curriculum.
We encourage you to apply even if you don't check every box, as long as you have a strong background in one of the areas we'll focus on.
Logistics
When: April 20-26, 2026
Where: Central Singapore
Cohort size: 10-12 participants
Cost: Accommodation and programme costs are covered, and limited travel support is available if you need financial assistance.
Application deadline: March 15, 2026 (rolling - we encourage early applications!)
Decisions by: March 28, 2026
AISB will overlap with Black Hat Asia (April 21-24) and run right before DEF CON Singapore (April 28-30).
Application process
Fill out the application form at aisb.dev. We review applications on a rolling basis, so early applications are encouraged.
There are three stages of evaluation:
Application evaluation
Short work exercise
30-minute interview
If you need a faster decision, note your deadlines in the form
Team
Pranav Gade (Program Lead) - Research engineer at Conjecture; created AISB to bridge AI safety and security
Nitzan Shulman (Security Lead) - Head of Cyber Security at Heron AI Security Initiative; 6+ years of security research specializing in IoT, robotics, malware, and AI security
Singapore AI Safety Hub (SASH) - Local execution and institutional support
Questions?
Reach out to pranav@aisb.dev or ask in the comments.
Apply here