We're excited to announce the AI Security Bootcamp (AISB), a 4-week intensive program designed to bring researchers and engineers up to speed on security fundamentals for AI systems. The program will cover cybersecurity fundamentals (cryptography, networks), AI infrastructure security (GPUs, supply-chain security), and novel attacks on ML systems (dataset trojans, model extraction). The program will run in person from 4 to 29 August 2025 in London, UK. We will cover all expenses.
Apply here to participate in AISB by EOD (AoE) on 22 June 2025.
We are also looking for instructors for parts of the program and staff to help with operations. Apply here.
We are running a 4-week program designed to equip AI safety researchers and engineers with critical security skills. We hope you'll leave the program with a well-practiced...
I can imagine agentic applications on top of LLMs as yet another kind of individuality. Typical agentic frameworks today assume some internal loop in which execution is handed between "subagents" (~conversational instances) or hardcoded steps; these typically all share the same context but carry different instructions, and thus instantiate different characters.
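The loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the `Subagent` and `AgentLoop` names and the round-robin hand-off are assumptions for the sketch, not any particular framework's API); a real framework would replace the `act` stub with an LLM call that combines the subagent's own instructions with the shared context.

```python
# Minimal sketch of an agentic loop: subagents share one context but
# carry different instructions, so each turn "instantiates" a different
# character. Names and structure here are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Subagent:
    name: str
    instructions: str  # per-subagent system prompt / persona

    def act(self, context: list) -> str:
        # Stand-in for an LLM call: a real framework would send
        # self.instructions plus the shared context to a model.
        return f"[{self.name}] saw {len(context)} notes"

@dataclass
class AgentLoop:
    subagents: list
    context: list = field(default_factory=list)  # shared by all subagents

    def step(self) -> None:
        # Execution is handed round-robin between subagents; each leaves
        # a note in the shared context for future instances to read.
        agent = self.subagents[len(self.context) % len(self.subagents)]
        self.context.append(agent.act(self.context))

loop = AgentLoop([Subagent("planner", "Plan next steps."),
                  Subagent("critic", "Critique the plan.")])
for _ in range(3):
    loop.step()
# loop.context now holds three notes, alternating planner/critic
```

The point of the sketch is the shared `context` list: every subagent reads and writes the same history, which is what makes the "collective creation of self by leaving notes" possible.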
In this framing, all the parts participate in the collective creation of a self by leaving notes and instructions for their future conversational instances, but this doesn't obviously fit into any of the c...
A recent post from Jan Kulveit is relevant for this topic: https://www.lesswrong.com/posts/wQKskToGofs4osdJ3/the-pando-problem-rethinking-ai-individuality