This is my personal report on the recently held Machine Learning for Good (ML4Good) bootcamp in Singapore, Sept 20-28, 2025.
ML4Good runs intensive in-person bootcamps to upskill people in AI safety. The bootcamps have been held in various parts of the world (mainly Europe and Latin America). ML4Good Singapore was, to the best of my knowledge, the first ML4Good bootcamp in Asia. You can find more information on their page.
There have been similar posts in the LW community as well (for example, see this and this).
The bootcamp covered a broad range of topics related to AI safety.
Our main book was the AI Safety Atlas, written by CeSIA. The first three chapters were prerequisites for the bootcamp, and my impression is that the course was organized around those chapters.
A typical day started at 9 AM and formally ended at 7:30 PM. A single day usually consisted of a mix of lectures, hands-on technical sessions, and discussion-based workshops. The mix varied; for example, our first day was mostly hands-on sessions, whereas on other days lectures and discussions were more common.
Besides the lecture-style sessions, we also had one-on-one sessions between participants as well as career planning. For the one-on-one session, each participant was paired with a partner and given time to talk through their respective career plans and give each other feedback. Career planning was led by the instructors, who helped participants solidify their career plans and provided feedback as well.
The last major component of the bootcamp was the final project. All participants were given roughly two days (10 hours) to work on an AI safety-related topic of their interest. A large number of participants worked together to set up accountability systems for their current or future AI safety endeavors (e.g., fellowships, field building), while the rest worked on a diverse mix of governance and technical projects, e.g., eval awareness, AI control, and red-teaming, to name a few. I did my project with another participant on the interpretability of speech-augmented models.
We had the following very wonderful people as our instructors:
Valerie Pang from the Singapore AI Safety Hub (SASH) acted as the main coordinator of the event (and special thanks to Jia Yang for letting us use her place as the venue on the second day!)
We were also grateful to have Tekla Emborg (Future of Life Institute, governance) and Mike Zijdel (Catalyze, startup incubation) as external speakers.
There were 14 participants in the program. From ASEAN countries, we had people from Indonesia, the Philippines, Malaysia, and Singapore; there were also some participants from Taiwan, China, and Japan. The backgrounds were somewhat diverse.
Before the camp, I wasn't really confident about whether and how I should go into AI safety, but it gave me enough of a nudge to start spending more time on it. One major thing I learned was that I could probably start in AI safety very early, without needing an advanced background (an MSc/PhD or deep expertise in some area of AI safety). It seems that there are plenty of good introductory projects out there, and even I can contribute to something non-technical, such as field building, with good potential impact.
I mentioned the vibe a lot because, personally, the people were a major net positive contributor to my experience! I would probably lean less towards working on AI safety if I had found the community unwelcoming, but my experience has been the opposite so far.
I am very happy to recommend this camp to anyone interested in AI safety, and I would be glad to see more such initiatives, especially in the region.
Notes: Special thanks to all ML4Good Singapore organizers and participants who made the event possible, hence allowing me to write this post. Also special thanks to Jia Yang, Harry, Valerie, and Sasha for their feedback on this post.