TLDR: Participate online or in-person on the weekend of December 16th to 18th in a fun and intense AI safety research hackathon focused on benchmarks, neural network verification, adversarial attacks and RL safety environments. We invite mid-career professionals to join, but the event is open to everyone (including non-coders) and we will provide starter code templates to help kickstart your team's projects. Join here.
Below is an FAQ-style summary of what you can expect.
The AI Testing Hackathon is a weekend-long event where teams of 1-6 participants conduct research on AI safety. At the end of the event, teams will submit a PDF report summarizing and discussing their findings.
The hackathon will run from Friday, December 16th, to Sunday, December 18th, and you are welcome to join for any part of it (see further details below). An expert on the topic will give a talk, and we will introduce the topic on the launch date.
Everyone can participate, and we especially encourage you to join if you’re considering moving into AI safety from another career. We prepare templates for you to start your projects from, and you’ll be surprised what you can accomplish in just a weekend – especially with your new-found friends!
Read more about how to join, what you can expect, the schedule, and what previous participants have said about being part of the hackathon below.
AI safety testing is becoming increasingly important as governments require rigorous safety certifications. The implementation of the EU AI Act and the development of AI standards by NIST in the US will both necessitate such testing.
The use of large language models, such as ChatGPT, has emphasized the need for safety testing in modern AI systems. Adversarial attacks and neural Trojans have become more common, highlighting the importance of testing neural networks for robustness and hidden backdoors to ensure the safe development and deployment of AI.
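To make this concrete, here is a minimal sketch of a fast-gradient-sign-style adversarial attack applied to a toy logistic-regression model in NumPy. All weights and inputs below are illustrative, not taken from any real system:

```python
import numpy as np

# Toy logistic-regression "model": fixed weights, sigmoid output.
# These values are made up purely for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability assigned to the positive class."""
    return sigmoid(x @ w + b)

def fgsm_perturb(x, y_true, eps):
    """Fast-gradient-sign-style perturbation of the input.

    For logistic regression with cross-entropy loss, the gradient of
    the loss with respect to the input is (p - y) * w, so the attack
    has a closed form: step eps in the sign of that gradient.
    """
    p = predict(x)
    grad = (p - y_true) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, 0.2, -0.1])
x_adv = fgsm_perturb(x, y_true=1.0, eps=0.3)
# The perturbed input has strictly lower confidence in the true class.
assert predict(x_adv) < predict(x)
```

Because the model is linear, the input gradient is available in closed form; for deep networks the same attack is usually implemented with automatic differentiation in a framework like PyTorch.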
In addition, the rapid development of related fields, such as automatic verification of neural networks and differential privacy, offers promising research directions for provably safe AI systems.
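As a taste of what verification can look like, here is a NumPy sketch of interval bound propagation, one of the simplest techniques for bounding a network's outputs over an entire box of inputs. The two-layer network and its weights are invented for illustration:

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> W @ x + b.

    Positive weights map lower bounds to lower bounds; negative
    weights swap them. This is standard interval arithmetic.
    """
    W_pos = np.clip(W, 0, None)
    W_neg = np.clip(W, None, 0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it applies to bounds elementwise."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Tiny illustrative network: linear -> ReLU -> linear.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.zeros(2)
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

x = np.array([0.0, 0.0])
eps = 0.1
lo, hi = interval_linear(x - eps, x + eps, W1, b1)
lo, hi = interval_relu(lo, hi)
lo, hi = interval_linear(lo, hi, W2, b2)
# Every input within eps of x (in the max-norm) maps into [lo, hi].
```

Since every input in the box provably lands inside [lo, hi], showing that these bounds stay on one side of a decision threshold certifies robustness for the whole input region at once.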
There is relatively little existing AI safety literature on safety metrics and testing, though anomaly detection in various forms is becoming more prevalent:
Overall, AI testing is a very interesting problem that requires a lot of work from the AI safety community both now and in the future. We hope you will join the hackathon to explore this direction further!
All the hackathon information and talks are happening on the Discord server that everyone is invited to join: https://discord.gg/3PUSbdS8gY.
Besides this, you can participate online or in person at several locations. For this hackathon, online participation is the name of the game, given end-of-year deadlines and Christmas, but you’re welcome to set up your own jam site as well. You can work online on Discord or directly in the GatherTown research space.
For this hackathon, our in-person locations include ENS Ulm in Paris, home to the most prestigious ML master's program in France, Delft University of Technology, and Aarhus University. More locations might join as well.
You’ll also have to sign up on itch.io to submit your projects: Join the hackathon here. This is also where you can see an updated list of the locations.
As we get closer to the date, we’ll add more ideas on aisafetyideas as inspiration.
Also check out the results from the last hackathon to see what you might accomplish during just one weekend. Neel Nanda was quite impressed with the full reports given the time constraint! You can see projects from all hackathons here.
Check out the continually updated inspirations and resources page on the Alignment Jam website here. Here are a few of the resources:
Websites, lists and tools for AI testing:
Technical papers on benchmarking and automated testing:
Governance-related work from governments and related institutions:
There are plenty of reasons to join! Here are just a few:
Please join! This can be your first foray into AI and ML safety and maybe you’ll realize that it’s not that hard. Hackathons can give you a new perspective!
There’s a lot of pressure within AI safety to perform at a top level, and this seems to drive some people out of the field. We’d love for you to join with a mindset of fun exploration and get a positive experience out of the weekend.
Many participants will have questions during the hackathon, so one type of help we would love to receive is your mentorship.
When you mentor at a hackathon, you use your skills to answer questions on Discord asynchronously. You will monitor the chat and possibly hop on calls with participants who need extra help. As part of the mentoring team, you will get to chat with the future talent in AI safety and help make the field an inviting and friendly place!
The skills we look for in mentors can be anything that helps you answer questions participants might have during the hackathon! This can include experience in AI governance, networking, academia, industry, AI safety technical research, programming and machine learning.
Join as a mentor on the Alignment Jam site.
Subscribe to the public iCal here.
Everyone submits their report on Sunday and prepares a 5-minute presentation about what they developed. Then we have three days of community voting, where everyone who submitted a project can vote on others’ projects. After the community voting, the judges will convene to select the top 4 projects, and a prize ceremony will be held online on the 22nd of December.
Check out the community voting results from the last hackathon here.
As a matter of fact, we encourage you to join even if you only have a short while available during the weekend!
For our other hackathons, the average amount of work has been 16 hours, and a couple of participants spent only a few hours on their projects since they joined on Saturday. Another participant was at an EA retreat at the same time and even won a prize!
So yes, you can join without being there for the beginning or end of the event, and you can submit research even if you’ve only spent a few hours on it. We of course still encourage you to come for the intro ceremony and join for the whole weekend.
Definitely! You can join our team of in-person organizers around the world! You can read more about what we require here and about the benefits it can have for your local AI safety group here. It might be too late for this event, but you can sign up to organize for the upcoming hackathon in January. Contact us at email@example.com.
Again, sign up here by clicking “Join jam” and read more about the hackathons here.
Godspeed, research jammers!
Thank you to Sabrina Zaki, Fazl Barez, Leo Gao, Thomas Steinthal, Gretchen Krueger and our Discord for helpful discussions and Charbel-Raphaël Segerie, Rauno Arike and their collaborators for jam site organization.