Hi! This is our first post in what we hope will become a series of updates filled with accomplishments and (mostly good) plans. We’ve heard that some of you know that “something is happening in Poland” but aren’t sure what, so we want to tell you what we’ve been up to. We invite you to challenge our plans, tell us where we could be more effective or more agentic, and ask questions when our reasoning isn’t clear.
2025 summary
We started with two people from the Polish EA community (Marcel Windys & Jakub Nowak) creating a dedicated Slack workspace for AI Safety Poland. This move was supported by Chris Szulc, the current Director of EA Poland, which as of today fiscally sponsors AI Safety Poland. That means that “on paper” we’re a project of that organization.
The newly formed Slack gained traction when two other people joined:
Jakub Kryś, PhD, MATS fellow and SaferAI researcher, based in London, who started offering career consulting for Polish people trying to get into AI Safety; and
Patryk Wielopolski, PhD, then on a career break, now a Research Manager at MATS, well-connected in Polish academia and the ML community (which boosted our growth).
This quintet started their first AI Safety Poland initiatives:
building the aisafety.org.pl website (brand new version coming very soon!)
regular AI Safety webinars with (mostly) Polish researchers presenting their work
“Intro to AI Safety” by Jakub Kryś
“Out of context reasoning in LLMs & Emergent Misalignment” by Anna Sztyber-Betley & Jan Betley
“Making LLM Unlearning More Selective with Collapse of Irrelevant Representations” by Filip Sondej
“Chain of thought monitorability: A new and fragile opportunity for AI safety” by Tomek Korbak
the first in-person meetup – a casual networking event during the MLinPL 2025 conference
two workshops:
“From Superposition to Sparse Autoencoders: Understanding Neural Feature Representations” by Patryk Wielopolski & Taras Kutsyk during the last day of MLinPL 2025
“Build Your Own GPT-2” by David Quarel & Patryk Wielopolski, organized with the Artificial Intelligence Society Golem at Warsaw University of Technology
new volunteers:
Anna Szalwa joined us as a Social Media Designer, helping us with graphic design for our Luma and social media channels.
Piotr Kędziora, based in Kraków, joined as a coordinator with expertise in photo & video production.
2026 so far
In-person meetups, aka Tour de Pologne 🚴🏼
As you can see from our Luma calendar, we’ve been pretty active organizing and co-organizing in-person meetups in major Polish cities.
January 6th: AI Safety & EA social meetup in Warsaw, organized by Zuzanna Topolska
February 4th: NASK x AISPL meetup in Warsaw:
Collaboration with NASK – a Polish national research institute covering R&D in computer science, cybersecurity, AI, and distributed networks
35+ people showed up, with attendance capped by venue capacity; 90+ had expressed interest, signalling we need more flexible venues going forward
Anna Sztyber-Betley (WUT) presented on inductive backdoors and new attack vectors for corrupting LLMs
Piotr Kawa (Resemble AI / WUST) spoke on designing real-world speech deepfake detection systems that go beyond standard benchmarks
Karolina Seweryn (NASK) covered safety of large language models beyond English — a particularly relevant angle given our Polish-language context
Warsaw meetup at NASK (speaking: Anna Sztyber-Betley)
February 25th: AISPL meetup in Poznań, co-organized with the AI Center at Adam Mickiewicz University:
~50 people showed up
we had a talk on AI Psychosis by IDEAS Research Institute researchers – Karolina Drożdż and Kacper Dudzic
Bartosz Naskręcki, Vice-Dean of Mathematics and Computer Science at Adam Mickiewicz University, gave a talk on AI models in maths research. He also discussed the FrontierMath benchmark he contributed to.
Jan Czajkowski (UAM) and Witold Waligóra (MyreLabs) gave a talk on AI threat asymmetry.
Mateusz Idziejczak and Mateusz Stawicki (Poznań University of Technology) presented their experiments with evaluating persuasion and manipulation in LLM agents using an Among Us-like game environment.
Poznań meetup (speaking: Kacper Dudzic, on his left: Karolina Drożdż)
March 17th: AISPL meetup in Kraków
~85 people attended
Patryk Wielopolski gave an intro to AI Safety talk
Maciej Krystian Szymański (Bielik AI) presented “Sójka” — safety mechanisms in Polish LLMs
Panel discussion: “What limits current LLMs on the path to AGI?” with Dr. Aleksander Smywiński-Pohl (AGH) and Marcel Windys
Kraków meetup
March 24th: Wrocław meetup with WAIT (Wrocław AI Team):
our biggest success so far, with a turnout of ~160 people, mostly IT industry employees, students and researchers.
Patryk Wielopolski gave an “Intro to AI Safety” talk (version four of the slide deck we keep reusing).
Jakub Kryś gave a more technical talk on compute governance, which was captivating even for core team members already familiar with the topic.
we received strong positive feedback on both talks.
Wrocław meetup (red hoodie: Jakub Kryś, on his right: Jakub Nowak)
March 26th: AISPL social meetup in Warsaw - despite being a last-minute event, 13 people signed up!
Online webinars and reading club
We continued to organize our “AI Safety Poland Talks”, doing five more in Q1 2026:
“Eliciting Secret Knowledge from Language Models” by Bartosz Cywiński
“Offensive AI Capabilities: Current Risks and Trends” by Reworr
“The Economics of Transformative AI: Hardware, Software, and p(doom)” by Jakub Growiec
“Understanding and controlling behavioral self-awareness in LLMs” by Taras Kutsyk
“Multi-layer Prototypes for Efficient Safety Moderation” by Maciej Chrabąszcz
We also resurrected our Reading Club, which has already met twice.
Some stats
120 monthly active users (+40% from 85 in Jan 2026)
60 posting users (+30% from 45 in Jan 2026)
Funding
We’re very grateful to have received a Rapid Small Grant of $4,800 from BlueDot Impact, which will mostly cover local meetup costs for 5 events in Q1/Q2. It was indeed rapid – we received a positive reply in under 2 hours. We report our spending to the grantmaker periodically and are happy to share details upon inquiry.
The rest of our costs are either self-funded or, in the case of some meetups, covered by our partners.
Staff updates
We’re unpaid volunteers for now, so our employment statuses within AI Safety Poland aren’t really changing. However, Jakub Nowak took on the role of Executive Director part-time (50% FTE), initially committing for six months, until the end of September 2026, with the possibility of continuing full-time if the fit is strong and funding is secured.
Plans for the rest of this year
Our core Theory of Change is likely what you would expect from a team doing AI Safety field-building in a “European country that’s not a globally important player”:
find talented people
convince them AI risk is high & neglected
help them upskill and transition to high-impact work in AI Safety/Governance
...in the hope that this results in AI risk reduction.
Step 3 happens mostly through existing international programs and fellowships, unless we build enough capacity to replicate them locally. We’re happy to share a more detailed ToC upon request.
We’re aware this isn’t a uniquely Polish theory of impact. We think it’s the right starting point, but we’re actively thinking about where Poland’s specific position creates distinctive leverage: proximity to EU policy processes, a strong technical talent pool, and a network spanning Central and Eastern Europe. If you see angles we’re missing, we’d love to talk.
Q2 2026
High priority
💼 Career Consulting — scaling the core program
Career advising is our highest-leverage intervention: it directly converts talent into AI safety capacity.
Q2 deliverables:
Minimum 2 advisors active (enabling specialization).
At least one Career Transition Workshop run.
Increased marketing efforts online and offline (universities).
🗯️ Slack Community — intentional redesign
Thanks to Kevin Xia’s post, we diagnosed our current state: low activity, driven by newcomers and a handful of power users. The community has value (it’s the only Polish-language AI safety space online) but isn’t compounding.
Q2 goal: at least 2 recurring content formats that don’t depend on us posting, another attempt at 1:1 networking (we’ve tried introoo.com before), and recruiting 2-3 “channel leads” who own their topics.
Baseline: track active members before/after.
🧜🏼♀️ Planning the Warsaw AI Safety Day
We are now quite convinced that instead of a handful of smaller meetups in major cities, we could consolidate our efforts into a single larger event akin to the Zurich AI Safety Day. It would offer a better ROI and higher-quality talks, participants, and networking – assuming we obtain sufficient funding.
Goal for Q2: scope the project and estimate the budget we would require to make it happen.
Lower priority
🔗 Website 2.0
We have created a new website with significantly improved navigation and clarity. It should go live within the next month.
Goal for Q2: Transition fully, add some original content there (at least 2 articles this quarter), implement privacy-preserving analytics.
🍕 More local meetups (one in every big city)
In Q1, we managed to hold local meetups in 4 major Polish cities. For Q2, we’re planning to collaborate with other teams in Poland doing AI Safety-adjacent research (e.g. CCAI). We’re also considering some non-technical meetups, e.g. around the economics of AI or AI governance, in collaboration with social sciences and policy experts.
If you’ve been to one of our meetups and want to help organize or present at another one - let us know!
We are, of course, planning to continue our webinars (with a planned summer break) and reading club meetings.
Goal for Q2: Two meetups with speakers (not counting social meetups), one new collaboration.
H2 2026
🧜🏼♀️ Warsaw AI Safety Day
As mentioned above, if we secure sufficient funding, we’ll make it happen 🔥.
🧑🏼🎓 University courses on AI Safety
Many members of our community hold or are pursuing PhDs at top technical universities in Poland and have expressed interest in helping create materials for elective AI Safety courses. We plan to explore 1-2 university partnerships to make such courses happen, with us helping with material preparation and promotion.
📜 Setting up a legal entity
We’re currently fiscally sponsored by EA Poland, and that works fine for our purposes. However, we think that, optics-wise, it might be confusing for people unfamiliar with the EA community to see EA Poland mentioned in our documents. We’re also aware that registered NGOs get many freebies, sometimes limited to one per organization, so registering our own entity seems to have a good ROI.
Let’s collaborate
If you’re working on AI safety in Poland or CEE — or you know someone who is — we’d love to hear from you. We’re actively looking for collaborators, advisors, and people who want to be part of what we’re building. Reach out via contact@aisafety.org.pl.