GuyP

Open meetup for people working on or interested in AI Safety (incl. alignment, policy, advocacy & field building). People new to the field welcome!
19:30 Doors open, arrivals
19:45 Talk + Q&A
20:30 Open announcement & pitch round
20:40 Open discussion & networking
Open end (doors close at 22:30)
Talk: SL5 Task Force - Securing future frontier AI models against powerful adversaries
As AI models become more powerful, the companies building them are facing more powerful adversaries. As AI approaches human level, we expect various risks, but it would be particularly bad if malicious actors got their hands on unprotected versions of extremely intelligent models. To prevent that, AI companies in the future will need to be secured against...
New venue! This time we'll be at Thoughtworks in Friedrichshain; register above for the address.
Open meetup for people working on or interested in AI Safety (incl. alignment, policy, advocacy & field building). People new to the field welcome!
19:00 Doors open, arrivals
19:45 Talk + Q&A
20:30 Open announcement & pitch round
20:40 Open discussion & networking
Open end (doors close at 22:30)
Topic: What Failure Looks Like: Understanding AI Risk Scenarios
In his talk, Markov will explore AI risks ranging from the benign to the potentially catastrophic. The talk will map the three primary ways increasingly capable AI systems might cause harm: deliberate misuse by bad actors, fundamental technical misalignment with human values, and gradual systemic erosion of human influence. We'll examine how various AI development paths could lead to humanity losing meaningful control over critical societal systems, even when individual decisions along these paths seem beneficial and reasonable.
Subscribe to AI Safety Berlin Announcements on Telegram, Signal or WhatsApp and join the Community Chat.
RSVP here
aisafety.berlin
Open meetup for people working on or interested in AI Safety. People new to the field welcome!
We'll be discussing ongoing projects, research, and recent papers, namely the recent Emergent Misalignment paper.
Subscribe to AI Safety Berlin Announcements on Telegram, Signal or WhatsApp and join the Community Chat.
RSVP here
aisafety.berlin
Open meetup for people working on AI Safety. Visitors welcome!
Hear a talk by Yoav Tzfati (MATS alumnus) on his recent work pushing the frontier of Scalable Oversight!
Afterwards we'll have open discussion and research updates.
RSVP: https://lu.ma/r3ov4ptr
In this meetup at Teamwork Berlin we'll get together and discuss Sparse Autoencoders Find Highly Interpretable Features in Language Models.
It might be helpful to have read the paper beforehand, but you're also welcome to join if you haven't read it or don't wish to discuss it; no preparation is required.
After (and during) the discussion there will be space for chatting and socialising.
Signal group - https://signal.group/#CjQKIKwDn8g6EjwfBLAPiJAY2b53dubjBo78XTCYVHp39xzBEhCDvkRJLNBaCVawPjwuRoEv
I don't know if it's relevant to what you were looking into, but it's a very realistic assumption. In air-gapped environments it's common for infiltration to be easier than exfiltration, and it's common for highly sensitive environments to be air-gapped.
This meetup is for anyone interested in AI alignment (yes, that's also you!). We have open discussions on various topics, and no preparation is required. We invite everyone, regardless of prior knowledge about AI alignment, to join us.
If you're unsure whether to come, just show up. The location is Berlin Teamwork.
In the week before the meetup I may link a paper in the Signal group, and we'll have a discussion group for it at the meetup for anyone who's interested.
We're open to talks from anyone in the community. If there's a topic you'd like to present, or if you have any questions, send me a message on Signal or via the forum.
Join the Berlin Alignment Signal group - https://signal.group/#CjQKIKwDn8g6EjwfBLAPiJAY2b53dubjBo78XTCYVHp39xzBEhCDvkRJLNBaCVawPjwuRoEv
This meetup is for anyone interested in AI alignment (yes, that's also you!). We have open discussions on various topics, and no preparation is required. We invite everyone, regardless of prior knowledge about AI alignment, to join us.
If you're unsure whether to come, just show up. The location is Berlin Teamwork.
We're open to talks from anyone in the community. If there's a topic you'd like to present, or if you have any questions, send me a message on Signal or via the forum.
Join the Berlin Alignment Signal group - https://signal.group/#CjQKIKwDn8g6EjwfBLAPiJAY2b53dubjBo78XTCYVHp39xzBEhCDvkRJLNBaCVawPjwuRoEv
This meetup is for anyone interested in AI alignment (yes, that's also you!). We have open discussions on various topics, and no preparation is required. We invite everyone, regardless of prior knowledge about AI alignment, to join us.
If you're unsure whether to come, just show up. The location is Berlin Teamwork.
We will hear a short talk from Stephan Wäldchen about his research on alignment and interpretability at ZIB, then break into free-form discussion. You're also encouraged to bring a laptop and show people what you're working on.
If you're not in it already, join the Berlin Alignment Signal group - https://signal.group/#CjQKIKwDn8g6EjwfBLAPiJAY2b53dubjBo78XTCYVHp39xzBEhCDvkRJLNBaCVawPjwuRoEv
See you there!
This meetup is for anyone interested in AI alignment (yes, that's also you!). We have open discussions on various topics, and no preparation is required. We invite everyone, regardless of prior knowledge about AI alignment, to join us.
If you're unsure whether to come, just show up. The location is Chaos Computer Club Berlin.
You're also encouraged to bring a laptop and show people what you're working on.
If you're not in it already, join the Berlin Alignment Signal group - https://signal.group/#CjQKIKwDn8g6EjwfBLAPiJAY2b53dubjBo78XTCYVHp39xzBEhCDvkRJLNBaCVawPjwuRoEv
See you there!