Bachelor's in general and applied physics. AI safety / agent foundations researcher wannabe.
I love talking to people, and if you are an alignment researcher we will have at least one topic in common (though I'm also very interested in talking about topics that are new to me!), so I encourage you to book a call with me: https://calendly.com/roman-malov27/new-meeting
Email: roman.malov27@gmail.com
GitHub: https://github.com/RomanMalov
TG channels (in Russian): https://t.me/healwithcomedy, https://t.me/ai_safety_digest
I think we can go one step further: (with sufficiently smart AIs) every topic explanation can now be a textbook with exercises.
I've read Probabilistic Payor Lemma? and Self-Referential Probabilistic Logic Admits the Payor's Lemma and thought about the problem for a while. I'm not sure I have enough background to fully understand the problem and the suggested solutions.
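(For my own reference, and for anyone reading along: the deterministic statement of Payor's Lemma, as I understand it from those posts, is

$$\text{If } \vdash \Box(\Box x \to x) \to x, \text{ then } \vdash x,$$

and the probabilistic versions, as I understand them, replace the provability modality $\Box$ with a $p$-belief operator.)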
The Case Against AI Control Research seems related. TL;DR: the mainline scenario is that a hallucination machine is overconfident about its own alignment solution, then that solution gets implemented without much checking, then doom.
Welcome! The only thing I can think of at the intersection of AI and photography (besides IG filters) is this weird "camera", which uses AI to turn a little bit of geographical information into images. Do you know of any other interesting intersections?
IIUC, those are just bots that copy early, well-liked comments. So my comment would also be copied by other bots.
They are mostly like “wow, what a great [particular detail in the video]”. Sometimes it’s a joke I thought of.
Daily Research Diary
In the comments to this quick take, I am planning to report on my intellectual journey: what I read, what I learned, what exercises I’ve done, and which projects or research problems I worked on. Thanks to @TristianTrim for suggesting the idea. Feel free to comment with anything you think might be helpful or relevant.