This is a polemic responding to the ten arguments post. I'm not a regular LW poster, but I'm an AI researcher and a mild AI-worrier. I believe that AI progress, together with the risks associated with it, is one of the most important things for humanity to figure out in the current year. And...
I'm a daily user of ChatGPT, sometimes supplementing it with Claude, and the occasional local model for some experiments. I try to squeeze LLMs into agent-shaped bodies, but it doesn't really work. I also have a PhD, which typically would make me an expert in the field of AI,...
Link: Personal note: I'm somewhere in between safetyism and e/acc in terms of their general ideologies/philosophies. I don't really consider myself a part of either group. My view on AI x-risk is that AI can potentially be an existential threat, but we're nowhere near that point right now, so safety...
After some introspection, I realized my timelines are relatively long, which doesn't seem to be shared by most people around here. So this is me thinking out loud, and perhaps someone will try to convince me otherwise. Or not. First things first, I definitely agree that a sufficiently advanced AI...
I'm curious what the LW community as a whole thinks about the work that Hugging Face is doing. Their main MO, as a first-order approximation, seems to be taking whatever new breakthrough in AI there is and making it open-source and accessible to the public. I see a few...
Do they exist? Sure, I'm exaggerating a little bit. On my current job hunt I try to pay special attention to safety/alignment-related positions, but it seems that the vast majority of them would require me to relocate to either the Bay Area (or the US in general) or London, and...
Work by Quinn Dougherty, Ben Greenberg & Ariel Kwiatkowski From the AI Safety Camp group that worked on Cooperativity and Common Pool Resources: a write-up of a problem we worked on, our proposed solution, and the results of implementing this solution in a simulated environment. Contents: 1. Problem description 2....