Manuel Allgaier
Manuel Allgaier has not written any posts yet.

Great work! Repost this on the EA Forum?
I haven't yet seen this posted on the EA Forum; did I miss it? If you don't plan to post it there, do you mind if I do? I'm sure readers there would be interested as well.
I talked to a Lightcone staff member a few weeks ago, and they said they're not currently hiring for this role. So while the application form is technically still open, it doesn't seem worth applying.
I'd appreciate it if the post could be updated accordingly; otherwise people will just waste time applying.
I've been following Sam Altman's messaging for a while, and it feels like Altman does not have one consistent set of beliefs (as an ethics/safety researcher would) but tends to say different things at different times and in different places, depending on what currently seems most useful for achieving his goals. Many CEOs do that, but he seems to do it more than other OpenAI staff or executives at Anthropic or DeepMind. I agree with your conclusion: pay less attention to their messaging and more to their actions.
https://clay.earth looks interesting! Are you still using it now (7 months later)? Would you still recommend it?
FYI: The link in the first line didn't work for me ("Invalid URL: https://ai-plans.com"). This link works: https://www.ai-plans.com/
Capacity is limited to 40 people, first come, first served. Please RSVP on the EA Forum! (no account needed)
How could AI existential risk play out? Choose one of five roles, play through a plausible scenario with other attendees, and discuss it afterward. This was a popular session at the EAGxBerlin conference, with ~70 participants across two sessions and positive feedback, so we're running it again for EA Berlin.
Everyone is welcome, even if you're new to AI safety! People underrepresented in the AI safety field are especially welcome. If you're very new to the field, we recommend reading or skimming an introductory text such as this 80,000 Hours article or the Most Important Century series...
Using ChatGPT etc. gives people such an advantage in (some) jobs, and is so easy to use "secretly", that it seems highly unlikely that a significant number of people would boycott it.
My guess is that at most 1-10% of a population would actually adhere to a boycott, and those who did would be in a much worse position to work on AI safety and other important matters.
What about democratically elected non-profit boards?
Most national EA organisations with paid staff (like EA France, EA Norway, or EA Germany, to name a few) are registered associations whose boards are (re-)elected by their members every 1-2 years. That way, board members can be fired by the association members they represent.
I don't think this is perfect: the average member often doesn't have enough information to judge a board member's performance, and elections have their own downsides (like sometimes favoring popular, charismatic candidates over the best candidates for the job). But at least for national EA orgs it does seem like the best option to me (medium confidence).
This seems a lot more common in mainland Europe than in the UK or the US. Is this something we should explore more for other non-profits as well? What other non-profits have clearly defined members (e.g. beneficiaries, stakeholders, ...) who could elect a board?
In case the organiser no longer updates this: we'll now meet on Saturday (15 January) from 1pm onwards at Baobab in Santa Cruz, Tenerife. So far, seven LWers have indicated interest. Feel free to join! :)
Address: C. Antonio Domínguez Alfonso, 30, 38003 Santa Cruz de Tenerife, Spain
Google Maps Link: https://g.page/baobab-santa-cruz?share
Thanks for writing this up!
This seems really useful for aspiring rationality organisers; I'll forward it to those I meet.
Thanks! I just published it on the EA Forum as a linkpost: https://forum.effectivealtruism.org/posts/8iccNXsAdtpYWAtzu/ai-2027-what-superintelligence-looks-like-linkpost