Technical staff at Anthropic (views my own), previously #3ainstitute; interdisciplinary, interested in everything, ongoing PhD in CS, bets tax bullshit, open sourcerer, more at zhd.dev
I think Elizabeth is correct here, and also that vegan advocates would be considerably more effective with higher epistemic standards:
I think veganism comes with trade-offs, health is one of the axes, and that the health issues are often but not always solvable. This is orthogonal to the moral issue of animal suffering. If I’m right, animal EAs need to change their messaging around vegan diets, and start self-policing misinformation. If I’m wrong, I need to write some retractions and/or shut the hell up.
The post unfortunately suffers for its length, detailed explanations, and rebuttals of many motivated misreadings - many of which can be found in the comments anyway, so it's unclear whether this helped. It's also well-researched, well-organized, and thoroughly cited; it offers cruxes and anticipates objections - vegan advocates are fortunate to have such high-quality criticism.
This could have been a shorter post - one about, rather than engaged in, the epistemics and advocacy around veganism - with less charitable assumptions. I'd have shared that shorter post more often, but I don't think it would have been better.
I remain both skeptical of some core claims in this post, and convinced of its importance. GeneSmith is one of few people with such a big-picture, fresh, wildly ambitious angle on beneficial biotechnology, and I'd love to see more of this genre.
On the one hand, on the object level, I basically don't buy the argument that in-vivo editing could lead to substantial cognitive enhancement in adults. Brain development is incredibly important for adult cognition, and in the maybe 1%-20% residual you're going well off-distribution for any predictors trained on unedited individuals. I too would prefer bets that pay off before my median AI timelines, but biology does not always allow us to have nice things.
On the other, gene therapy does indeed work in adults for some (much simpler) issues, and there might be interventions which are narrower but still valuable. Plus, of course, there's the nineteen-ish-year pathway to adults, building on current practice. There's no shortage of practical difficulties, but the strong or general objections I've seen seem ill-founded, and that makes me more optimistic about the eventual feasibility of something drawing on this tech tree.
I've been paying closer attention to the space thanks to Gene's posts, to the point of making some related investments, and look forward to watching how these ideas fare on contact with biological and engineering reality over the next few years.
I think this is the most important statement on AI risk to date. Where ChatGPT brought "AI could be very capable" into the Overton window, the CAIS Statement brought in AI x-risk. When I give talks to NGOs, or business leaders, or government officials, I almost always include a slide with selected signatories and the full text:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
I believe it's true, that it was important to say, and that it's had an ongoing, large, and positive impact. Thank you again to the organizers and to my many, many co-signatories.
I've been to quite a few Python conferences; typically I find the unstructured time in hallways, over dinner, and in "sprints" both fun and valuable. I've made great friends and recruited new colleagues, conceived and created new libraries, built professional relationships, hashed out how to fix years-old infelicities in various well-known things, etc.
Conversations at afterparties led me to write concrete reasons for hope about AI, and at another event I met a friend working on important-to-me biotechnology (I later invested in their startup). I've also occasionally taken something useful away from AI safety conversations, and in one memorable late night at LessOnline hopefully conveyed something important about my work.
There are many more examples, but it also feels telling that I can't give you examples of conference talks that amazed me in person (there are some great ones recorded, but your odds in person are low, and for most I'd prefer to read a good written version instead), and the structured events I've enjoyed are things like "the Python language summit" or "conference dinners which are mostly socializing" - so arguably the bar is low.
And I've received an email from Mieux Donner confirming that Lucie's leg of the swap has been executed for 1,000€. Thanks to everyone involved!
If anyone else is interested in a similar donation swap, from either side, I'd be excited to introduce people or maybe even do this trick again :D
For what it's worth I think this accurately conveys "Zac endorses the Lightcone fundraiser and has non-trivially donated", and dropping the word "unusually" would leave the sentence unobjectionable; alternatively maybe you could have dropped me from the list instead.
I just posted this because I didn't want people to assume that I'd donated >10% of my income when I hadn't :-)
I (and many others) recently received an email about the Lightcone fundraiser, which included:
Many people with (IMO) strong track records of thinking about existential risk have also made unusually large personal donations, including ..., Zac Hatfield-Dodds, ...
and while I'm honored to be part of this list, there's only a narrow sense in which I've made an unusually large personal donation: the $1,000 I donated to Lightcone is unusually large from my pay-what-I-want budget, and I'm fortunate that I can afford that, but it's also much less than my typical annual donation to GiveWell. I think it's plausible that Lightcone has great EV for impartial altruistic funding, but don't count it towards my effective-giving goal - see here and here.
(I've also been happy to support Lightcone by attending and recommending events at Lighthaven, including an upcoming metastrategy intensive, and arranging a donation swap, but don't think of these as donations)
If they're not, let me know by December 27th and I'll be happy to do the swap after all!
I reached out to Lucie and we agreed to swap donations: she'd give 1000€ to AMF, and I'd give an additional[1] $810 to Lightcone (which I would otherwise send to GiveWell). This would split the difference in our tax deductions, and lead to more total funding for each of the organizations we want to support :-)
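For anyone curious about the mechanics, here's a minimal sketch of the split-the-difference arithmetic in Python. The exchange rate, French deduction rate, and even split below are illustrative assumptions rather than our actual figures (which is why it doesn't reproduce the $810):

```python
# Simplified model of a donation swap: each party donates to the charity
# that is tax-deductible for *them*, then they split the tax savings.
# All rates here are made-up assumptions, not the figures we actually used.

EUR_TO_USD = 1.05  # assumed exchange rate
RATE_FR = 0.66     # assumed French tax credit on eligible donations

def tax_savings_usd(amount_eur: float) -> float:
    """Tax credit gained by donating to a charity that's deductible
    in France rather than one that isn't, converted to USD."""
    return amount_eur * RATE_FR * EUR_TO_USD

def top_up_usd(amount_eur: float) -> float:
    """Split the savings evenly: the other side adds half of them
    to their own donation, so both causes come out ahead."""
    return tax_savings_usd(amount_eur) / 2

print(f"${top_up_usd(1000):.2f}")  # -> $346.50 under these made-up rates
```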
We ended up happily cancelling this plan because donations to Lightcone will be deductible in France after all, but I'm glad that we worked through all the details and would have done it. Update: we're doing it after all!
I think it's plausible that Lightcone has great EV for impartial altruistic funding, but due to concerns about community dynamics / real-or-perceived conflicts of interest / etc, I don't give to AI-related or socially-close causes out of my 10+% effective-charity budget. But I've found both LessWrong and Lighthaven personally valuable, and therefore gave $1,000 to Lightcone on the same basis that I pay what I want for arts or events that I like. I also encouraged Ray to set up rationality.training in time for end-of-2024 professional development spending, and I'm looking forward to just directly paying for a valuable service! ↩︎
"POC || GTFO culture" need not be literal, and generally cannot be when speculating about future technologies. I wouldn't even want a proof-of-concept misaligned superintelligence!
Nonetheless, I think the field has been improved by an increasing emphasis on empiricism and demonstrations over the last two years - in technical research, in governance research, and in advocacy. I'd still like to see more careful caveating of claims for which we have arguments but not evidence, and it's useful to have a short handle for that idea - "POC || admit you're unsure", perhaps?