I assume by "health-optimizing genetic manipulation" you mean embryo selection (seeing as gene editing is not possible yet). Indeed, Rationalists are more likely to be interested in embryo selection. And indeed, it is costly. But I'd say this is different from costly parenting - it's a one-time upfront cost to improve your child's genetics.
I ~never hear the 2nd thing among rationalists ("improve your kid's life outcomes by doing a lot of research and going through complicated procedures!").
Homeschooling is often preferred not because it substantially improves life outcomes but because it's nicer for the children (and often the parents). School involves a lot of wasted time and effort, and is frustrating and boring for many children, so homeschooling can make their childhood nicer irrespective of life outcomes.
I was actually thinking of writing a follow-up post like this. I basically agree.
Let's talk about two kinds of choice:
- choice in the moment
- choice of what kind of agent to be
I think this is the main insight - depending on what you consider the goal of decision theory, you're thinking about either (1) or (2) and they lead to conflicting conclusions. My implicit claim in the linked post is that when describing thought experiments like Newcomb's Problem, or discussing decision theory in general, people appear to be referring to (1), at least in classical decision theory circles. But on LessWrong people often switch to discussing (2) in a confusing way.
> the core problem in decision theory is reconciling these various cases and finding a theory which works generally
I don't think this is the core problem, because once you distinguish (1) from (2) it doesn't make sense to look for a single theory that does best at two different goals.
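To make the conflict concrete, here is a toy sketch (my own illustration, not from the original post) using the standard Newcomb payoffs of $1M in the opaque box and $1k in the transparent box, and assuming a perfectly accurate predictor. Framing (1) holds the prediction fixed and compares actions; framing (2) compares agent types and lets the prediction track the type.

```python
# Toy Newcomb payoffs: the opaque box holds $1M iff the predictor expects one-boxing;
# the transparent box always holds $1k. Predictor is assumed perfectly accurate.

PAYOFFS = {  # (predicted_one_box, actually_one_box) -> payoff in dollars
    (True, True): 1_000_000,   # predicted one-box, takes only the opaque box
    (True, False): 1_001_000,  # predicted one-box, takes both boxes
    (False, True): 0,          # predicted two-box, takes only the opaque box
    (False, False): 1_000,     # predicted two-box, takes both boxes
}

# (1) Choice in the moment: the prediction is already fixed, so compare actions
# with the prediction held constant -- two-boxing is better in both rows.
for predicted in (True, False):
    one_box = PAYOFFS[(predicted, True)]
    two_box = PAYOFFS[(predicted, False)]
    print(f"prediction fixed (one_box={predicted}): one-box ${one_box:,}, two-box ${two_box:,}")

# (2) Choice of what kind of agent to be: the prediction tracks the agent type,
# so compare agent types -- the one-boxing agent ends up with more.
for agent_one_boxes in (True, False):
    payoff = PAYOFFS[(agent_one_boxes, agent_one_boxes)]
    print(f"agent type (one_box={agent_one_boxes}): ${payoff:,}")
```

Under (1) two-boxing dominates for either fixed prediction; under (2) the one-boxing agent walks away richer. That is exactly the conflict between the two goals.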
I think those other types of startups also benefit from expertise and deep understanding of the relevant topics (for advocacy, for example: what are you advocating for and why, how well do you understand the surrounding arguments and thinking...). You don't want someone who doesn't understand the "field" working on "field-building".
My bad, I read you as disagreeing with Neel's point that it's good to gain experience in the field or otherwise become very competent at the type of thing your org is tackling before founding an AI safety org.
That is, I read "I think that founding, like research, is best learned by doing" as "go straight into founding and learn as you go along".
> I naively expect the process of startup ideation and experimentation, aided by VC money
It's very difficult to come up with AI safety startup ideas that are VC-fundable. This seems like a recipe for coming up with nice-sounding but ultimately useless ideas, or wasting a lot of effort on stuff that looks good to VCs but doesn't advance AI safety in any way.
I disagree with this frame. Founders should deeply understand the area they are founding an organization to deal with. It's not enough to be "good at founding".
This makes sense as a strategic choice, and thank you for explaining it clearly, but I think it’s bad for discussion norms because readers won’t automatically understand your intent as you’ve explained it here. Would it work to substitute the term “alignment target” or “developer’s goal”?
> When I say "human values" without reference I mean "type of things that human-like mind can want and their extrapolations"
This is a reasonable concept, but it should have a different handle from “human values”, because it makes common phrases like “we should optimize for human values” nonsensical. For example, human-like minds can want chocolate cake, but that tells us nothing about the relative importance of chocolate cake versus avoiding disease, which is what matters for decision-making.
I think the only possible tension here is regarding embryo selection, and it's not a real tension. The claims are something like "if what's giving you pause is the high demands on parents, just wing it and have kids anyway" + "if you already know you want a kid and want to optimize their genes/happiness, here are some ways to do it". I think most Rationalists would agree that the life of an additional non-embryo-selected, ordinarily-parented child is still worth creating. In other words, one set of claims is about the floor: how little effort you can put in per child and it still be a good idea to have the child. The other set is about effective ways to put in more effort if you want to (mainly embryo selection for health/intelligence).