A number of people seem to have departed OpenAI at around the same time as you. Is there a particular reason for that which you can share? Do you still think that people interested in alignment research should apply to work at OpenAI?
In the case of reducing mutational load to near zero, you might be doing targeted changes to huge numbers of genes. There is presumably some point at which it's easier to create a genome from scratch.
I agree it's an open question though!
An alternative to editing many genes individually is to synthesise the whole genome from scratch, which is plausibly cheaper and more accurate.
I would find this more useful if you spelled out a bit more about your scoring method. You say:
They must be loyal, intelligent, and hardworking, they must have a sense of dignity, they must like humans, and above all they must be healthy.
Which of these do you think are the most important? Why do these traits matter? (for example, hardworking dogs are not really necessary in the modern world)
And why these traits and not others? (for example: size, cleanliness, appearance, getting along with other animals)
a dog which is as close to being a wolf as one can get without sacrificing any of those essential characteristics which define a dog as such
Why do you think a dog that is close to a wolf is objectively better than dogs which are further away?
OpenPhil gave Carl Shulman $5m to re-grant
I didn't realise this was happening. Is there somewhere we can read about grants from this fund when/if they occur?
Would this approach have any advantages over brain uploading? I would assume brain uploading to be much easier than running a realistic evolution simulation, and we would have to worry less about alignment.
I filled in the survey! Like many people I didn't have a ruler to use for the digit ratio question.
Also, I'm torn between how to interpret Snape's last question - my first thought was that he was verifying the truth of a story he had been told ("Your master tortured her, now join the light side already!" being the most likely), but upon rereading, I wonder if he was worried that she had been used as Horcrux fuel.
Or he could have been verifying a deal he made with Voldemort, though that might not fit as well with Snape's character.
I found it really helpful to have a list of places where Eliezer and Paul agree. It's interesting to see that there is so much overlap on big-picture questions, like AI being extremely dangerous.