Anyone writing an effortful response to the original post should be presumed to be acting in good faith to some reasonable degree, and any point you think they ignored was probably either misunderstood, or its relevance was not obvious to the comment's author. By responding harshly to what might be a non-obvious misunderstanding, you're essentially taking the conflict side of the mistake-versus-conflict-theory divide.
Any comments that aren't effortful and are easily seen to be answered in the original post will probably just be downvoted anyway, and the proper response from the OP is simply not to respond at all.
To be clear, I think the community here is probably kind enough that these aren't big problems, but it still irks me a bit to make it slightly easier to be unkind.
Hmm, some of these reacts seem kind of passive-aggressive to me, the "Not planning to respond" and "I already addressed this" in particular just close off conversational doors in a fairly rude way. How do you respond to someone saying "I already addressed this" to a long paragraph of yours in such a low-effort way? It's like texting "ok" to a long detailed message.
If you believe this, and you have not studied quantum chemistry, I invite you to consider how you could possibly be sure of it. This is a mathematical question: there is a hard, mathematical limit to the accuracy that can be achieved in finite time.
Doesn't the existence of AlphaFold basically invalidate this? The exact same problems you describe for band-gap computation exist for protein folding: the underlying true equations that need to be solved are monstrously complicated in both cases, and previous human-made approximate models aren't very accurate in either case... yet this didn't prevent AlphaFold from destroying previous human attempts by just using a lot of protein-structure data and the magic generalisation power of deep networks. This tells me there's a lot of performance to be gained from clever approximations to quantum-mechanical problems.
I could ask just the same why you'd identify so strongly with the mere pattern of neural activations that makes up the memes in a child's mind. This preference of mine is getting close to the bedrock of my preference ordering: I want my child to share my genes because that's just kind of what I want; I don't know how to explain it in terms of any more fundamental desire of mine.
But like I said, I'd be fine with CRISPR to change a small fraction of the genes which have an out-sized impact on success, what I don't want is to change (or worse, take from someone else) the large number of genes which don't particularly influence success or intelligence, but which make me who I am.
But wait! Why stop with two parents? Couldn’t we get chromosomes from the embryos of more than one couple?
I'm very, very interested in embryo/chromosomal selection of this kind for my future children... but there is absolutely no chance, no fucking chance at all, that I'd be okay with using DNA beyond my spouse's and my own; the idea repulses me on an incredibly deep level. I want my children to look like me, and it's very important to me that a plurality of their genes be mine. I'm okay with doing CRISPR to change specific genes in addition to the chromosomal selection, so they wouldn't be 50% my genes, maybe a bit less, but if you can point to some specific third human and say "yeah, an equal fraction of the genes came from this one other dude", I'm out.
There is an idea that I’ve sometimes heard around rationalist and EA circles, that goes something like “you shouldn’t ever feel safe, because nobody is actually ever safe”.
Wait, really?! If this is true then I have severely overestimated the sanity minimum of rationalists. The objections in your post are all true, of course, but they should also pop into a sane person's mind within like 15 seconds of actually hearing that statement...
The main advantage of Tool AIs is that they can be used to solve alignment for more agentic approaches. You don't need to prevent people from building agentic AI for all time, just in the interim period while we have Tool AI but don't yet have alignment.
The way to actually make the universe colder and preserve all the energy currently going to waste in stars is to dump all the matter in your galaxy into two giant spinning black holes, and then extract energy via the Penrose process. There's no way that a civilisation would just say "oops, we want to use reversible computing, I guess we now have no use for all those stars and giant gas clouds, let's just leave them be as they are now..."
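For context on how much energy this buys you (standard general-relativity result, not from the original comment): the energy extractable from a single Kerr black hole via the Penrose process is the difference between its mass and its irreducible mass, which caps out at roughly 29% of the hole's mass-energy for a maximally spinning hole:

```latex
% Units: G = c = 1. M is the black hole mass, J its angular momentum,
% and a = J/M the spin parameter.
M_{\mathrm{irr}}^{2} = \frac{M^{2}}{2}\left[1 + \sqrt{1 - \frac{a^{2}}{M^{2}}}\,\right]

% The Penrose process can reduce M only down to M_irr, so the extractable
% energy is
E_{\max} = M - M_{\mathrm{irr}}

% For an extremal Kerr hole (a = M): M_irr = M/\sqrt{2}, hence
E_{\max} = \left(1 - \tfrac{1}{\sqrt{2}}\right) M \approx 0.29\,M
```

That 29% of rest-mass energy dwarfs the fraction of a percent that stellar fusion ever releases, which is the point of the comment's proposal.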
But it's just that we don't see any evidence of alien civilisation when we look at the stars, implying that any alien civ that does exist has a very, very strong preference for not being seen... which doesn't square at all with the "oh well, if humans see us a bit it's no big deal" attitude; this is a civilisation that has hampered its own technological growth, probably for millennia (as interstellar travel requires), in order not to be seen. The seas are so vast compared to the area that fighter jets can survey, and the apparent capabilities of the alien ships so incredible, that it should be trivial for them to evade literally all observation. (And the CMB temperature sets a lower bound, decreasing over time, on the lowest temperature you can achieve in outer space anyway.)
Has Cade Metz bothered to perhaps read a bit more on AI risk than the one-sentence statement in the safe.ai open letter? To my eye this article is full of sneering and dismissive insinuations about the real risk. It's like the author is only writing this article in the most grudging way possible, because at this point the prestige of the people talking about AI risk has gotten so large that he can't quite so easily dismiss it without losing status himself.
I think rationalists need to snap out of the "senpai noticed me" mode with respect to the NYT, and actually look at the pathetic level its AI articles operate on. Is quoting the oldest, most famous and most misunderstood meme of AI safety really the level you ought to expect from what is ostensibly the peak of journalism in the western world?