That might be a crux here, since my view is that we take a lot of our knowledge of causality and physics essentially on trust, so that we don't need to repeat the experiments ourselves.
A better example than the Asilomar conference is the He Jiankui response. Scientists had strong norms against such research, and he went to jail for 3 years for genetically engineering human embryos. That was a stronger response to perceived ethical violations than is the norm in science.
Assuming this trend continues, there's no criticality threshold producing a discontinuity, and the most severe issues in AI alignment arise in the FOOM scenario, where we only get one chance to do it right. This trend line shows no discontinuity, just continuously improving efforts, so the criticality FOOM requires isn't there.
Basically, a FOOM scenario in AI means that once an AI reaches a certain level of intelligence, it crosses a criticality threshold where one improvement on average generates one or more further improvements, drastically shortening the time it takes to become superintelligent.
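To make the criticality threshold concrete: it behaves like a branching process, where each improvement spawns on average r further improvements. For r < 1 the cascade fizzles out; at r >= 1 it grows without bound. A minimal sketch (my illustration, not anything from the original discussion; the function name is mine):

```python
def total_improvements(r, generations):
    """Expected cumulative improvements when each improvement
    spawns an average of r further improvements."""
    total, current = 0.0, 1.0  # start from one seed improvement
    for _ in range(generations):
        total += current
        current *= r
    return total

# Subcritical (r < 1): the cascade converges, no FOOM.
print(total_improvements(0.5, 50))   # approaches 2.0
# Supercritical (r >= 1): the cascade runs away.
print(total_improvements(1.1, 50))
```

The whole argument about whether FOOM is plausible is then an argument about whether r ever crosses 1 as capability increases.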
The important part of his argument is in the second paragraph, and I agree, because by and large, pretty much everything we know about science and causality, at least initially for an AI, rests on trusting the scientific papers and experts. Virtually no knowledge comes from direct experimentation; it comes instead from trusting papers, experts, and books.
My own biggest disagreement with you is the idea that morality and values are objective. While I'm a moral realist, I'm the weakest kind of realist, and I view morals and values as inherently subjective. In other words, there's no fact of the matter here, and post-modernism is actually useful here (I'm a strong critic of post-modernism, but it's basically correct vis-a-vis morality and values).
Thanks, I'll retract that comment.
The entire P vs NP problem basically boils down to: "Is it easier to verify a correct answer than to generate it?" And while it's still unproven, in our universe the answer seems to be yes. So conditional on P not equaling NP, it's much easier to verify that a proof or hypothesis is correct than to generate it in the first place.
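To illustrate the asymmetry, take subset sum, a standard NP-complete problem (my example, not from the thread): checking a proposed certificate takes linear time, while the obvious way to find one searches up to 2^n subsets.

```python
from itertools import combinations

def verify(nums, subset, target):
    """Verification: linear time, just check membership and the sum."""
    return all(x in nums for x in subset) and sum(subset) == target

def generate(nums, target):
    """Generation: brute force over all 2^n subsets."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 9, 8, 4, 5, 7]
cert = generate(nums, 15)        # exponential search
print(cert, verify(nums, cert, 15))
```

The point is the same one made above about proofs: handing someone a certificate and letting them run `verify` is cheap; making them run `generate` is not.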
That might be a crux here, because my view is that hardware improvements are much harder to make effectively, especially in secret, around the human level. Landauer's principle essentially bounds the efficiency of small-scale computation at an energy budget close to the brain's (about 20 watts). Combine this with hardware being 2-3 orders of magnitude less efficient than the brain (human artifacts generally compare poorly to evolved ones on efficiency), and with the fact that it's easier to improve software than hardware because of the virtual/physical distinction, and this is a crux for me.
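For rough numbers (standard physical constants; the 20-watt figure is the usual ballpark for the brain, and the arithmetic is my own back-of-the-envelope):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
POWER = 20.0         # rough brain power budget, watts

# Landauer limit: minimum energy to erase one bit at temperature T.
e_bit = K_B * T * math.log(2)   # roughly 2.9e-21 J per bit

# Upper bound on irreversible bit operations per second at 20 W.
max_ops = POWER / e_bit         # on the order of 7e21 bit-erasures/s
print(f"{e_bit:.2e} J/bit, {max_ops:.1e} ops/s")
```

This is an idealized thermodynamic ceiling, not an engineering target; the point in the comment is that current hardware sits orders of magnitude below it, while the brain sits much closer.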