You could also try to fit an ML potential to some expensive method, but it's very easy to produce very wrong results if you don't know what you're doing (I wouldn't be able to, for one).
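To make the failure mode concrete, here is a toy one-dimensional sketch (made-up "expensive" potential, sklearn standing in for a real ML-potential package): fit only near the minimum and the model can be badly wrong as soon as you leave the training region.

```python
# Toy illustration only: a regressor fit to "expensive" reference energies
# sampled in a narrow range, then queried outside that range.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def expensive_energy(r):
    # Stand-in for a high-level method: a Lennard-Jones-like pair energy.
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

rng = np.random.default_rng(0)
r_train = rng.uniform(1.0, 1.6, size=200)  # sample only near the potential well
model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=10.0)
model.fit(r_train.reshape(-1, 1), expensive_energy(r_train))

r_test = np.array([0.85, 0.95])  # short distances outside the training range (repulsive wall)
print(model.predict(r_test.reshape(-1, 1)))  # extrapolated predictions
print(expensive_energy(r_test))              # the "true" reference values
```

A real ML potential has many more ways to go wrong (descriptors, forces, sampling of configurations), but the extrapolation problem alone already bites hard.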
Ahh, for MD I mostly used DFT with VASP or CP2K, but then I was not working on the same problems. For thorny cases (biggish systems where plain DFT fails, but no MD needed) I had good results using hybrid functionals and tuning their parameters to match some result from higher-level methods. Did you try meta-GGAs like SCAN? Sometimes they are surprisingly decent where PBE fails catastrophically...
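For reference, the relevant VASP INCAR tags look roughly like this (tag names from memory - double-check the manual; the values are just typical starting points, not recommendations). With the hybrid, the idea was to sweep the exact-exchange fraction until e.g. a band gap or reaction energy matched the higher-level reference:

```python
# Illustrative INCAR fragments, written as plain dicts (not a full input file).
scan_tags = {
    "METAGGA": "SCAN",    # SCAN meta-GGA
    "LASPH": ".TRUE.",    # aspherical corrections, needed with meta-GGAs
}
hybrid_tags = {
    "LHFCALC": ".TRUE.",  # switch on exact (Hartree-Fock) exchange
    "HFSCREEN": 0.2,      # screened, HSE-style hybrid
    "AEXX": 0.30,         # fraction of exact exchange - the main tuning knob
}
```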
I did quantum chemistry simulations for a living for a few years, so I think I can actually comprehend the scale. I had access to one of the top-50 supercomputers, and the codes just do not scale to that many processors for a single simulation, regardless of system size (even if they had let me launch a job that big, which was not possible).
Isn't this a trivial consequence of LLMs operating on tokens as opposed to letters?
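E.g., with tiktoken and a GPT-4-era vocabulary, a word arrives as a couple of multi-character chunks rather than as letters:

```python
# Quick illustration: the model receives token IDs, not letters.
# Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
print(tokens)                                               # a few integer IDs
print([enc.decode_single_token_bytes(t) for t in tokens])   # multi-letter chunks
```

Anything letter-level then depends on the model having learned, indirectly, how its tokens spell out.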
True, but this doesn't apply to the original reasoning in the post - he assumes a constant probability, while you need an increasing probability (as with the balls) to make the math work.
Or decreasing benefits, which is probably the case in the real world.
Edit: misread the previous comment, see below.
It seems very weird and unlikely to me that the system would go to the higher-energy state 100% of the time.
I think vibrational energy is neglected in the first paper, while it would implicitly be accounted for in AIMD. Also, the higher-energy state could be the lower free-energy state - if the difference is big enough, it could go there nearly 100% of the time.
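To put a number on "nearly 100%" (illustrative value, not from the paper): equilibrium populations follow a Boltzmann factor in the free-energy difference, so with e.g. $\Delta G = 0.2$ eV at room temperature ($k_B T \approx 0.026$ eV)

$$\frac{p_{\text{higher }G}}{p_{\text{lower }G}} = e^{-\Delta G / k_B T} \approx e^{-0.2/0.026} \approx 5 \times 10^{-4},$$

i.e. the lower-free-energy state (even if it's the higher-energy one) is occupied more than 99.9% of the time.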
Although no single job ever takes the whole supercomputer - so if you have the whole machine to yourself and the calculations do not depend on each other, you can run many of them in parallel.
That's one simulation though. If you have to screen hundreds of candidate structures, and simulate every step of the process because you cannot run experiments, it becomes years of supercomputer time.
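Rough illustrative arithmetic (all numbers made up, just to show the order of magnitude): if one AIMD run of a few hundred atoms for tens of picoseconds costs on the order of $10^5$ core-hours and you need a few such runs per candidate, then

$$300 \text{ candidates} \times 5 \text{ runs} \times 10^{5} \text{ core-hours} \approx 1.5 \times 10^{8} \text{ core-hours},$$

which is roughly a year of exclusive use of a ~20,000-core machine - before counting failed jobs and reruns.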
There are plenty of people on LessWrong who are overconfident in all their opinions (or maybe write as if they are, as a misguided rhetorical choice?). It is probably a selection effect from people who appreciate the Sequences - whatever you think of his accuracy record, EY definitely writes as if he's always very confident in his conclusions.
Whatever the reason, (rhetorical) overconfidence is most often seen here as a venial sin, as long as you bring decently-reasoned arguments and are willing to change your mind in response to others'. Maybe that doesn't apply in your case, but I'm sure many would have been lighter with their downvotes had the topic been a different one - just a few people strong-downvoting instead of simply downvoting can change the karma balance quite a bit.
I disagree. You seem to think that the list of missing technologies sketched by Crawford is exhaustive, but it's not. One example that ties into your conclusions: paper. Maybe the Romans could have invented the printing press (I'm not sure), but printing on super-expensive vellum or papyrus would have been pointless.
And that's just one example; here's another. The Romans spread and improved watermills, so, contra your argument, they were interested in labor-saving technology. But their mills were not as good or as widespread as modern or even late medieval ones. (Mill technology was also very important to the Industrial Revolution, as you mention.)