
jacopo · 1mo · 10

You could also try to fit an ML potential to some expensive method, but it's very easy to produce very wrong things if you don't know what you're doing (I wouldn't be able to, for one).

jacopo · 1mo · 10

Ah, for MD I mostly used DFT with VASP or CP2K, but then I was not working on the same problems. For thorny cases (biggish systems where plain DFT fails, but no MD needed) I had good results using hybrid functionals and tuning their parameters to match some result from higher-level methods. Did you try meta-GGAs like SCAN? Sometimes they are surprisingly decent where PBE fails catastrophically...
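The "tuning the parameters to match a higher-level result" step can be sketched with a toy model: choose the exact-exchange fraction alpha of a hypothetical hybrid functional so that a crude band-gap model reproduces a reference gap. The linear mixing model and every number here are made up for illustration, not taken from the thread or from any real material.

```python
from scipy.optimize import brentq

gap_gga = 1.1   # hypothetical GGA band gap (eV), too small as usual
gap_hf = 6.0    # hypothetical Hartree-Fock gap (eV), too large
gap_ref = 2.3   # hypothetical higher-level reference gap (eV), e.g. from GW

def toy_gap(alpha):
    # Crude linear interpolation between the GGA and HF limits.
    return (1.0 - alpha) * gap_gga + alpha * gap_hf

# Find the mixing fraction where the toy gap matches the reference.
alpha_opt = brentq(lambda a: toy_gap(a) - gap_ref, 0.0, 1.0)
```

In practice one would rerun the actual hybrid calculation at each candidate alpha rather than use a closed-form model, but the fitting logic is the same.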

jacopo · 1mo · 20

My job was doing quantum chemistry simulations for a few years, so I think I can actually comprehend the scale. I had access to one of the top-50 supercomputers, and codes simply do not scale to that number of processors for a single simulation, independently of system size (even if they had let me launch a job that big, which was not possible).
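A hedged illustration of why a single simulation stops scaling: under Amdahl's law, a code with serial fraction s saturates at a speedup of 1/s no matter how many processors you add. The serial fraction below is invented for illustration, not measured from any real code.

```python
def speedup(n_procs, serial_fraction):
    # Amdahl's law: serial part runs at fixed cost, parallel part divides by n.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

s = 0.01  # hypothetical 1% serial work (communication, I/O, setup, ...)

# Going from 1k to 100k cores barely helps a single job:
s_1k = speedup(1_000, s)      # roughly 91x
s_100k = speedup(100_000, s)  # just under 100x, despite 100x more hardware
```

This is why throwing a whole supercomputer at one job buys almost nothing past some point, independently of how big the system is.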

jacopo · 2mo · 12

Isn't this a trivial consequence of LLMs operating on tokens as opposed to letters?

jacopo · 5mo · 10

True, but this doesn't apply to the original reasoning in the post - he assumes constant probability, while you need increasing probability (as with the balls) to make the math work.

Or decreasing benefits, which probably is the case in the real world.

Edit: misread the previous comment, see below

jacopo · 7mo · 10

It seems very weird and unlikely to me that the system would go to the higher energy state 100% of the time

I think vibrational energy is neglected in the first paper; it would implicitly be accounted for in AIMD. Also, the higher energy state could be the lower free energy state - if the difference is big enough, it could go there nearly 100% of the time.
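A toy two-state check of that last point: a state higher in energy can still be occupied nearly 100% of the time if entropy makes its free energy lower (F = E - T·S). All numbers below are invented for illustration.

```python
import math

k_B = 8.617e-5        # Boltzmann constant in eV/K
T = 300.0             # temperature in K
dE = 0.05             # eV: state B is higher in *energy*...
dS = 5.0 * k_B        # ...but higher in entropy by 5 k_B
dF = dE - T * dS      # free-energy difference F_B - F_A (negative here)

# Boltzmann occupation probability of the higher-energy state B
p_B = 1.0 / (1.0 + math.exp(dF / (k_B * T)))
```

With these (made-up) numbers the entropy term overwhelms the energy penalty, and the higher-energy state dominates the occupation at room temperature.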

jacopo · 7mo · 74

Although they never take the whole supercomputer, so if you have the whole supercomputer to yourself and the calculations do not depend on each other, you can run many in parallel.
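A minimal sketch of that "embarrassingly parallel" pattern, with a placeholder function standing in for one independent simulation (in reality these would be separate batch jobs on separate nodes, not threads in one process):

```python
from concurrent.futures import ThreadPoolExecutor

def run_one(structure_id):
    # Placeholder for one independent calculation (e.g. one VASP run);
    # the "result" here is fake, just the id squared.
    return structure_id, structure_id ** 2

# Launch all independent "jobs" concurrently and collect results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_one, range(8)))
```

The key property is that no job waits on another, so throughput scales with however many jobs the machine lets you run at once.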

jacopo · 7mo · 73

That's one simulation though. If you have to screen hundreds of candidate structures, and simulate every step of the process because you cannot run experiments, it becomes years of supercomputer time.
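A back-of-envelope version of that claim; every number below is a made-up illustration, not from the thread:

```python
candidates = 300               # hypothetical structures to screen
steps_per_candidate = 10       # hypothetical process steps simulated each
core_hours_per_step = 500_000  # hypothetical cost of one big simulation
machine_cores = 100_000        # hypothetical supercomputer partition

total_core_hours = candidates * steps_per_candidate * core_hours_per_step
wallclock_years = total_core_hours / machine_cores / (24 * 365)
```

Even monopolizing a hundred-thousand-core machine, a screening campaign of this (invented) size takes on the order of years of wall-clock time.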

jacopo · 7mo · 20

There are plenty of people on LessWrong who are overconfident in all their opinions (or maybe write as if they are, as a misguided rhetorical choice?). It is probably a selection effect among people who appreciate the Sequences - whatever you think of his accuracy record, EY definitely writes as if he's always very confident in his conclusions.

Whatever the reason, (rhetorical) overconfidence is most often seen here as a venial sin, as long as you bring decently reasoned arguments and are willing to change your mind in response to others'. Maybe it's not the case for you, but I'm sure many would have been lighter with their downvotes had the topic been another one - just a few people strong-downvoting instead of simply downvoting can change the karma balance quite a bit.

jacopo · 8mo · 241

(PhD in condensed matter simulation) I agree with everything you wrote where I know enough (for readers: I don't know anything about lead contacts and several other tricky experimental points, so my agreement should not count for too much).

I'll just add, on the simulation side (Q3): this is what you would expect to see in a room-temperature superconductor, unless it relies on a completely new mechanism. But it is also something you see in a lot of materials that superconduct at 20 K or so - even in some where the superconducting phase is completely suppressed by magnetism or structural distortions or some other phase transition. In addition, DFT+U is a quick-and-dirty approach for this kind of problem, as fits the speed at which the preprint was put out. So the simulations are Bayesian evidence in favor, but very weak.
