LaplaceHolder

Posts

LaplaceHolder's Shortform · 1mo

Comments

Jacob_Hilton's Shortform
LaplaceHolder · 14d

This is what came to mind for me:

"But once [protein structure prediction] is solved [-ish], you'd be able to effectively go through dozens of [de novo proteins] per day for, say, $1000 [each], while previously, each one would've taken six months and $50,000."

Penny's Hands
LaplaceHolder · 25d

Hands

Cheap Labour Everywhere
LaplaceHolder · 1mo

McDonald's, of course!

Where precisely would you go in India? I can recommend some restaurants, but depending on where you start, you may not consider them worth the drive.

Open Thread Autumn 2025
LaplaceHolder · 1mo

I have not seen much written about the incentives around strategic throttling of public AI capabilities. Links would be appreciated! I've seen speculation and assumptions woven into other conversations, but haven't found a focused discussion on this specifically.

If knowledge work can be substantially automated, will this capability be shown to the public? My current expectation is no.

I think it's >99% likely that various national security folks are in touch with the heads of AI companies, 90% likely that they can exert significant control over model releases via implicit or explicit incentives, and 80% likely that they would prevent or substantially delay companies from announcing the automation of big chunks of knowledge work. I expect a tacit understanding that if models that destabilize society beyond some threshold are released, the toys will be taken away. Perhaps the government doesn't need to be involved at all; the incentives alone support self-censorship to avoid regulation.
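
Chaining those estimates together (a rough sketch; this assumes each probability is already conditioned on the previous step, so the joint probability is just the product):

```python
# Joint probability of the full chain, treating each estimate as
# conditional on the one before it (my assumption about how they compose).
p_contact  = 0.99  # national security folks are in touch with AI company heads
p_control  = 0.90  # given contact, significant control over model releases
p_suppress = 0.80  # given control, they'd block or delay automation announcements

print(p_contact * p_control * p_suppress)   # 0.7128 -> ~71%
```

So on my numbers, the whole story comes out to roughly a 71% chance.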

This predicts public model performance that lingers at "almost incredibly valuable", whether or not there is a technical barrier there, while internal capabilities advance as fast as they can. Even if this is not happening now, the mechanism seems relevant to the future.

A Google employee might object: "I had lunch with Steve yesterday; he's the world's leading AI researcher, and he's working on public-facing models. He's a terrible liar (we play poker on Tuesdays), and he showed me his laptop." That would be good evidence that the frontier is visible, at least to those who play poker with Steve.

There might be some hints of an artificial barrier in eval performance or scaling metrics, but it seems things are getting more opaque.

Also, I am new, and I've really been enjoying reading the discussions here!
