wobblz · 1y · 10

My point is that, as you said, you take the safest route when you don't know what others will do: whatever is best for you and, most importantly, guaranteed. You take some years, and yes, you lose the chance to walk away without doing any time, but at least you stay in complete control of your situation. Now imagine a PD with 500 actors... I know what I'd pick.

wobblz · 1y · 51

"The moratorium on new large training runs needs to be indefinite and worldwide."

Here lies the crux of the problem. It is a classical prisoners' dilemma, where each individual receives the greatest payoff by betraying the group rather than cooperating. In this case, a bad actor would have the time to leapfrog the competition and be the first to cross the line to superintelligence, which would arguably be an even worse outcome.
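The dilemma structure referenced here can be made concrete with the textbook two-player payoff matrix (the numbers below are the conventional illustrative ones, not from the comment):

```python
# Standard prisoner's dilemma payoffs for "me" (higher is better).
# Keys are (my_move, other_move); values use the textbook 5 > 3 > 1 > 0 ordering.
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # mutual cooperation
    ("cooperate", "defect"): 0,     # I am exploited
    ("defect", "cooperate"): 5,     # I exploit the cooperator
    ("defect", "defect"): 1,        # mutual defection
}

def best_response(other_move):
    """The move that maximises my payoff given the other player's move."""
    return max(("cooperate", "defect"), key=lambda m: PAYOFF[(m, other_move)])

# Defection is the best response either way, so self-interested actors
# end up at (1, 1) even though mutual cooperation at (3, 3) is better.
print(best_response("cooperate"))  # -> defect
print(best_response("defect"))     # -> defect
```

This is the sense in which a moratorium rewards the one actor who breaks it: defection is dominant for each individual even though everyone prefers universal cooperation.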

The genie is out of the bottle. Given how (relatively) easy it is to train large language models, it is safe to assume that the field is now uncontrollable: any actor with enough data and processing power can give it a go. You, me, anyone. Contrast this with advanced semiconductor manufacturing, where controlling ASML, the only sufficiently advanced company specialising in the photolithography machines used in chip production, amounts to effectively overseeing the entire chip manufacturing industry.