I don't think we used high-powered lobbyists in NY or CA (someone correct me if I'm wrong); their legislators already wanted to regulate the big AI companies, and they (and their staffers) are smart enough to distinguish a serious bill from the sloppy AI bills they usually see.
At the federal level, both Dems and the GOP want to go after the big AI companies, and I believe there's a bill with teeth that almost all of Congress would privately agree with. The problem is that the anti-regulation lobby already has Trump in their pocket, so they just need to buy one-third of the Senate to stop a veto override. MAGA senators are the most obvious targets, because Republicans don't remain in office for long if they feud with Trump.
The value of having 1 legislator who Gets It is vastly higher than the value of having 0 legislators who Get It, because that legislator can introduce bills. Alex Bores and Scott Wiener are the perfect examples of this; it didn't take a majority of their state legislatures to Get It in order to get some kind of AI regulation bill passed, but there would not have been a comparably good bill on the table without them.
This is a tangent, but the reason that capitalist societies outproduce communist societies really isn't about money as a motivator, it's about allocation of resources by supply and demand vs allocation by a bureaucracy of humans. The more complicated the supply chain, the more intractable it becomes for people to predict how resources are best spent at each stage of it in order to create complex final goods; but the simple answer "what creates the biggest profit margin for each economic actor" turns out to be a pretty efficient solution.
...with major caveats like "efficiently creates goods to meet people's desires weighted by purchasing power" and "this doesn't account for externalities" and "money can corrupt politics in various ways".
A market economy would work even better if people were completely unselfish! They would enact full redistribution (i.e. consumption completely decoupled from earnings) and price in the externalities and resist corruption. In such a society, money would be the true unit of caring: maximizing profit would be identical to doing your most efficient part in creating things people want.
When it specifically comes to loss-of-control risks killing or sidelining all of humanity, I don't believe Sam or Dario or Demis or Elon want that to happen, because it would happen to them too. (Larry Page is different on that count, of course.) There is a real conflict-theory angle in that some of them would like ASI to make them god-emperor of the universe, but all of them would definitely take a solution to "loss of control" if it were handed to them on a silver platter.
Arms races are bad things. First best by far is if nobody has the doomsday devices, but second best is if we attempt nonproliferation of doomsday devices.
As a parallel, we would still have been at risk in a world where DeepMind was working on building ASI but where Elon didn't freak out and start a competitor (followed by another competitor), just not as much risk. That's not because DeepMind are "the good guys"; it's because of race dynamics.
Capabilities being more jagged reduces p(doom), less jagged increases it.
Ceteris paribus, perhaps, but I think the more important factor is that more jagged capabilities imply a faster timeline. In order to be an existential threat, an AI only needs to have one superhuman ability that suffices for victory (be that superpersuasion, hacking, certain kinds of engineering, etc.), rather than needing to exceed human capabilities across the board.
rot13: Gur snvyfnsr jnf gur bayl cneg gung fhecevfrq zr, naq gur qvrtrgvp rkcynangvba sbe ubj guvf frghc pbhyq unir unccrarq ng nyy jnf phgr.
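(For anyone unfamiliar with the spoiler convention above: rot13 shifts each letter 13 places, so applying it twice is the identity. A minimal Python decode, using the standard library's built-in `rot13` codec; the sample string here is my own, not the spoiler text:)

```python
import codecs

def rot13(text: str) -> str:
    """Apply rot13 (its own inverse) to a string; non-letters pass through."""
    return codecs.decode(text, "rot13")

# Example with a neutral string, so the spoiler above stays encoded:
print(rot13("Uryyb, jbeyq!"))  # -> Hello, world!
print(rot13(rot13("Uryyb")) == "Uryyb")  # applying it twice round-trips
```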
I may never stop finding it funny the extent to which Trump will seek out the one thing we know definitively is going badly, and then choose that to lie and brag about.
We're not the target audience for those posts. He's telling everyone who's kissing his ass that this is a topic he wants them to lie about as well.
If we were doing a better job catching cheaters, presumably people would be doing it less?
Seems trivially easy for an idiot to start an account, lose some games, get frustrated, and start asking a chess bot what to do so that they can rescue a bad position / show that bastard what's what / vicariously enjoy winning. I expect that crowd to be the majority of cheaters, and for them to be essentially inelastic to the probability with which they get caught (the first time).
This post was certainly fun to write, and apparently fun to read as well, but I'm not very satisfied with it in retrospect: