The simplest answer is that progress is stalling. They could have pursued the engagement-optimization angle since 2023, but there were promising alternatives then. By 2025, those had all failed: pretraining returns stalled, and reasoning proved too inefficient to scale.
Small note: the negative consensus seems to be concentrated in the Anglosphere.
I'm not in law, but this looks more like an online course being sold to me than a real conference: a long list of company logos, a bunch of promised credits and certifications, and a large blob of customer testimonials.
Did some quick googling; an actual conference looks like this: https://www.lsuite.co/techgc
I'm surprised this comment has so many upvotes. Did anyone actually click the link?
She is? She just seems like a standard LinkedIn grifter.
Seems like a highly speculative post built on guesses, with little evidence and some faulty claims.
How is any of that wrong, or even related to the question of AI being aligned? Do doomers seriously think you can stop automation indefinitely? It's been happening for centuries.
They're ignoring alignment, but so are most labs. I still don't see how this is irrational. If it were worded as "AI will inevitably become smarter," no one here would care.
The reaction to Mechanize seems pretty deranged. As far as I can tell, they don't deny or hasten existential risk any more than other labs; they just don't sugarcoat it. It's quite obvious that the economic value of AI lies in labor automation, and that the only way to stop this is to stop AI progress itself. The forces of capitalism are strong: labor unions in the US tried to slow automation, and production just moved to China as a result (among other reasons). There is a reason Yudkowsky always invokes measures like GPU bans.
It just seems like they hit a nerve, since apparently a lot of doomerism is fueled by insecurity about job replacement.
In practice, this likely boils down to a race. On one side are people trying to empower humanity by building coordination technology and human-empowering AI. On the other side are those working to create human-disempowering technology and render human labor worthless as fast as possible.
I mean, if we're being completely candid here, there is almost no chance the first group wins this race, right?
"the world had more centralization, such that the Industrial Revolution never started in an uncontrolled way"
What motive does a centralized dominant power have to allow any progress? The entire world would likely look more like North Korea.
AGI misalignment is less likely to look like us being gray-goo'd and more like the misalignment of the TikTok recommendation algorithm (though possibly less severe, since that one doesn't understand human values at all).