LESSWRONG

O O

swe, speculative investor

Comments

O O's Shortform
O O · 8d · -3 · -5

AGI misalignment is less likely to look like us being turned into gray goo and more like the misalignment of the TikTok recommendation algorithm (though possibly less severe, since that algorithm doesn't understand human values at all).

Why is OpenAI releasing products like Sora and Atlas?
Answer by O O · Oct 26, 2025 · 1 · -2

The simplest answer is that progress is stalling. They could have pursued the engagement-optimization angle since 2023, but there were promising alternatives then. By 2025, those had all failed: pretraining returns stalled, and reasoning proved too inefficient to scale.
Origins and dangers of future AI capability denial
O O · 14d · 8 · 0

Small note: the negative consensus seems to be concentrated in the Anglosphere.

Musings from a Lawyer turned AI Safety researcher (ShortForm)
O O · 1mo · 1 · 0

I'm not in law, but this seems more like an online course being sold to me than a real conference: there is a long list of company logos, a bunch of promised credits and certifications, and a large blob of customer testimonials.

I did some quick googling, and an actual conference would look like this: https://www.lsuite.co/techgc

I'm surprised this comment has so many upvotes. Did anyone actually click the link?
Musings from a Lawyer turned AI Safety researcher (ShortForm)
O O · 1mo · 7 · 3

She is? She just seems like a standard LinkedIn grifter.

We won’t get docile, brilliant AIs before we solve alignment
O O · 1mo · -1 · -3

This seems like a highly speculative post built on guesses, with little supporting evidence and some faulty claims.

O O's Shortform
O O · 1mo · 1 · 0

How is any of that wrong, or related to the question of AI being aligned? Do doomers seriously think you can indefinitely stop automation? It's been happening for centuries.

They're ignoring alignment, but so are most labs. I still don't get how this is not irrational. If it were worded as "AI will inevitably become smarter," then no one here would care.

O O's Shortform
O O · 1mo · 1 · -14

The reaction to Mechanize seems pretty deranged. As far as I can tell, they don't deny or hasten existential risk any more than other labs; they just don't sugarcoat it. It's quite obvious that the economic value of AI lies in labor automation, and that the only way to stop this is to stop AI progress itself. The forces of capitalism are quite strong: labor unions in the US tried to slow automation, and it just moved to China as a result (among other reasons). There is a reason Yudkowsky always implies measures like GPU bans.

It just seems like they hit a nerve, since apparently a lot of doomerism is fueled by insecurity about job replacement.

Jan_Kulveit's Shortform
O O · 1mo · 8 · 16

"In practice, this likely boils down to a race. On one side are people trying to empower humanity by building coordination technology and human-empowering AI. On the other side are those working to create human-disempowering technology and render human labor worthless as fast as possible."

I mean, if we're being completely candid here, there is almost no chance the first group wins this race, right?

Wei Dai's Shortform
O O · 1mo · 8 · 11

"the world had more centralization, such that the Industrial Revolution never started in an uncontrolled way"

What motive does a centralized dominant power have to allow any progress? The entire world would likely look more like North Korea.

Posts

5 · If the DoJ goes through with the Google breakup, where does DeepMind end up? [Q] · 1y · 1
26 · Thoughts on Francois Chollet's belief that LLMs are far away from AGI? [Q] · 1y · 17
5 · What happens to existing life sentences under LEV? [Q] · 1y · 7
14 · Hot take: The AI safety movement is way too sectarian and this is greatly increasing p(doom) · 1y · 15
27 · Supposing the 1bit LLM paper pans out [Q] · 2y · 11
13 · OpenAI wants to raise 5-7 trillion · 2y · 29
1 · O O's Shortform · 2y · 130