O O

swe, speculative investor

Comments

Musings from a Lawyer turned AI Safety researcher (ShortForm)
O O · 5d

I'm not in law, but it seems more like an online course being sold to me than a real conference. There is a long list of company logos, a bunch of promised credits and certifications, and a large blob of customer testimonials.

Some quick googling suggests an actual conference would look like this: https://www.lsuite.co/techgc

I'm surprised this comment has so many upvotes. Did anyone actually click the link?

Musings from a Lawyer turned AI Safety researcher (ShortForm)
O O · 6d

She is? She just seems like a standard LinkedIn grifter.

We won’t get docile, brilliant AIs before we solve alignment
O O · 8d

Seems like a highly speculative post built on guesses, with little evidence and some faulty claims.

O O's Shortform
O O · 8d

How is any of that wrong, or related to the question of AI being aligned? Do doomers seriously think you can indefinitely stop automation? It’s been happening for centuries.


They’re ignoring alignment, but so are most labs. I still don’t get how this is not irrational. If it were worded as “AI will inevitably become smarter,” then no one here would care.

O O's Shortform
O O · 9d

The reaction to Mechanize seems pretty deranged. As far as I can tell they don't deny or hasten existential risk any more than other labs. They just don't sugarcoat it. It's quite obvious that the economic value of AI is for labor automation, and that the only way to stop this is to stop AI progress itself. The forces of capitalism are quite strong, labor unions in the US tried to slow automation and it just moved to China as a result (among other reasons). There is a reason Yudkowsky always implies measures like GPU bans.

It just seems like they hit a nerve, since apparently a lot of doomerism is fueled by insecurity about job replacement.

Jan_Kulveit's Shortform
O O · 9d

“In practice, this likely boils down to a race. On one side are people trying to empower humanity by building coordination technology and human-empowering AI. On the other side are those working to create human-disempowering technology and render human labor worthless as fast as possible.”

I mean, if we’re being completely candid here, there is almost no chance the first group wins this race, right?

Wei Dai's Shortform
O O · 14d

“the world had more centralization, such that the Industrial Revolution never started in an uncontrolled way”

What motive does a centralized dominant power have to allow any progress? The entire world would likely look more like North Korea. 

This is a review of the reviews
O O · 24d

War is not the only potential response. I don't know why this is being framed as normal when a normal treaty would have something like sanctions as a response. 

This is a review of the reviews
O O · 24d

Keep in mind that propagandizing it is also an easy way to create political polarization.

The Problem with Defining an "AGI Ban" by Outcome (a lawyer's take).
O O · 25d

How has nuclear non-proliferation been a success?

Short of something that would stop us from even pondering this, we’ve gotten dangerously close to nuclear exchanges multiple times, and several rogue states have nukes, or use how close they are to one as a bargaining tool.
Posts
5 · If the DoJ goes through with the Google breakup, where does Deepmind end up? [Q] · 1y · 1
26 · Thoughts on Francois Chollet's belief that LLMs are far away from AGI? [Q] · 1y · 17
5 · What happens to existing life sentences under LEV? [Q] · 1y · 7
14 · Hot take: The AI safety movement is way too sectarian and this is greatly increasing p(doom) · 1y · 15
27 · Supposing the 1bit LLM paper pans out [Q] · 2y · 11
13 · OpenAI wants to raise 5-7 trillion · 2y · 29
1 · O O's Shortform · 2y · 129