konstantin

Comments, sorted by newest
FLI open letter: Pause giant AI experiments
konstantin · 2y · 63

Update: I don't think it makes much sense to interpret the letter literally. Instead, it can be seen as an attempt to show that a range of people think slowing down progress would be good, and I think it does an okay job at that (though I still think the wording could be much better, and it should present arguments for why we should decelerate).

FLI open letter: Pause giant AI experiments
konstantin · 2y · 30

Thanks! I haven't found good commentary on that paper (and lack the technical insight to evaluate it myself).

Are you implying that China has access to the compute required for (a) GPT-4-type models or (b) AGI?

FLI open letter: Pause giant AI experiments
konstantin · 2y · 144

The letter feels rushed and leaves me with a bunch of questions.

1. "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control." 

Where is the evidence for this "out-of-control race"? And where is the argument that future systems could be dangerous?


2. "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders." 

These are very different concerns, and mixing them waters down the problem the letter is trying to address. Most of them are deployment questions more than development questions.

3. I like the idea of a six-month collaboration between actors, and I also like the policy asks the letter includes.

4. The main impact of this letter would obviously come from getting the main actors (OpenAI, Anthropic, DeepMind, Meta AI, Google) to halt development. Yet those actors seem not to have been involved in the letter and, as far as I know, haven't publicly commented on it. This seems like a failure.

5. Not making it possible to verify the names is a pretty big mistake.

6. In my perception, the letter mostly comes across as alarmist at the current time, especially since it doesn't include an argument for why future systems could be dangerous. It might just end up burning political capital.

FLI open letter: Pause giant AI experiments
konstantin · 2y · 54

1. I haven't seen an impressive AI product come out of China (please point me to some if you disagree).

2. They can't import A100/H100 GPUs anymore after the US chip export restrictions.

FLI open letter: Pause giant AI experiments
konstantin · 2y · 20

Because if we do it now and nothing happens for five years, people will call it hysteria, and we won't be able to do this again once we are close to x-risky systems.

FLI open letter: Pause giant AI experiments
konstantin · 2y · 54

Russia is not at all an AI superpower. China also seems to be quite far behind the West in terms of LLMs, so overall, six months would very likely not let either of them catch up.

FLI open letter: Pause giant AI experiments
konstantin · 2y · 30

Edit: I need to understand more context before expressing my opinion.

My Objections to "We’re All Gonna Die with Eliezer Yudkowsky"
konstantin · 2y · 30

"Relatedly, humans are very extensively optimized to predictively model their visual environment. But have you ever, even once in your life, thought anything remotely like 'I really like being able to predict the near-future content of my visual field. I should just sit in a dark room to maximize my visual cortex's predictive accuracy.'?"

Nitpick: that doesn't seem like what you would expect anyway. Arguably, I have very little conscious access to the part of my brain that predicts what I will see next, and the optimization of that part is probably independent of the optimization happening in the more consciously accessible parts of my brain.

Why do we assume there is a "real" shoggoth behind the LLM? Why not masks all the way down?
konstantin · 2y · 20

This post resonates a lot with me, and I felt it was missing from the recent discussions. Thanks!

Compendium of problems with RLHF
konstantin · 2y · 10

I found this quite helpful, even if some points could use a more thorough explanation.
