LESSWRONG

Beckeck

Posts

Comments
Beckeck's Shortform · 3y
LessWrong Feed [new, now in beta]
Beckeck · 4mo

Also have this issue on a Galaxy S24 (and not on other parts of the website).

Reply
LessWrong Feed [new, now in beta]
Beckeck · 4mo

Hope this goes well!

Random pitch, but maybe add Anki integration for extra-nutritious content?

Reply
Can you donate to AI advocacy?
Answer by Beckeck · May 27, 2025

Yes: see PauseAI. Even if I disagree with some of their positions, I'm glad they exist, and I hope there soon exist multiple such orgs. (But don't donate to StopAI; they don't appear serious, in my opinion.)

Reply
Why does LW not put much more focus on AI governance and outreach?
Beckeck · 5mo

Upvoted for topic importance.

Reply
Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?
Beckeck · 7mo

Thanks, I appreciate the reply.
It sounds like I have somewhat wider error bars, but I mostly agree on everything except the last sentence, where I think it's plausibly, but not certainly, less worrying.
If you have crisp reasons why you're less worried, I'd be happy to hear them, but only if producing them feels positive for you.

Reply
Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?
Beckeck · 7mo

We might disagree somewhat. I think the original comment is pointing at the (reasonable, as far as I can tell) claim that oracular AI can have agent-like qualities if it produces plans that people follow.

Reply
Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?
Beckeck · 7mo

Yeah, if the system is trying to do things, I agree it's (at least a proto-)agent. My point is that creation happens in lots of places with respect to an LLM, and it's not implausible that use steps (hell, even sufficiently advanced prompt engineering) can effect agency in a system, particularly as capabilities continue to advance.

Reply
Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?
Beckeck · 7mo

"Seems mistaken to think that the way you use a model is what determines whether or not it's an agent. It's surely determined by how you train it?"
---> Nah: pre-training, fine-tuning, scaffolding, and especially RL all seem to affect it. Currently scaffolding only gets you shitty agents, but it at least sorta works.

Reply
Principles for the AGI Race
Beckeck · 1y

The top post claims that while principle one (seek broad accountability) might be useful in a more perfect world, here in reality it doesn't work great.

Reasons include that the pressure to be held to high standards by the public tends to cause orgs to do PR rather than speak truth.

Reply
CFAR Takeaways: Andrew Critch
Beckeck · 2y

The sentence ending at "know" needs an ending.

Reply