Hope this goes well!
Random pitch, but maybe add Anki integrations for extra nutritious content?
Yes, see PauseAI. Even if I disagree with some of their positions, I'm glad they exist and hope that soon there exist multiple such orgs (but don't donate to StopAI, they don't appear serious imo).
Upvoted for topic importance.
Thanks, I appreciate the reply.
It sounds like I have somewhat wider error bars but mostly agree on everything but the last sentence, where I think it's plausibly but not certainly less worrying.
If you felt like you had crisp reasons why you're less worried, I'd be happy to hear them, but only if it feels positive for you to produce such a thing.
We might disagree some. I think the original comment is pointing at the (reasonable, as far as I can tell) claim that oracular AI can have agent-like qualities if it produces plans that people follow.
Yeah, if the system is trying to do things, I agree it's (at least a proto-)agent. My point is that creation happens in lots of places with respect to an LLM, and it's not implausible that use steps (hell, even sufficiently advanced prompt engineering) can effect agency in a system, particularly as capabilities continue to advance.
"Seems mistaken to think that the way you use a model is what determines whether or not it’s an agent. It’s surely determined by how you train it?"
---> Nah, pre-training, fine-tuning, scaffolding, and especially RL seem like they all affect it. Currently scaffolding only gets you shitty agents, but it at least sorta works.
Top post claims that while principle one (seek broad accountability) might be useful in a more perfect world, here in reality it doesn't work great.
Reasons include that the pressure to be held to high standards by the public tends to cause orgs to do PR rather than speak truth.
know " sentence needs an ending
Also have this issue on Galaxy S24. (And not on other parts of the website.)