Hi - a little anonymity intended here so I can feel freer to contribute and engage more closely with this community. I'm trying to learn quickly by experimenting.
Low confidence here, but I've previously used nanotech as an example (rather than a probable outcome) of a class of somewhat-known unknowns - a way to portray possible future risks that we can imagine as possible without fully conceiving them. So while grey goo itself might be unlikely, the precursor to grey goo - a fairly intelligent system trying to harm us - seems like the thing to focus on, and grey goo is just one of its many possibilities that we can even imagine.
I rather liked this post (and I’ll put this comment on both the EAF and LW versions):
https://www.lesswrong.com/posts/PQtEqmyqHWDa2vf5H/a-quick-guide-to-confronting-doom
In particular, the comment by Jakob Kraus reminded me that many people have faced imminent doom (not of the human species, but certainly quite terrible experiences).
Hi, writing this while on the go, but just throwing it out there: this seems to be Sam Altman’s intent with OpenAI in pursuing fast timelines with slow takeoffs.
I was unaware of those decisions at the time. I imagine people are always making decisions under some degree of uncertainty, even if that uncertainty could be resolved by information somewhere out there. Perhaps there’s some optimization to be done over how much time you spend looking into something versus how right you could expect to be?
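To make that tradeoff concrete, here’s a minimal sketch (my own toy model, not anything from the original discussion): assume each extra hour of checking raises your probability of being correct with diminishing returns, and your time has a fixed opportunity cost; then there’s an investigation time that maximizes expected payoff minus cost. The payoff, hourly cost, and accuracy curve below are all made-up illustrative numbers.

```python
# Toy value-of-information model: how long should you investigate a claim?
# All constants here are made-up for illustration.
import math

PAYOFF_IF_RIGHT = 100.0   # value of acting on a correct belief
COST_PER_HOUR = 5.0       # opportunity cost of an hour of checking

def p_correct(hours: float) -> float:
    """Probability of getting it right after `hours` of checking.
    Starts at 0.5 (a coin flip) and approaches 1.0 with diminishing returns."""
    return 1.0 - 0.5 * math.exp(-0.5 * hours)

def net_value(hours: float) -> float:
    """Expected payoff from acting on the belief, minus the cost of time spent."""
    return PAYOFF_IF_RIGHT * p_correct(hours) - COST_PER_HOUR * hours

# Brute-force the best investigation time over a grid of 0 to 20 hours.
best = max((h / 10 for h in range(0, 201)), key=net_value)
print(f"best time: {best:.1f}h, p(correct): {p_correct(best):.2f}, net value: {net_value(best):.1f}")
```

With these numbers the optimum lands around three hours: past that point, each extra hour of checking buys less accuracy than it costs. The qualitative point is that investigating “pretty loosely” can be locally rational when the stakes are small relative to your time.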
An anecdote about me (not super practiced as a rationalist, and at times just dumb): I sometimes later discover that things I briefly took to be true in passing are actually false. It feels like there’s a frontier of truths/falsehoods that we investigate pretty loosely, yet we still assign them a true/false valence, maybe a bit too liberally at times.
LLMs as a new benchmark for human labor: using ChatGPT as a control group against my own efforts, to see whether my work is worth more than the (new) default.
Thanks for writing this - I enjoyed it. I was wondering how best to present this to other people: perhaps with a 5-and-10 style example where you let a participant make the mistake, then question their reasoning, lead them down the path of post-decision rationalization laid out in your post, and finally show them their full thought process afterwards. I could certainly imagine myself making the mistake, and I hope I’d be able to escape my faulty reasoning…
BTW - this video is quite fun. Seems relevant re: Paperclip Maximizer and nanobots.