> “And Pharaoh’s heart grew hard, and he did not heed them, as the Lord had said.”
>
> — Exodus 7:13, NKJV

Oxycodone

It is easy, in ethical thinking, to ask catechism questions: questions whose answers are already waiting for us. What in individuals leads to addiction? Unprocessed childhood...
(crossposted from AI and Intrinsic Motivation to Learn - by M Flood) Zvi Mowshowitz, in a recent AI roundup (emphasis added):

> In order to use an opportunity to learn, LLM or otherwise, you need to be keeping up with the material so you can follow it, and then choose...
Question to the AI Safety researchers here: if, in the next four years, the US government locks down the domestic AI labs to begin a Manhattan Project style dash for AGI, would you consider (if offered) a position in such a project, understanding that you would need to abide by...
Cross-posted from Substack. Continuing the Stanford CS120 Introduction to AI Safety course readings (Week 2, Lecture 1). This is likely too elementary for those who follow AI Safety research - my writing this is an aid to thinking through these ideas and building up higher-level concepts rather than just...
Cross-posted from Substack Feeling intellectually understimulated, I've begun working my way through Max Lamparth's CS120 - Introduction to AI Safety. I'm going to use this Substack as a kind of open journaling practice to record my observations on the ideas presented, both in the lectures and in the readings. The...
[epistemic status: first Less Wrong post, developing hypothesis, seeking feedback and help fleshing out the hypothesis into something that could be researched and about which a discussion paper can be written. A comment/contribution to Eliezer Yudkowsky's "Cognitive biases potentially affecting judgment of global risks" in Bostrom & Cirkovic's "Global...