Meetups:
* Meetup #44 - Murphyjitsu
* Anki (Memorization Software) for Beginners
* Triangle SSC Meetup - January 30th
Monday, January 13th 2020
* Why a New Rationalization Sequence?
* Moral uncertainty: What kind of 'should' is involved?
* Drawing Toward Power
* How do you do hyperparameter searches in ML?
* What is the relationship between Preference Learning and Value Learning?
* Milk as a Metaphor for Existential Risk
* Rational Ottawa - Group Rationality I
While reading Focusing today, I thought about the book and wondered how many exercises it would have. I felt a twinge of aversion. In keeping with my goal of increasing internal transparency, I said to myself: "I explicitly and consciously notice that I felt averse to some aspect of this book". I then Focused on the aversion. Turns out, I felt a little bit disgusted, because a part of me reasoned thusly:

(Transcription of a deeper Focusing on this reasoning)

I'm afraid of being slow. Part of it is surely the psychological remnants of the RSI I developed in the summer of 2018. That is, slowing down is now emotionally associated with disability and frustration. There was a period of meteoric progress as I started reading textbooks and doing great research, and then there was pain. That pain struck even when I was just trying to take care of myself, sleep, open doors. That pain then left me on the floor of my apartment, staring at the ceiling, desperately willing my hands to just get better. They didn't (for a long while), so I just lay there and cried. That was slow, and it hurt. No reviews, no posts, no typing, no coding. No writing, slow reading. That was slow, and it hurt.

Part of it used to be a sense of "I need to catch up and learn these other subjects which [Eliezer / Paul / Luke / Nate] already know". Through internal double crux, I've nearly eradicated this line of thinking, which is neither helpful nor relevant nor conducive to excitedly learning the beautiful settled science of humanity.

Although my most recent post [https://www.lesswrong.com/posts/eX2aobNp5uCdcpsiK/on-being-robust] touched on impostor syndrome, that isn't really a thing for me. I feel reasonably secure in who I am, now (although part of me worries that others wrongly view me as an impostor?). However, I mostly just want to feel fast, efficient, and swift again. I sometimes feel like I'm in a race with Alex2018, and I feel like I'm losing.
So here are two extremes. One is that human beings are a complete lookup table; the other is that human beings are perfect agents with just one goal. Most likely both are somewhat true: we have subagents that are more like the latter, and subsystems more like the former. But the emphasis on "we're just a bunch of hardcoded heuristics" is making us stop looking for agency where there is in fact agency.

Take romantic feelings, for example. People tend to regard them as completely unpredictable, but it is actually possible to predict to some extent whether you'll fall in and out of love with someone, based on criteria like whether they're compatible with your self-narrative and whether their opinions and interests align with yours. The same is true for many intuitions that we tend to dismiss as just "my brain" or "neurotransmitter xyz" or "some knee-jerk reaction". There tends to be a layer of agency in these things: a set of conditions that makes them fire off, or not fire off. If we want to influence them, we should be looking for the levers instead of just accepting these things as a given.

So sure, we're godshatter, but the shards are larger than we give them credit for.
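To make the dichotomy concrete, here's a toy sketch (purely illustrative and mine, not from the post; the class names are invented): the first extreme has no levers at all, the second is all levers.

```python
# Illustrative sketch of the two extremes of agency (hypothetical names).

# Extreme 1: a complete lookup table. Behavior is a fixed map from
# situation to response; there is no goal anywhere in the system.
class LookupTableAgent:
    def __init__(self, table):
        self.table = table  # {situation: response}, fully hardcoded

    def act(self, situation):
        return self.table.get(situation, "default reflex")


# Extreme 2: a perfect agent with one goal. Behavior is whatever
# available action scores highest under a single explicit utility.
class GoalDirectedAgent:
    def __init__(self, utility):
        self.utility = utility  # utility(situation, action) -> float

    def act(self, situation, actions):
        return max(actions, key=lambda a: self.utility(situation, a))
```

On this framing, "looking for the levers" means treating a reaction as closer to the second class: asking what conditions it fires under and what it's optimizing for, rather than shrugging at the table.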
So a thing Galois theory does is explain which polynomial equations are solvable by radicals (and why there's no general radical formula once you reach degree five). Which makes me wonder: would there be a formula if you used more machinery than normal stuff and radicals? What does "more than radicals" look like?
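One standard answer for the quintic (my addition; these are textbook facts, not from the original question): "more than radicals" can mean adjoining a single new function, the Bring radical.

```latex
% Abel--Ruffini: the general quintic has no formula in radicals,
% because its Galois group S_5 is not solvable. But Tschirnhaus
% transformations (built only from radicals) reduce any quintic
% to Bring--Jerrard form:
\[
  x^5 + x + a = 0 .
\]
% Adjoin one new operation, the Bring radical $\mathrm{BR}(a)$,
% defined as a root of that equation; then radicals together with
% $\mathrm{BR}$ give a closed-form solution of every quintic.
% Hermite's solution via elliptic modular functions is another
% instance of "more machinery than radicals".
```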
Updated the Predicted AI alignment event/meeting calendar [https://www.lesswrong.com/posts/h8gypTEKcwqGsjjFT/predicted-ai-alignment-event-meeting-calendar].

* Application deadline for the AI Safety Camp Toronto extended. If you've missed it so far, you still have until the 19th to apply.
* Apparently no AI alignment workshop at ICLR, but another somewhat related one.