codingquark
codingquark has not written any posts yet.

I have never before tried explicitly writing rational ideas. So I tried: https://codingquark.com/2023/08/25/rational-laptop-insurance-calculation.html
What did I do wrong? There are two obvious attack surfaces:
I will appreciate feedback; it will help me execute the Mission of Tsuyoku Naritai ;)
I was reading Obvious advice and noticed that when I'm overrun by emotions, in a hurry to make a decision, or for some other reason unable to articulate things verbally, I fail to see the obvious. During such times, I might even worry that whatever I'm seeing is not the obvious thing — that I'm missing something so obvious that the whole situation would have worked out differently had I thought of that one simple thing.
Introspecting, I feel that perhaps I am not exactly sure what "obvious" even means. I am able to say "that's obvious" — sometimes on the spot, sometimes in hindsight. But when I sit down and think about it, I come up with things like "what's obvious is what feels obvious!" and I am not really satisfied.
Can someone link me to resources to explore this topic further? A discussion here is appreciated as well.
Thank you!
Since I've been reading so much about guilt, I have been thinking about how many emotions I feel at once when something undesirable happens. It is no simple task for a human to handle such a huge set of variables. And yet somehow, these sequences are helpful.
Hey! Reading Lawful Uncertainty and Replacing Guilt, and once again listening to HPMOR. I started out reading Meditations on Moloch this weekend but got steered to Replacing Guilt. Guilt is something I have not been able to help others with. So far, the tools suggested fit quite well with what I had figured out on my own, but I had never been so clear as to be able to say "refinement, internalisation, realism". Given Nate's clarity, there are many things I had not thought about before. I am having fun thinking about guilt with this much concreteness :D
What about you?
I stumbled upon LessWrong via AI & SlateStarCodex, but quickly noticed the rationality theme. My impression of rationality was Sheldon Cooper-esque (The Big Bang Theory), and I had put it aside as entertainment. I had read some of Eliezer's earlier work, such as Staring into the Singularity, and saw these things called "sequences" under the library section. The first few posts I read made me think "Oh! Here are my people! I get to improve!"
But of course the library made it easy to notice HPMOR, and that's where I actually "began". I've listened to it twice so far. I have begun suggesting that friends give it to their kids in...
Given this, the Rationalization section of Chapter 21, where Harry and Draco meet, is HILARIOUS!