reminded me of Uriel explaining Kabbalah:

> “THEY BELIEVE YOU CAN CARVE UP THE DIFFERENT FEATURES OF THE UNIVERSE, ENTIRELY UNLIKE CARVING A FISH,” the angel corrected himself. “BUT IN FACT EVERY PART OF THE BLUEPRINT IS CONTAINED IN EVERY OBJECT AS WELL AS IN THE ENTIRETY OF THE UNIVERSE. THINK OF IT AS A FRACTAL, IN WHICH EVERY PART CONTAINS THE WHOLE. IT MAY BE TRANSFORMED ALMOST BEYOND RECOGNITION. BUT THE WHOLE IS THERE. THUS, STUDYING ANY OBJECT GIVES US CERTAIN DOMAIN-GENERAL KNOWLEDGE WHICH APPLIES TO EVERY OTHER OBJECT. HOWEVER, BECAUSE ADAM KADMON IS ARRANGED IN A WAY DRAMATICALLY DIFFERENTLY FROM HOW OUR OWN MINDS ARRANGE INFORMATION, THIS KNOWLEDGE IS FIENDISHLY DIFFICULT TO DETECT AND APPLY. YOU MUST FIRST CUT THROUGH THE THICK SKIN OF CONTINGENT APPEARANCES BEFORE REACHING THE HEART OF -”
https://arxiv.org/abs/1806.00952 gives a theoretical argument that suggests SGD will converge to a point that is very close in L2 norm to the initialization. Since NNs are often initialized with extremely small weights, this amounts to implicit L2 regularization.
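A quick sketch of the idea (my own illustration, not code from the paper): for overparameterized least squares started from zero initialization, SGD's iterates stay in the row span of the data, so the solution it converges to is the minimum-L2-norm interpolant — i.e., the point closest to the initialization in L2 norm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                      # overparameterized: more weights than samples
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

w = np.zeros(d)                     # "extremely small" initialization (here: zero)
lr = 0.005
for _ in range(20000):
    i = rng.integers(n)             # plain SGD, one sample per step
    w -= lr * (X[i] @ w - y[i]) * X[i]

# The minimum-L2-norm solution to X w = y, via the pseudoinverse.
w_min_norm = np.linalg.pinv(X) @ y

# SGD's solution coincides with the min-norm interpolant: among all
# interpolating solutions, it picked the one closest to the initialization.
print(np.linalg.norm(w - w_min_norm))
```

Starting from a nonzero init `w0`, the same dynamics converge to the interpolant closest to `w0` instead — which is why small initialization behaves like implicit L2 regularization.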
Not OP, but I have a similar hotkey. I use Todoist as my capture system and mapped Alt+Super+o to the following script (is there a way to embed code in comments?):
```shell
wmctrl -x -a "Todoist"
xdotool keyup Alt Super o
xdotool type --clearmodifiers --delay=3 q
```
The script selects the Todoist window, releases the hotkey's modifier keys, waits a tiny amount of time, then presses q (Todoist's hotkey for adding a task).
I've found one of the main benefits of getting a virtual assistant type device (Alexa, Google Home) is allowing me to capture ideas by verbalizing them. This is especially useful if I'm falling asleep and don't want to pull out a notebook/phone.
This looks like me saying things like "Alexa, add 'is it meaningful to say that winning the lottery is difficult' to my todo list".
I went to a CFAR workshop more recently, so there might be some content that is slightly newer. Additionally, my sequence is not yet complete, and I am a worse writer.
The most important thing about reading any such sequence is to actually practice the techniques. I suggest reading the sequence that is most likely to get you to do that. If you think both are equally likely, I would recommend the Hammertime sequence.
Copying my comment from https://www.lesswrong.com/posts/PX7AdEkpuChKqrNoj/what-are-your-greatest-one-shot-life-improvements?commentId=t3HfbDYpr8h2NHqBD
Note that this is in reference to voting on question answers.
> Downvoting in general confuses me, but I think that downvoting to 0 is appropriate if the answer isn't quite answering the question, but downvoting past zero doesn't make sense. Downvoting to 0 feels like saying "this isn't that helpful" whereas downvoting past 0 feels like "this is actively harmful".
My rough take: https://elicit.ought.org/builder/oTN0tXrHQ
3 buckets, similar to Ben Pace's
If I thought about this for 5 additional hours, I could imagine assigning the following ranges to the scenarios:
The Texas confusion might be partially explained by a coding error.
Gwern's essay about how everything is correlated seems related/relevant: https://www.gwern.net/Everything
This is personal to me, but I once took a class at school where all the problems were multiple choice, required a moderate amount of thought, and were relatively easy. I got 1/50 wrong, giving me a 2% base rate for making the class of dumb mistakes like misreading inequalities or circling the wrong answer.
This isn't quite a meta-prior, but it seemed sort of related?