This essay explained an idea which I think was implicit in many parts of the Sequences, but which I hadn't successfully identified or understood before now. It filled a gap that was one of the main reasons I had difficulty understanding and coming to my own conclusions about this worldview. It also provided a philosophical perspective from which I could rethink certain aspects of AI existential risk.
"Inventing Temerature" is excellent. It has helped me better understand the process and problems of attaining knowledge. It was also helpful in pointing to how to recognize gaps in my own theories and accepted paradigms. It would be nice to have a complementary work which translates these ideas into a practical toolbox but even on its own it still is helpful.
The book also showed why the study of philosophy falls short if it isn't complemented by a study of the history of science and of other ideas. (A gap in my own education which I am still trying to remedy.)
The main limitation of this review is noted within it: this is not the kind of area where reading a review gives you most of the benefit of the book itself.
I'm not sure to what degree this post can stand alone apart from the whole sequence, but in general I found the naturalism method a useful tool for understanding the world. (Is there a reason the LessWrong review isn't also done at the sequence level in addition to the post level?)
Within the sequence, this post in particular stood out. Many people describe the importance of sitting through a period of being stuck without abandoning a project, since that is often a stage on the way to clarity. This post pointed to a general, actionable strategy for doing so which I haven't seen elsewhere, and it went beyond that by showing some concrete expressions of that strategy and attitude. For example: (a) reconnecting with the felt sense of the topic and direction; (b) returning to a more fluid approach if the investigation has become distorted by something like looking over one's shoulder at what others might think, or by an over-commitment to the explicit structure of one's method (while checking afterward whether the method needs reformulating, or whether it was intuitively guiding you and the problem was only in trying to meet the standard legibly).
One thing I didn't like about it was the ChatGPT dialogue, and I think that might be worth skipping.
Hi, I am working on Rob Miles' Stampy project (https://aisafety.info/), which is creating a centralized resource for answering questions about AI safety and alignment. Would we be able to incorporate your list of frequently asked questions and answers into our system (perhaps with some modification)? I think they are really nice answers to some of the basic questions, and they would be useful for people curious about the topic to see.
Have you seen the Stampy project (https://aisafety.info/)? It is currently a work in progress, but there are some examples of it here: https://www.lesswrong.com/posts/EELddDmBknLyjwgbu/stampy-s-ai-safety-info-new-distillations-2
@drocta @Cookiecarver We started writing up an answer to this question for Stampy. If you have any suggestions to make it better, I would really appreciate it. Are there important factors we are leaving out? Does something sound off? We would be happy for any feedback, either here or on the document itself: https://docs.google.com/document/d/1tbubYvI0CJ1M8ude-tEouI4mzEI5NOVrGvFlMboRUaw/edit#
But in that kind of situation, wouldn't those people also pick A over B for the same reason?
I really liked this post since it took something I did intuitively and haphazardly and gave it a handle by providing the terms to start practicing it intentionally. This had at least two benefits:
First, it allowed me to use this technique in a much wider set of circumstances and to improve the voices I already have. Identifying the phenomenon let it move from a knack that showed up by luck to a skill.
Second, it allowed me to communicate the experience more easily to others and to open the possibility for them to use it as well. Unlike many LessWrong posts, I found that the technique in this post spoke to a number of people outside the LessWrong community. For example, one friend who liked this idea tried applying it to developing an Elijah the Prophet figure that he could interact with.
This post helped me personally in two ways.
1. Recognizing that even picking four things to focus on is too much, and that focusing on only one or two (at least at any specific time) would be far more effective. In this sense it served as a nice complement to the book "Four Thousand Weeks".
2. Consciously splitting my time between exploring and exploiting allowed the exploration to be freer. I let myself try things I otherwise might not have, because I didn't feel I needed to commit to any particular exploration as the thing most worth doing.
An added bonus is that the essay's prose is a pleasure to read.