> rationality lessons we've accumulated and made part of our to our thinking
Seems like some duplicated words here.
> weird idea like AIs being power and dangerous in the nearish future.
Perhaps: "weird ideas like AIs being powerful and dangerous"
Just as we can reply freely to comments on our own posts, it would be nice if we could reply freely to comments on our own comments.
This seems like a reasonable mechanism, but I thought we already had one: belief-in-belief makes it easier to lie without being caught.
The phrase "the map is not the territory" is not just a possibly conceivable map; it's part of my map.
Thinking in terms of programming, it's vaguely like having a class instance `s` where one of the fields `p` is a pointer to the instance itself. So I can write `*(s.p) == s`. Or go further and write `*(s.p->p) == s`.
As far as I want, with only the tools offered to me by my current map.
My immediate mental response was that I value this post, but it doesn't fit the mood of LessWrong. That's kind of sad, because this seems practical. But my impression is heavily biased by how upvotes are divvied out, since I typically read highly-upvoted posts.
It seems less likely to maximize my happiness or my contribution to society, but that doesn't make me not want it.
I thought this was clear to me, but then I thought some more and I no longer think it's straightforward. It pattern matched against
But deductions from them seem to be of dubious use.
I agree that it's a good idea to give things a try and collect data before making longer-term plans. Since you're explicitly exploring rather than exploiting, I suggest trying low-effort wacky ideas in many different directions (e.g. not on LessWrong).
This reminds me of dual N-back training. Under this frame, dual N-back would improve your ability to track extra things. It's still unclear to me whether training it actually improves mental skills in other domains.
The improvement to my intuitive predictive ability is definitely a factor in why I find it comforting. I don't know what fraction of it is aesthetics; I'd give a poorly calibrated 30%. Maybe it reminds me of games where I could easily calculate the answer, so my brain assumes I'm in that situation as long as I don't test that belief.
I'm definitely only comparing the sizes of changes to the same stat. My intuition also assumes diminishing returns for everything except defense, which has accelerating returns; knowing the size of each step helps inform this.
> Offering the player a choice between +5 armor and +10 accuracy implies that the numbers "5" and "10" are somehow expected to be relevant to the player.
When I imagine a game which offers "+armor" or "+accuracy" vs a game which offers "+5 armor" or "+10 accuracy", the latter feels far more comfortable even if I do not intend to do the maths. I suspect it gives something for my intuition to latch onto, to give me a sense of scale.
This dynamic reminds me of arguments-as-soldiers from The Scout Mindset. If people are used to wielding arguments as soldiers on themselves then it seems relatively easy to extend those patterns to reasoning with others.
Testing this hypothesis seems tricky. One avenue is the prediction that people with more internal conflicts are more predisposed to the soldier mindset. I can see a couple of ways, in-model, for this to be untrue, though.