Or Communityware (as a word, it nudges me towards smallness and togetherness)
I'm surprised fatebook.io isn't an answer here. In the past I tried a bunch of personal prediction tools and felt dissatisfied, either because of the complexity, the UI, or something else. Anyway, I've been using Fatebook for a couple of weeks now and loving it.
Good luck! :)
The way you use "intelligence" is different from what many people here mean by that word.
Check this out (for a partial understanding of what they mean): https://www.lesswrong.com/posts/aiQabnugDhcrFtr9n/the-power-of-intelligence
Interesting! I've recently been thinking a lot about "narratives" (frames) and how strongly they shape how and what we think. That makes it much harder to see "the" truth, since changing the narrative changes things quite a bit.
I'm curious if anyone has an example of how they would go about applying frame-invariance to rationality.
These kinds of explorations (unusual and truth-seeking) are why I love LessWrong :)
I found the post "Reward is not the optimization target" quite confusing. This post cleared the concept up for me, especially the selection framing and example. Thank you!
Finland too (and I expect quite a few other EU countries to do so as well)
https://mobile.twitter.com/i/lists/1185207859728076800 — AGI Safety Core, a Twitter list by JJ (from AI Safety Support)
This is well-documented, and I'm happy the program turned out well!