I love the mood that you are describing. I do think that it would still be valuable to have some amount of pressure towards shared concepts and symbols, but it really could be a lot less.
One common example: in recent decades, many small languages and cultures have declined because shared concepts are simply too useful, to the point that most young people don't learn the local language in the first place. I am also reminded of the quote (not sure of the origin), "nowadays, walking around in any big city across the world kind of feels the same", and I start to imagine how much more vibrant culture could be and feel if interfaces were fluid.
I could also get used to saying "cGive", pronounced like "sea give", which has nice connotations spoken aloud and keeps the c-as-coefficient right there in the written version. But I agree that "Coefficient" has a good sound, compared to which "cGive" seems more generic.
Maybe the secretions on the contaminated coffee mug handles and plastic tiles took longer to dry than those on the playing cards & poker chips.
Wild guess: air temperature and humidity should make a big difference in how quickly things dry. As hands are constantly sweating, I could imagine that there is a kind of threshold humidity/temperature beyond which hands have enough moisture to sustain viruses?
Deeply relatable, I can't wait to be genemodded to be able to photosynthesize.
I sometimes wonder whether I should want to be a digital mind at some point as this would definitely simplify the problem of suffering created by how I eat. I prefer your solution :)
I was also confused by this, and think that it does work out with the usual 'given that' (I'll write $P(B \mid A)$ for conditional probability, as I get confused with the other notation):

The statement becomes

If $A$ is evidence of $B$, i.e. $P(B \mid A) > P(B)$, then $P(A \mid B) > P(A)$,

where I would have intuitively phrased this as $B$ being evidence of $A$. But this turns out to be the same thing: If knowing $A$ makes $B$ more likely, finding out that $B$ is true also makes $A$ more likely.

If we already know Bayes' theorem, this becomes clear:

$$\frac{P(B \mid A)}{P(B)} = \frac{P(A \mid B)}{P(A)},$$

where the fractions being $> 1$ is equivalent to the two things being evidence for each other.
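As a quick sanity check of this symmetry of evidence (my own illustration with made-up numbers, not from the original discussion), one can compute both likelihood ratios from an arbitrary joint distribution over two binary variables:

```python
# Joint distribution P(A, B) over two binary variables (numbers are made up).
p_ab = {
    (True, True): 0.30,
    (True, False): 0.10,
    (False, True): 0.20,
    (False, False): 0.40,
}

p_a = p_ab[(True, True)] + p_ab[(True, False)]   # P(A) = 0.4
p_b = p_ab[(True, True)] + p_ab[(False, True)]   # P(B) = 0.5
p_b_given_a = p_ab[(True, True)] / p_a           # P(B|A) = 0.75
p_a_given_b = p_ab[(True, True)] / p_b           # P(A|B) = 0.6

# A is evidence of B exactly when B is evidence of A:
assert (p_b_given_a > p_b) == (p_a_given_b > p_a)

# Both likelihood ratios are equal, which is just Bayes' theorem rearranged:
assert abs(p_b_given_a / p_b - p_a_given_b / p_a) < 1e-12
```

Whatever joint distribution you plug in, the two ratios come out identical, so neither direction of "is evidence for" can hold without the other.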
I call these little '>' referring to other cards 'handles' and I use them all the time to keep cards short.
Do you use any technological method for making it easy to look up these handles? I am a long-time user of Obsidian for note-taking, and there is a great Obsidian_to_Anki plugin which allows for creating and managing Anki cards as part of one's Obsidian notes and it inserts functional links on both Desktop and Android.
It might also integrate well with the AutoHotkey script that you use.
I have not kept up my Anki usage after setting it up once, and will now try again. I really did find it extremely demotivating to see "400 cards due" every time I did get myself to open Anki.
Thanks for the great tips!
A reasonable prior does not put zero mass on the hypothesis that the literally infinite characters in our stories are moral patients. A reasonable protocol does not therefore let this hypothesis dominate its decisions regardless of evidence.
I do agree that we need some distinction in our decision-making between uncertain ethical problems where a simple expected value is the right solution and uncertain ethical problems where the type of the uncertainty requires handling it differently.
And I do agree that insect suffering is deep enough in the territory of fundamental uncertainty that this question needs to be asked.
When you use the example of "the hypothesis that the literally infinite characters in our stories are moral patients", I could imagine you having several possible aims:
My understanding is that you mean the first two, but not the third?
Grok 3 told me 9.11 > 9.9 (common with other LLMs too), but again, turning on Thinking solves it.
This is unrelated to Grok 3, but I am not convinced that the above part of Andrej Karpathy's tweet is a "gotcha". Software version numbers use dots with a different meaning than decimal numbers, and in that context 9.11 > 9.9 would be correct.
I don't think there is a clear correct choice of which of these contexts to assume for an LLM if it only gets these few tokens.
E.g. if I ask Claude, the pure "is 9.11>9.9" question gives me a no, whereas
"I am trying to install a python package. Could you tell me whether `9.11>9.9`?" gives me a yes.
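The two readings can be made concrete in a few lines of Python (my own sketch; the `version_tuple` helper is a hypothetical minimal parser, not a real library function):

```python
def version_tuple(v: str) -> tuple[int, ...]:
    """Parse a dotted version string into a tuple of integers,
    so comparison is component-wise rather than decimal."""
    return tuple(int(part) for part in v.split("."))

# Decimal reading: 9.11 is less than 9.9.
assert not (9.11 > 9.9)

# Version reading: compare (9, 11) against (9, 9), so 9.11 > 9.9.
assert version_tuple("9.11") > version_tuple("9.9")
```

Both assertions pass, which is exactly the ambiguity: the same surface string is ordered differently depending on whether the dot separates decimal digits or version components.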
For me, a strong reason why I do not see myself[1] doing deliberate practice as you (very understandably) suggest is that, on some level, the part of my mind which decides on how much motivational oomph and thus effort is put into activities just in fact does not care much about all of these abstract and long-term goals.
Deliberate practice is a lot of hard work, and the part of my mind which makes decisions about such levels of mental effort just does not see the benefits. There is a way in which a system that circumvents this motivational barrier is working against my short-term goals, and it is these short-term goals that significantly control motivation: Thus, such a system will "just sort of sputter and fail" in such a way that, consciously, I don't even want to think about what went wrong.
If Feedbackloop Rationality wants to move me to be more rational, it has to work with my current state of irrationality. And this includes my short-sighted motivations.
And I think you do describe a bunch of the correct solutions: Building trust between one's short-term motivations and long-term goals. Starting with lower-effort, small-scale goals where both perspectives can get a feel for what cooperation actually looks like and can learn that it can be worth the compromises. In some sense, it seems to me that once one is capable of the kind of deliberate practice that you suggest, much of this bootstrapping of agentic consistency between short-term motivation and deliberate goals has already happened.
On the other hand, it might be perfectly fine if Feedbackloop Rationality requires some not-yet-teachable minimal proficiency at this which only a fraction of people already have. If Feedbackloop Rationality allows these people to improve their thinking and contribute to hard x-risk problems, that is great by itself.
To some degree, I am describing an imaginary person here. But the pattern I describe definitely exists in my thinking even if less clearly than I put it above.
I think this is a little tricky, as the theory uses the actual continuum for wave functions, so the amount of information is and remains infinite whether we remove far-away branches or not. On the other hand, we can only ever calculate finite approximations of the true thing, and here we definitely can be more or less efficient.
When we "collapse the wave function" during a measurement process, we decide to remove the other branches from our description, so in some sense a pragmatic version of hardly computing any of the pilot wave is already the standard approach. The theory of Open Quantum Systems has really helped formalize this perspective, and one finds that even just cosmic radiation or gravitational interaction is strong enough to create quick branching of the wave function for anything macroscopic.

Unfortunately, a mathematically equivalent simplification is not feasible, as technically the distant contributions are not ruled out from becoming important at some point in the future (e.g. the wave function tends to spread out in configuration space, such that in a non-expanding universe we would expect branches to touch at some point). But at least for many situations one can get good effective descriptions which put us closer to a quantum-native account of how classical behaviour comes about. From that perspective, the probabilistic predictions tell us which part of a simplified pilot wave (/many-worlds branch) we expect to update towards.