I call these little '>' referring to other cards 'handles' and I use them all the time to keep cards short.
Do you use any technological method to make it easy to look up these handles? I am a long-time user of Obsidian for note-taking, and there is a great Obsidian_to_Anki plugin which allows creating and managing Anki cards as part of one's Obsidian notes; it also inserts functional links on both desktop and Android.
It might also integrate well with the AutoHotkey script that you use.
I have not kept up my Anki usage after setting it up once and will now try again. I really did find it extremely demotivating to find the "400 cards due" every time I did get myself to open Anki.
Thanks for the great tips!
A reasonable prior does not put zero mass on the hypothesis that the literally infinite characters in our stories are moral patients. A reasonable protocol does not therefore let this hypothesis dominate its decisions regardless of evidence.
I do agree that we need some distinction in our decision-making between uncertain ethical problems where a simple expected-value calculation is the right solution and uncertain ethical problems where the type of uncertainty requires handling it differently.
And I do agree that insect suffering is deep enough in the territory of fundamental uncertainty that this question needs to be asked.
When you use the example of "the hypothesis that the literally infinite characters in our stories are moral patients", I could imagine you having several possible aims:
My understanding is that you mean the first two, but not the third?
Grok 3 told me 9.11 > 9.9. (common with other LLMs too), but again, turning on Thinking solves it.
This is unrelated to Grok 3, but I am not convinced that the above part of Andrej Karpathy's tweet is a "gotcha". Software version numbers use dots with a different meaning than decimal numbers, and under that convention 9.11 > 9.9 would be correct.
I don't think there is a clear correct choice of which of these contexts to assume for an LLM if it only gets these few tokens.
E.g. if I ask Claude, the pure "is 9.11>9.9" question gives me a no, whereas
"I am trying to install a python package. Could you tell me whether `9.11>9.9`?" gives me a yes.
For me, a strong reason why I do not see myself[1] doing deliberate practice as you (very understandably) suggest is that, on some level, the part of my mind which decides on how much motivational oomph and thus effort is put into activities just in fact does not care much about all of these abstract and long-term goals.
Deliberate practice is a lot of hard work, and the part of my mind which makes decisions about such levels of mental effort just does not see the benefits. There is a way in which a system that circumvents this motivational barrier is working against my short-term goals, and it is those short-term goals that significantly control motivation: thus, such a system will "just sort of sputter and fail" in such a way that, consciously, I don't even want to think about what went wrong.
If Feedbackloop Rationality wants to move me to be more rational, it has to work with my current state of irrationality. And this includes my short-sighted motivations.
And I think you do describe a bunch of the correct solutions: building trust between one's short-term motivations and long-term goals; starting with lower-effort, small-scale goals where both perspectives can get a feel for what cooperation actually looks like and can learn that it can be worth the compromises. In some sense, it seems to me that once one is capable of the kind of deliberate practice you suggest, much of this bootstrapping of agentic consistency between short-term motivation and deliberate goals has already happened.
On the other hand, it might be perfectly fine if Feedbackloop Rationality requires some not-yet-teachable minimal proficiency at this which only a fraction of people already have. If Feedbackloop Rationality allows these people to improve their thinking and contribute to hard x-risk problems, that is great by itself.
To some degree, I am describing an imaginary person here. But the pattern I describe definitely exists in my thinking even if less clearly than I put it above.
Thank you for another beautiful essay on real thinking! This time about the mental stance itself.
But I’ll describe a few tags I’m currently using, when I remind myself to “really think.” Suggestions/tips from readers would also be welcome.
I think there is a strong conceptual overlap with what John Vervaeke describes as Relevance Realisation and wisdom.
I'll attempt a summary of my understanding of John Vervaeke's Relevance Realisation.
A key capability of any agentic/living being is to prune the exponentially exploding space of possibilities in any decision or thought. We are computationally bounded, and how we deal with that is crucial. Vervaeke terms the process that does this Relevance Realisation.
There is a lot of detail to his model, but let's jump to how some of this plays out in human thinking: a core aspect of how we are agentic is our use of memeplexes that form an "agent-arena-relationship" - we combine a worldview with an agent that is suited to that world and then tie our identity to that agent. We build toy versions all the time (many games are like this), but -- according to Vervaeke's theses in the "Awakening from the Meaning Crisis" lectures -- modern western culture has somewhat lost track of the cultural structures which allow individuals to coordinate on and (healthily) grow their mind's agent-arena-relationship. We have institutions of truth (how to reach good factual statements rather than falsehoods), but not of wisdom (how to live with a healthy stance towards reality rather than bullshit; historically, religious institutions played that role, but they now successfully do it for fewer and fewer people).
Inhabiting such a functional relationship feels alive and vibrant ("inhabit" = tying one's identity into these memes), whereas the lack of a functional agent-arena-relationship feels dream-like (zombies are a good representation; maybe "NPC" is a more recent meme that points at this).
A related thing is people having a spiritual experience: this involves glimpsing a new agent-arena-relationship which then sometimes gets nourished into a re-structuring of one's self-concept and priorities.
Tying this back to "real thinking":
Although the process I described is not the same thing as real thinking, I do think that there are important similarities.
Regarding how to do this well, one important point of Vervaeke's is that humans necessarily enter the world with very limited concepts of "self", "agent" or "arena". This perspective makes it clear that a core part of what we do while growing up is refining these concepts. A whole lot of our nature is about this process of transcending our previous self-concept. Vervaeke loves the quote "the sage is to the adult as the adult is to the child" to point at what the word wisdom means.
The process, according to his recommendations, involves
and, from my impression, a lot of this is practicing the ability to inhabit/change the currently active agent-arena-perspective, to explore its boundaries, not to flinch away from noticing its limitations, to engage with the perspectives which others have built, and so on. Generally, a kind of fluidity in the mental motions which are involved in this.
I hope my descriptions are sufficiently accurate to give an impression of his perspective, and to let you see whether some of these ideas are valuable to you :)
It happens to be the case that 1 kWh = 3,600,000 kg/s². You could substitute this and cancel units to get the same answer.
This should be kg m²/s².[1] On the other hand, this is a nice demonstration of how useful computers are: it really is too easy to accidentally drop a term when converting units.
I double-checked.
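As a small aside on letting the computer track the units, here is a minimal sketch (assuming the third-party `pint` library) of the same conversion with the m² kept in place:

```python
import pint  # third-party unit-handling library

ureg = pint.UnitRegistry()
energy = (1 * ureg.kilowatt_hour).to(ureg.kilogram * ureg.meter**2 / ureg.second**2)
print(energy)  # ~3600000.0 kilogram * meter ** 2 / second ** 2
```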
I read
to have a dude clearly describe a phenomenon he is clearly experiencing as “mostly female”
as "he makes a claim about a thought pattern he considers mostly female", not as "he himself is described by the pattern" (QC does demonstrate high epistemic confidence in that post). Thus, I don't think that Elizabeth would disagree with you.
Thanks for the guidance! Together with Gwern's reply my understanding now is that caching can indeed be very fluidly integrated into the architecture (and that there is a whole fascinating field that I could try to learn about).
After letting the ideas settle for a bit, I think that one aspect that might have led me to think
In my mind, there is an amount of internal confusion which feels much stronger than what I would expect for an agent as in the OP
is that a Bayesian agent as described still is (or at least could be) very "monolithic" in its world model. I struggle with putting this into words, but my thinking feels a lot more disjointed/local/modular. It would make sense if there is a spectrum from "basically global/serial computation" to "fully distributed/parallel computation", where moving further towards the latter adds sources of internal confusion.
What a reply, thank you!
I was also confused by this, and think that it does work out with the usual 'given that' (I'll write P(A) instead of A as I get confused with the other notation):
The statement becomes
where I would have intuitively phrased this as B being evidence of A. But this turns out to be the same thing: If knowing A makes B more likely, finding out that B is true also makes A more likely.
If we already know Bayes' theorem, this becomes clear:
$$P(A|B)=\frac{P(B|A)}{P(B)}P(A)>P(A)\;\Leftrightarrow\;\frac{P(A|B)}{P(A)}P(B)=P(B|A)>P(B)$$
where the fractions being >1 is equivalent to the two things being evidence for each other.
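A tiny numerical sketch of that symmetry (with made-up probabilities), in case it helps:

```python
# Toy joint distribution over A and B, numbers made up for illustration.
p_joint = {
    (True, True): 0.30,   # A and B
    (True, False): 0.20,  # A and not B
    (False, True): 0.10,  # not A and B
    (False, False): 0.40, # neither
}

p_a = sum(p for (a, _), p in p_joint.items() if a)  # 0.5
p_b = sum(p for (_, b), p in p_joint.items() if b)  # 0.4
p_a_and_b = p_joint[(True, True)]                   # 0.3

p_a_given_b = p_a_and_b / p_b  # 0.75 > P(A) = 0.5
p_b_given_a = p_a_and_b / p_a  # 0.6  > P(B) = 0.4

# The two fractions are the same number, so they are > 1 together:
# A and B are evidence for each other.
print(p_a_given_b / p_a, p_b_given_a / p_b)  # both ≈ 1.5
```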