I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.
I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.
Memory reconsolidation
Also, more generally, no prediction market price lets you immediately conclude the probability of any outcome. For most markets we have only subjective probability (maybe this is always true, but I'm trying to ignore things like fair coin flips that have agreed-upon "objective" probabilities), so there is no fact of the matter about the real probability of something happening, only subjective probability based on the available information.
Instead a prediction market price is simply, in the ideal case, the market-clearing price at which people are willing to take bets on either side of the question at this moment in time. That price is a marginal trading point: participants whose subjective probabilities are higher than the market price will buy, while those whose are lower will sell. This is importantly different from the true probability of an outcome, and it's a general mistake to treat the two as the same.
Then there are other factors, like the interest you mention, but also issues like insufficient volume, large traders intentionally distorting the market, etc., that can make the market-clearing price less useful for inferring what subjective probability an observer should assign to a possible outcome.
Instead, a prediction market provides aggregated information a person can use to make their own assessment of the subjective probability of an outcome, and if their assessment differs from the market's they can place a bet with positive subjective expected value. But in no way is the market price of any prediction market the probability of any outcome.
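To make that concrete, here's a toy simulation (mine, not part of the comments above) of how a clearing price emerges from traders' subjective probabilities; all the names and parameters are illustrative assumptions, and the model is deliberately simplistic.

```python
# Minimal sketch: a clearing price as a marginal trading point, not a probability.
# Every parameter here is an illustrative assumption.
import random

random.seed(0)

# Each trader has their own subjective probability that the outcome is YES.
beliefs = [min(max(random.gauss(0.6, 0.15), 0.01), 0.99) for _ in range(1001)]

def clearing_price(beliefs):
    """Price at which buyers (belief > price) balance sellers (belief < price).
    With unit-sized positions this is simply the median belief."""
    return sorted(beliefs)[len(beliefs) // 2]

price = clearing_price(beliefs)
print(f"market-clearing price: {price:.2f}")

# The price summarizes where the marginal traders sit; it isn't a fact about
# the world. If your own estimate differs, the bet has positive subjective
# expected value to you, which is exactly the point made above.
my_estimate = 0.75
print(f"my subjective edge on YES: {my_estimate - price:+.2f}")
```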
Honestly, this fits my intuition. If I think of all the rationalists I know, they feel like they average around 120 IQ, with what feels like a roughly normal distribution around that, though in reality it's probably not quite normal, with a longer upper tail than lower tail, i.e. fewer 90s than 150s, etc. Claims that the average is much higher than 120 feel off to me, relative to the folks I know and have interacted with in the community (insert joke about how I have "dumb" friends, maybe).
Mine:
The world is perfect, meaning it is exactly as it is and always was going to be. However, the world as we know it is an illusion in that it only exists in our minds. We only know our experience, and all (metaphysical) claims to know reality, no matter how useful and predictive they are, are contingent and not fundamental. But we get confused about this because those beliefs are really useful and really predictive, and we separate ourselves from reality by first thinking the world is real, and then thinking our beliefs are about the world rather than of the world itself.
Thus the first goal of all self-aware beings is to get straight in their mind that everything is an illusion. This changes nothing about daily life because everything adds up to normality, but we are no longer confused. Knowing that all is illusion eliminates our fundamental source of suffering that's created by seeing ourselves as separate from the world, and thus we allow ourselves to return to the original joy of experience.
Having gotten our minds straight, now we can approach the task of shaping the world (which is, again, an illusion we construct in our minds, and is only very probably a projection of some external reality into our minds) to better fit our preferences. We can take our preferences far. They weren't designed to be maximized, but nonetheless we can do better than we do today. We can build machines and social technologies and communities (or at least create the illusion of these things in the very ordinary way we create all our illusions) to make possible the world we more want to live in. And everyone can do this, for they are not separate from us. Their preferences are our own; ours theirs. Together we can create a beautiful illusion free of pain and strife and full of flourishing.
I can't help but wonder if part of the answer is that they seem dangerous and people are selecting out of producing them.
Like, I'm not an expert, but creating AI agents seems extremely fun and appealing, and I'm intentionally not working on them because it seems safer not to build them. (Whether you think my contributions to trying to build them would matter or not is another question.)
Most arguments I see in favor of AGI arriving soon ignore economic constraints. I strongly suspect we can't actually afford to create AGI yet; world GDP isn't high enough. These arguments tend to focus on inside-view reasons why method X will make it happen, which, sure, maybe, but even if we achieve AGI, if we aren't rich enough to run it or use it for anything, it hardly matters.
So the question in my mind is: if you think AGI is coming soon, how do we get the economic growth needed over the next 2-5 years so that we can actually afford to use AGI once it exists?
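As a rough illustration of the affordability worry (my sketch, not part of the original comment), here's a back-of-envelope calculation where every figure except the approximate size of world GDP is a hypothetical placeholder:

```python
# Toy affordability check. Every figure marked HYPOTHETICAL is a placeholder,
# not a claim about real AGI costs.
world_gdp = 100e12            # roughly USD 100 trillion, current world GDP
cost_per_agi_hour = 10.0      # HYPOTHETICAL: compute + energy per AGI-worker-hour
agi_workers = 1e9             # HYPOTHETICAL: deployment at economy-shifting scale
hours_per_year = 2000         # a full-time human work year, for comparison

annual_cost = cost_per_agi_hour * agi_workers * hours_per_year
print(f"annual running cost: ${annual_cost:,.0f}")
print(f"share of world GDP:  {annual_cost / world_gdp:.0%}")
# With these placeholders the bill is ~20% of world GDP, which is the sense
# in which "we can't afford to run it yet" even if we could build it.
```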
Just to verify, you were also eating rice with those lentils? I'd expect a different kind of protein deficiency if you only ate lentils. The right combo is beans and rice (or another grain), since legumes and grains are each low in different essential amino acids and cover for each other.
If someone has gone so far as to buy supplements, they have already done far more to engineer their nutrition than the vegans I've known who struggle with it.
I generally avoid alts for myself, and one of the benefits I see is that I feel the weight of what I'm about to post.
Maybe I would sometimes write funnier, snarkier things on Twitter that would get more likes, but because my name is attached I'm forced to reconsider. Is this actually mean? Do I really believe this? Does this joke go too far?
Strange to say perhaps, but I think not having alts makes me a better person, in the sense of being better at being the type of person I want to be, because I can't hide behind anonymity.
No, I've only tried it with Claude so far. I did think about trying other models to compare, but Claude gave me enough information that trying to do this in chat seems unlikely to be useful. My sense is that teaching LLMs to meditate is probably not a useful thing to do, and that if it is, it needs to happen as part of training.