I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.
I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.
A couple notes:
No, I've only tried it with Claude so far. I thought about trying other models to see how they compare, but Claude gave me enough information to conclude that trying more of this in chat is unlikely to be useful. My sense is that teaching LLMs to meditate is probably not, in itself, a useful thing to do, and that if it is, it needs to happen as part of training.
Memory reconsolidation
Also, more generally, no prediction market price lets you immediately conclude what the probability of any outcome is, because for most markets we only have subjective probabilities (maybe this is always true, but I'm setting aside things like fair coin flips that have agreed-upon "objective" probabilities). There is no fact of the matter about the real probability of something happening, only a subjective probability based on the available information.
Instead, a prediction market price is simply, in the ideal case, the market-clearing price at which people are willing to take bets on either side of the question at this moment in time. This price represents a marginal trading point: participants with higher subjective probabilities than the market price will buy, while those with lower will sell. That is importantly different from the true probability of an outcome, and it's a common mistake to treat the two as the same.
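To make the marginal-trading-point idea concrete, here's a minimal sketch (my own illustration, with made-up numbers): each trader holds a subjective probability, buys a YES share when their probability is above the posted price and sells when it's below, and the clearing price is wherever the two sides balance, which tracks the median belief rather than any "true" probability.

```python
import numpy as np

# Minimal sketch (hypothetical numbers): how a clearing price emerges as a
# marginal trading point among traders with different subjective probabilities.
rng = np.random.default_rng(0)
beliefs = rng.beta(4, 6, size=1_000)  # hypothetical subjective probabilities, mean ~0.4

def excess_demand(price: float, beliefs: np.ndarray) -> int:
    """Buyers minus sellers at a given price, one unit per trader."""
    buyers = int(np.sum(beliefs > price))   # would pay `price` for a YES share
    sellers = int(np.sum(beliefs < price))  # would sell (i.e. buy NO) at that price
    return buyers - sellers

# Scan for the price where buy and sell interest roughly balance.
prices = np.linspace(0.01, 0.99, 99)
clearing = min(prices, key=lambda p: abs(excess_demand(p, beliefs)))

print(f"clearing price ~ {clearing:.2f}")            # tracks the median belief...
print(f"median belief  ~ {np.median(beliefs):.2f}")  # ...not anyone's "true" probability
```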
Then there are other factors, like the interest issue you mention, but also insufficient volume, large traders intentionally distorting the market, etc., that can make the clearing price less useful for inferring what subjective probability an observer should assign to a possible outcome.
Instead, a prediction market provides aggregate information that a person can use to make their own assessment of the subjective probability of an outcome. If their assessment differs from the market's, they can make a bet that is positive in subjective expected value, but the market price of a prediction market is still in no way the probability of any outcome.
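To illustrate that last point, a small sketch (again my own, with hypothetical numbers): on a binary contract that pays $1 on YES, buying YES at market price p with subjective probability q has expected value q - p, and buying NO has expected value p - q, so a bet only looks positive in expectation when your own assessment diverges from the price.

```python
# Sketch (my own illustration): expected value of betting against the market
# price on a binary contract that pays $1 if the outcome happens.
# p = market price of a YES share, q = your subjective probability.

def ev_yes(p: float, q: float) -> float:
    """EV of buying one YES share at price p: win (1 - p) with prob q, lose p otherwise."""
    return q * (1 - p) - (1 - q) * p  # simplifies to q - p

def ev_no(p: float, q: float) -> float:
    """EV of buying one NO share at price (1 - p): win p with prob (1 - q), lose (1 - p) otherwise."""
    return (1 - q) * p - q * (1 - p)  # simplifies to p - q

p, q = 0.60, 0.45  # hypothetical: market says 60%, you think 45%
print(f"EV of YES: {ev_yes(p, q):+.2f} per share")  # -0.15, so don't buy YES
print(f"EV of NO:  {ev_no(p, q):+.2f} per share")   # +0.15, the bet worth taking
```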
Honestly, this fits my intuition. If I think of all the rationalists I know, they feel like they average around 120 IQ, with what feels like a roughly normal distribution around that, though in reality it's probably not quite normal, with a longer upper tail than lower tail, i.e. fewer 90s than 150s, etc. Claims that the average is much higher than 120 feel off to me relative to the folks I know and have interacted with in the community (insert joke about how I have "dumb" friends, maybe).
I can't help but wonder if part of the answer is that they seem dangerous and people are selecting out of producing them.
Like, I'm not an expert, but creating AI agents seems extremely fun and appealing, and I'm intentionally not working on them because it seems safer not to build them. (Whether you think my contributions to building them would matter is another question.)
Most arguments I see in favor of AGI ignore economic constraints. I strongly suspect we can't actually afford to create AGI yet; world GDP isn't high enough. These arguments focus on inside-view reasons why method X will make it happen, which, sure, maybe, but even if we achieve AGI, it hardly matters if we aren't rich enough to run it or use it for anything.
So the question in my mind is, if you think AGI is soon, how are we getting the level of economic growth needed in the next 2-5 years to afford to use AGI at all before AGI is created?
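For a sense of the scale involved, here's a back-of-envelope sketch where every number is a hypothetical I'm supplying, not part of the original argument: pick an assumed annual cost to run one AGI instance and an assumed number of instances needed to matter economically, then compare the total against world GDP.

```python
# Back-of-envelope sketch: every number here is a hypothetical I'm making up
# to show the shape of the affordability question, not a real estimate.

WORLD_GDP_USD = 1.1e14            # roughly $110 trillion
COST_PER_AGI_YEAR_USD = 1.0e7     # hypothetical: $10M/year in compute per instance
INSTANCES_FOR_IMPACT = 1.0e6      # hypothetical: a million instances to move the economy

total_cost = COST_PER_AGI_YEAR_USD * INSTANCES_FOR_IMPACT
share_of_gdp = total_cost / WORLD_GDP_USD

print(f"annual running cost: ${total_cost:,.0f}")
print(f"share of world GDP:  {share_of_gdp:.1%}")  # ~9% with these made-up inputs
```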
Just to verify, you were also eating rice with those lentils? I'd expect you to be protein deficient in a different way if you only ate lentils. The right combo is beans and rice (or another grain).
This is gonna sound mean, but the quality of EA-oriented online spaces has really gone downhill in the last 5 years. I barely even noticed Kat Woods' behavior, because her posts are just one more contribution to the sea of high-volume, low-quality content being posted in EA spaces.
That's why I've mostly given up on EA sites and events, other than attending EA Global (can't break my streak), and just hang out here on Less Wrong, where the vibes are still good and the quality bar is higher.