I like the title of this post! The content of the post isn't bad.
This was supposed to be a grand post explaining that belief. In practice it’s mostly a bunch of pointers to facets of truthseeking and ideas for how to do better.
I want the grand post! (I want clear articulations of the thing I feel is true and important.) Especially after you point out that it might have been one.
The points in the post aren't bad, though it feels like fewer examples in greater depth that I could better memorize would have more value than a lot of short ones. I think the alphabetical fictional names make me bounce off a bit.
My guess is that the content outpaced the writing; mostly a choice on Elizabeth's part, as her strongest writing is very strong. I'd be interested in taking some of what's here and expanding it into a "meditation" that helps some lesson sink in. And of course, the grand post, please!
+1 in the Review
This is a good post. The examples are clear and it deepened my intuition (though I'm judging from the reread; I don't remember the delta from before my first reading). After the second read, I think I might notice more instances of adverse selection in the wild, though I don't think the first read had much impact on me.
The intended subsequent posts look really great, and like they'd have interesting models I don't yet have. I think I had the concept of adverse selection before this, so it wasn't a conceptual breakthrough for me.
Then again, maybe the title should have been "availability is an update against goodness because of adverse selection", which is depressing but perhaps true. I feel like I don't know what to do with that though. I kind of already know the best restaurants are crowded and the most attractive people aren't single? Maybe there's some gain from remembering to make an update on things once they prove available.
It's a good post, but didn't give me obvious large value. So giving it a 1 in the review.
I could imagine giving the sequels more, though. I suppose it figures: this post was probably adversely selected for being easier to write due to its simpler content ;)
Curated! Beliefs should pay rent in anticipated experiences is a foundational tenet of this site, and AI is turning out to be an important topic, so thank you to jessicata for compiling this list.
Somehow this feels a lot more interesting than the list of predictions on Manifold, perhaps because of the selection/curation and because it isn't filtered to predictions that ended up on a market. I'd be interested in someone making an @record-prediction bot on Twitter that you can reply to tweets with, and then the predictions get added to some database like this.
Predictions often come down to operationalization, and that'd be a neat expansion; but even without it, it feels neat to have them collected.
Thanks. I hope we can eventually evaluate every prediction collected here!
I read this and think "ah, yes, this is valuable and important and I should be trying to do that more". And I thought as much when I first read it. I don't think it stayed on my mind. It's too compressed and not a ready-to-use cognitive strategy.
But taking a few moments to extrapolate it into something better, starting with why I'm not doing it to begin with:
Yet I don't fully buy all the above.
I do think that to do more of this, to make it a habit, it'll need intentional practice. Scheduled 30-minute blocks on the calendar. Seems worth it; I should add it to ye olde exobrain to remind me. I'm forming an intention to try it.
The other piece is the noticing. I don't think I have a part of my brain that registers a "reached some milestone" event such that other actions could be triggered by it. Something, something Logan's Noticing sequence. I'll try that.
Ok, so where does that leave me regarding this crosspost?
I want to give this a 4 because it's Rationality stuff from Eliezer. I don't think I can, because, great-seeming as it is, I don't see that people will be able to do a lot with it without a bunch of unpacking (as I'm attempting). Then again, if I do the post-inspired work for a while and get great gains, I might want to say "it was short, but it had such a large effect on me that it was def worth a 4 or even 9!"
I think this post does a good job of conveying the challenges here, grounded in actual cases. (It's hard for me to evaluate whether it does a great job because of my pre-existing knowledge of the topic.) I think this stuff is hard and I have so much sympathy for anyone who's been caught up in it, if they weren't the instigator.
I don't feel convinced it's impossible to do this much better. My own median world isn't very fleshed out, but my gut tells me that dath ilan has figured out some good wisdom and process here, and I trust it. I'd also guess that if Lightcone did more in this realm[1], we'd eventually figure out processes that make things better for all involved, and possibly not just within our own domain but as guidelines for other groups to follow too.
Given that, I think it's kinda bad to call the problem impossible. Even if something is hard and we're unlikely to make progress on it, don't cause people not to try through excessive pessimism.
I'll probably give this post a 1. I'd be excited if anyone wrote a sequel that offered guesses at how to design a better system (inspired by the actual challenges encountered), and perhaps an experimental guide for others to try out.
We were doing a lot more of this while running the Lightcone Offices. Lighthaven is typically rented out to other groups, and isn't maintaining enough of its own persistent in-person community for us to have reason to do a lot of this kind of adjudication. (Ben's big investigation started in the Offices days, and is a case from which there is a lot to be learned, but I do think that if, unfortunately, we had more such cases, we would learn and get better.)
I think this is a valuable post. I say that less because of the specific ideas (they all seem like plausibly correct analyses to me) and more because it explores the problem at all.
1. There's a societal taboo against discussions of intelligence and IQ. Although it's much weaker on LessWrong, I wonder whether it's not completely absent here, and that's why we don't get many posts like this one.
2. I often feel annoyed and judgmental that broader society doesn't clamor for longevity increases – it seems so correct to think these are possible and important. Reading this post, I wonder if I commit the same mistake regarding intelligence enhancement. It clearly should be doable.
The argument against thinking about this stuff is that we have more dire and urgent problems (AI), and in contrast there isn't that much tractability here. But was I justified in believing that before this post? Am I still justified in believing it?
In reality, I (and others?) feel stuck regarding AI; if so, isn't intelligence stuff worth more attention? This post actually gives it that. I'm caught between giving it a 4 and a 9.
Feels like this points at correct things, and I'm amenable to it being one of the top posts for 2024. It didn't change much for me (as opposed to @Ben Pace, who thinks about it many times per month according to his review) or feel so spot-on that I'd want to give it a high vote. I'll probably give it something between 1 and 4.
Areas where it strikes me (admittedly with not that much thought or careful reading) as not perfectly right:
Notwithstanding the heading contra this, my instinct is to want to reduce "believing in" statements to a combination of "I believe (Bayesian-style) that good things happen if I invest in X" + "I am publicly declaring myself for X (kickstarter / commitment mechanism)". Which is a little bit interesting, but also a known phenomenon. Added to that, you get boring old motivated cognition telling yourself "I'll get this done in three hours". This might be an effective semi-self-aware self-deception to get yourself to do things that you wouldn't otherwise do, but it is also manipulation of the Bayesian belief slots in your head in order to get some result.
So believing-in's are Bayesian beliefs with some indirection + an expression of commitment and/or group affiliation. If so, that is useful to point out.
An extension here that'd be neat is to analyze how often expressed "values" are believing-in's, e.g. "I believe in family", "I believe in democracy". If those are actually just Bayesian beliefs + commitment, then they're a lot more defeasible than the intrinsic inherent base "values" LessWrong normally talks about.
This post is entertaining, and was valuable for describing to me a group of people with whom I never interact (highly incompetent liars), but not all that useful given that I never interact with such people. I don't think I especially need an existence proof for lying at all; I do think it'd help to get a post with examples of lying that are closer to what I'd encounter, or at least sophisticated enough to pass if you're too credulous.
I have a feeling of distaste for this post from an unusual angle: when we were first introducing a recommendation engine, out of the box the algorithm started maximizing click-through rate (and was pretty good at it), but definitely doing so by promoting posts with the most click-baity titles. This one was perhaps the algorithm's favorite post. (We then adjusted things to pull towards a distribution we liked better, at the expense of some CTR.) The exercise gave me a sense of what's going wrong with the internet at large.
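(For concreteness, a toy sketch of what "pulling towards a distribution we liked better" could look like; the scoring function, names, and weights here are invented for illustration and aren't the actual implementation:)

```python
# Hypothetical illustration (not the actual recommendation code): blend a
# CTR-maximizing score with a quality prior so click-baity titles can't
# dominate the ranking. Names and weights are made up for the example.

def blended_score(predicted_ctr: float, quality_prior: float, ctr_weight: float = 0.6) -> float:
    """Mix predicted click-through rate with an editorial quality prior."""
    return ctr_weight * predicted_ctr + (1 - ctr_weight) * quality_prior

posts = [
    {"title": "You Won't Believe This One Trick", "ctr": 0.12, "quality": 0.2},
    {"title": "A Careful Analysis of X", "ctr": 0.04, "quality": 0.8},
]

# Rank by the blended score instead of raw predicted CTR.
ranked = sorted(posts, key=lambda p: blended_score(p["ctr"], p["quality"]), reverse=True)
for p in ranked:
    print(p["title"], round(blended_score(p["ctr"], p["quality"]), 3))
```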
It's not a bad post, but I think something has gone wrong if it's high up on our list of "best posts". Looking at the votes so far, a lot of people voted on it, probably because a lot of people read it given the alluring title... though many votes are 4s, and I'm curious what people thought was so valuable.
I'm giving it a -1 because it's not the intellectual progress I hope to see, and I didn't find it all that helpful. A fun read, though.
The first concept in this post really stuck with me, that of computational kindness vs <whatever the kindness of letting the other choose is>. The OP writes they got it from elsewhere, but I appreciate it having made it to me.
I'd really love it if it had a better solution for how to pick between kindnesses, as I can find myself wondering which one is preferred.
The other concepts are great too. They hadn't stuck in my mind from the original reading, but perhaps will now.
I really wouldn't mind more posts just providing me with useful handles like this, so good stuff.
Noted! Thanks for responding and clarifying. If you had any examples you'd encountered, that might be helpful.
In my case, I have encountered, e.g., startup founders who lied to their clients blatantly, but semi-competently in that they could hope not to get caught. Things like saying "our product can [already] do that too" and then running to the engineers.