Dagon

Just this guy, you know?

Comments

On Falsifying the Simulation Hypothesis (or Embracing its Predictions)

I think you're missing an underlying point about the Boltzmann Brain concept - simulating an observer's memory and perception is (probably) much easier than simulating the things that seem to cause the perceptions.

Once you open up the idea that universes and simulations are subject to probability, a self-contained instantaneous experiencer is strictly more probable than a universe which evolves the equivalent brain structure and fills it with experiences, or than a simulation of the brain plus some particles or local activity which change it over time.

How long would you wait to get Moderna/Pfizer vs J&J?

A lot depends on how much personal control you have over when and what kind of vaccine you get.  If you knew for certain you could wait 3 weeks and get your preferred vaccine, that's probably better than taking J&J today.  But if there's a fair chance that you WON'T be able to - either you'll have to wait much longer or take J&J anyway - you're probably better off just taking it now.

The driving factor is just how much COVID-19 sucks, and the cost of getting it during that voluntary gap.  If you're truly comfortable and truly locked down, then waiting longer is more reasonable, and it also lets people who need it more than you get it sooner.  In that case, delays of up to a few months may be justified.  If you're only mostly locked down (as I am - I still go out briefly a few times a week for things that can't easily be delivered), then delay is riskier and you should prioritize any vaccine, delaying no more than a week or two.
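To make that concrete, here's a toy expected-cost sketch in Python.  Every number in it - the weekly infection risk, the efficacies, the 26-week "long wait" scenario - is a made-up placeholder, not a real estimate; the point is the shape of the comparison, not the values.

```python
# Toy expected-cost comparison: take J&J today vs. wait for Moderna/Pfizer.
# All numbers below are illustrative assumptions, not real estimates.

WEEKLY_INFECTION_RISK = 0.002  # assumed chance of catching COVID in an unprotected week
JJ_EFFICACY = 0.70             # assumed J&J efficacy
MRNA_EFFICACY = 0.95           # assumed Moderna/Pfizer efficacy
HORIZON_WEEKS = 52             # how long post-vaccine protection matters in this toy model

def expected_cases(weeks_unprotected: float, efficacy: float) -> float:
    """Expected COVID cases: full risk while waiting, reduced risk afterward."""
    waiting = weeks_unprotected * WEEKLY_INFECTION_RISK
    vaccinated = HORIZON_WEEKS * WEEKLY_INFECTION_RISK * (1 - efficacy)
    return waiting + vaccinated

jj_now = expected_cases(0, JJ_EFFICACY)
certain_short_wait = expected_cases(3, MRNA_EFFICACY)

# The crux of the comment: the wait may not stay short.
# Mix in a 50% chance that it stretches to 26 weeks.
P_LONG_WAIT = 0.5
uncertain_wait = ((1 - P_LONG_WAIT) * expected_cases(3, MRNA_EFFICACY)
                  + P_LONG_WAIT * expected_cases(26, MRNA_EFFICACY))

print(f"J&J today:            {jj_now:.4f}")
print(f"Guaranteed 3wk wait:  {certain_short_wait:.4f}")
print(f"Unreliable wait:      {uncertain_wait:.4f}")
```

With these made-up numbers, a guaranteed 3-week wait beats J&J today, but a coin-flip chance of the wait stretching to six months flips the answer.  The crossover is driven by how long the expected wait is relative to the efficacy gap, which is why certainty about the timeline matters more than the efficacy numbers themselves.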

Wanting to Succeed on Every Metric Presented

The flip side of this is tradeoff bias (a term I just made up - it's related to false dichotomies): the mistaken assumption that you have to give up something to get a desired goal, and that costs always equal rewards.  Some people CAN get all As, excel at sports, and have a satisfying social life.  Those people should absolutely do so.  

I think the post has good underlying advice: don't beat yourself up or make bad tradeoffs if you CAN'T have it all.  Experimenting to understand tradeoffs, and making reasoned choices about what you really value, are necessary.  But don't give up things you CAN get just because you assume there's a cost you can't identify.

On Falsifying the Simulation Hypothesis (or Embracing its Predictions)

Anthropic reasoning is hard.  It's especially hard when there's no outside position or evidence about the space of counterfactual possibilities (or really, any operational definition of "possible").  

I agree that we're equally likely to be in any simulation (or reality) that contains us.  But I don't think that's as useful as you seem to think.  We have no evidence of the number or variety of simulations that match our experience/memory.  I also like the simplicity assumption - Occam's razor continues to be useful.  But I'm not sure how to apply it - I very quickly run into the problem that "god is angry" is a much simpler explanation than a massive set of quantum interactions. 

Is it simpler for someone to just simulate this experience I'm having, or to simulate a universe that happens to contain me?  I really don't know.  I don't find https://en.wikipedia.org/wiki/Boltzmann_brain very compelling as a random occurrence, but I have to admit that as the result of an optimization/intentional process like a simulation, it's simpler than the alternative: that the full history of the specific things I remember has actually existed or been simulated.

niplav's Shortform

On self-reflection, I just plain don't care about people far away as much as those near to me.  Parts of me think I should, but other parts aren't swayed.  The fact that a lot of the motivating stories for EA don't address this at all is one of the reasons I don't listen very closely to EA advice.  

I am (somewhat) an altruist.  And I strive to be effective at everything I undertake.  But I'm not an EA, and I don't really understand those who are.

Good insight, but I'm not sure if it's an error, or just a feature of the fact that reality is generally entangled.  Most actions do, in fact, have multiple consequences on different axes.  

You often end up pulling multiple levers, trying to amplify the effects you like and dampen the ones you don't.

Preventing overcharging by prosecutors

For something that's impossible to implement, this seems like a surprisingly small step toward EITHER futarchy and the embedding of good conditional predictions in more policy and behavioral decisions, OR resolving the broken incentives of prosecutors (convictions, not truth).

Raemon's Shortform

I was mostly reacting to "I'd previously talked about how it would be neat if LW reacts specifically gave people affordance to think subtler epistemically-useful thoughts," and failed my own first rule of evaluation: "compared to what?"

As something with more variations than karma/votes, and less distracting/lower-hurdle than comments, I can see reacts filling a niche.  I'd lean toward something more like tagging, and less like 5-10 variations on a vote.  

Raemon's Shortform

I don't participate in a very wide swath of social media, so this may vary beyond FB and the like.  But from what I can tell, reacts do exactly the opposite of what you say - they're pure mood affiliation, with far less incentive or opportunity for subtlety or epistemically-useful feedback than comments have.

The LW reacts you've discussed in the past (not like/laugh/cry/etc, but updated/good-data/clear-modeling or whatnot) probably DO give some opportunity, but can never be as subtle or clear as a comment.  I wonder if something like Slack's custom-reacts (any user can upload an icon and label it for use as a react) would be a good way to get both precision and ease.  Or perhaps just a flag for "meta-comment", which lets people write arbitrary text that's a comment on the impact or style or whatnot, leaving non-flagged comments as object-level comments about the topic of the post or parent.
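To make the custom-react and meta-comment ideas concrete, here's a minimal data-model sketch.  Everything in it is hypothetical - the class names, fields, and the is_meta flag are just illustrations of the two mechanisms, not anything LW has actually proposed.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CustomReact:
    """A user-defined react, Slack-style: any user uploads an icon and labels it."""
    label: str       # e.g. "updated-me", "good-data", "clear-modeling"
    icon_url: str
    created_by: str

@dataclass
class Comment:
    author: str
    body: str
    is_meta: bool = False  # True: feedback on impact/style, not the object-level topic
    reacts: dict[str, int] = field(default_factory=dict)  # react label -> count

    def add_react(self, react: CustomReact) -> None:
        """Record one use of a react; precise labels replace bare mood-affiliation votes."""
        self.reacts[react.label] = self.reacts.get(react.label, 0) + 1

# Usage: a precise react plus a flagged meta-comment, instead of a bare upvote.
updated = CustomReact("updated-me", "https://example.com/updated.png", "some_user")
reply = Comment("reader", "This shifted my model noticeably.", is_meta=True)
reply.add_react(updated)
```

The point of the sketch: user-chosen labels give reacts precision without the hurdle of a full comment, and the meta flag cleanly separates style/impact feedback from object-level discussion.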

Forcing yourself to keep your identity small is self-harm

Thanks for this - I mostly agree, but it's important to note that a lot of this is confusion about the metaphor(s) of identity: it's not clear that "keep it small" actually means anything, let alone that it means to me what Paul Graham intended, let alone whether what works for him works for me or you.

I tend to think of "keep my identity small" as "keep my attachments to identity dimensions weak".  I am not my (current) identity - neither the salient points of a self-image at any point in time, nor the things that any friends or acquaintances use to summarize and predict me.  I am a collection of related identities across contexts and time.  I'm not sure if I'm more than that, but I'm at least that, not any given point-identity.

I'm very happy to take your reminder that the best path (for most of us) to this is acceptance rather than denial or force.  Accept that your self-perception is incomplete, and that your experiences will be deeply impacted by your self-image and the many, many variations of others' images of you.  You CAN adjust these images and perceptions, but you probably CANNOT just declare them to be different.
