Ok, break this down a bit for me - I'm just a simple biological entity, with much more limited predictive powers.
It's worth simulating a vast number of possible minds which might, in some information-adjacent regions of a 'mathematical universe', be likely to be in a position to create you
This is either well beyond my understanding, or it's sleight-of-hand regarding identity and the use of "you". It might help to label entities. Entity A has the ability to emulate and control entity B. It thinks that somehow its control over entity B is influential over entity C, in the distant past or in an imaginary mathematical construct, whom it wishes would create entity D in that disconnected timeline.
Nope, I can't give this any causal weight in my decisions.
I have a very hard time even justifying 1/1000. 1/10B is closer to my best guess (plus or minus 2 orders of magnitude); a toy multiplication follows the list below. It requires a series of very unlikely events:
1) enough of my brain-state is recorded that I COULD be resurrected
2) the imagined god finds it worthwhile to simulate me
3) the imagined god is angry at my specific actions (or lack thereof) enough to torture me rather than any other value it could get from the simulation.
4) the imagined god has a decision process that includes anger or some other non-goal-directed motivation for torturing someone who can no longer have any effect on the universe.
5) no other gods have better things to do with the resources and step in to stop the angry one from wasting them.
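To make the arithmetic concrete, here's a minimal sketch in Python. The point estimates for each step are entirely made up for illustration; only the multiplicative structure matters:

```python
# Toy Fermi estimate: multiply rough probabilities for each unlikely step.
# All numbers are illustrative stand-ins, not actual claims.
p_brain_state_recorded = 1e-2  # (1) enough brain-state survives to resurrect me
p_god_simulates_me     = 1e-3  # (2) the imagined god bothers simulating me
p_god_angry_at_me      = 1e-2  # (3) angry at my specific actions (or inaction)
p_anger_drives_torture = 1e-1  # (4) non-goal-directed motive to torture
p_no_god_intervenes    = 1e-1  # (5) no other god stops the waste of resources

p_total = (p_brain_state_recorded * p_god_simulates_me *
           p_god_angry_at_me * p_anger_drives_torture *
           p_no_god_intervenes)

print(f"compound probability ~ {p_total:.0e}")  # ~1e-09, in the 1/10B ballpark
```

With these (invented) inputs the product lands around 1e-9, which is why a chain of individually unlikely steps ends up many orders of magnitude below 1/1000.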
Note, even if you relax 1 and 2 so the putative deity punishes RANDOM simulated people (because you're actually dead and gone) as a proxy for punishing YOU specifically, it still doesn't make it likely at all.
It's worth putting a number on that, and a different one (or possibly the same; I personally think my chances of being resurrected and tortured vary by epsilon based on my own actions in life - if the gods will it, it will happen; if they don't, it won't) based on the two main actions you're actually considering performing.
For me, that number is inestimably tiny. I suspect anyone who thinks it's significant of fairly high neuroticism and an irrational failure to limit the sum of their probabilities to 1.
Within causal decision theory this is true, but if it were true in general then acausal decision theory would be pointless
Acausal decision theory is pointless, sure. Are there any actually acausal ones? TDT and FDT are distinct from CDT, but they're not actually acausal, just more inclusive of the causality of decisions. CDT is problematic only because it doesn't acknowledge that the decisions being made themselves have causes and constraints.
First, a generalized argument about worrying. It's not helpful, and it's not an organized method of planning your actions or understanding the world(s). OODA (observe, orient, decide, act) is a better model. Worry may have a place in this, as a way to remember and reconsider factors which you've chosen not to act on yet, but it should be minor.
Second, an appeal to consequentialism - it's acausal, so none of your acts will change it. Edit: The basilisk/mugging case is one-way causal - your actions matter, but the imagined blackmailer's actions cannot change your behavior. If you draw a causal graph, there is no influence/action arrow that leads them to follow through on the imagined threat.
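If it helps, the one-way-causal claim can be checked mechanically. Here's a minimal Python sketch; the node names and edges are my hypothetical rendering of the situation, not anything formal:

```python
# Minimal DAG for the one-way-causal claim. Edges point from cause to
# effect; node names are hypothetical labels chosen for illustration.
edges = {
    "your_present_decision": ["what_the_simulator_learns"],
    "what_the_simulator_learns": ["imagined_threat_follow_through"],
    "imagined_threat_follow_through": [],  # dead end: nothing flows back to you
}

def reachable(graph, start, goal):
    """Depth-first search: is there a directed path start -> goal?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return False

# Your action has a forward path into the imagined scenario...
print(reachable(edges, "your_present_decision", "imagined_threat_follow_through"))  # True
# ...but the imagined follow-through has no path back to your decision.
print(reachable(edges, "imagined_threat_follow_through", "your_present_decision"))  # False
```

The asymmetry of the two reachability checks is the whole point: influence runs one way only, so the threatened follow-through can't feed back into what you decide.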
I read that section more as "go experience things, and participate in non-online activities and relationships" rather than "prove to strangers that you're human". It's not "real vs AI", it's "real vs virtual".
At twenty years old, I recently gained my independence following a period of estrangement from my family. The heart of our conflict lies in my attitude toward the kindness they extend to me. I define all their favors as "actions of their own choosing," maintaining the stance that while I am grateful for their financial support, it remains their choice and not my responsibility. My family condemns this as an "inhumane selfishness" stripped of both gratitude and regret.
I'm confused. "gained my independence" makes it sound like you were being kept helpless, but if it was just optional financial gifts they gave you for their own reasons, didn't you already have independence? My suspicion is that you were disingenuously accepting support when it was convenient, and are now trying to claim that there was no reasonable expectation of behavior that came with the support.
IMO, it's a dick move. Analogies to fictional dick moves don't make it better.
Upvoted, but disagree (well, kind of - I'd like to live in that world, but I don't think it's good advice today).
Once someone is well-known, and especially if they make a living and/or invest their self-identity in their public acceptance, they are well-advised to moderate themselves, and to stay in the Overton window of their readers (even while pushing it in good directions for some readers). Even if they are convinced enough to tell their family something, they may want to water it down a bit so the public doesn't turn on them.
Speech is an action, and public speech has consequences beyond conveying information.
I congratulate you on realizing that there is no fully-trustworthy writer or authority. We're all humans, and we all have some motives that are not purely prosocial.
Note that the risks of being modeled are EXACTLY the reasons you want to publish in the first place. You want other people to be able to understand you, and to jointly update models with the information each of you provides. And you risk that people will misuse your posts to hurt or manipulate you.
I'd argue that AI increases both sides of the equation - you benefit greatly by not having to re-explain yourself, and from people AND AIs engaging with you and helping you refine/update your models. And the AIs have more time and focus to use it against you.
These may not be symmetric, so it may flip your personal equilibrium from public to private. But it should never have been binary anyway. What you publish has always been curated and limited to things you want in your permanent record, and for which you expect more benefit than risk. If that line shifts by a little, that's reasonable. If it shifts entirely one way (or the other), you're overreacting.
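As a toy model of that shifting line (the base values and AI multipliers below are invented purely for illustration):

```python
# Toy publish/withhold decision: publish when expected benefit exceeds
# expected risk. All numbers are invented stand-ins, not real estimates.
def should_publish(benefit, risk, ai_benefit_mult=1.0, ai_risk_mult=1.0):
    return benefit * ai_benefit_mult > risk * ai_risk_mult

# Pre-AI equilibrium: modest benefit, modest risk -> publish.
print(should_publish(benefit=3.0, risk=2.0))                   # True
# AI era: both sides grow, but (in this example) risk grows faster -> withhold.
print(should_publish(benefit=3.0, risk=2.0,
                     ai_benefit_mult=2.0, ai_risk_mult=4.0))   # False
```

The same comparison runs in both cases; only the multipliers move, which is why the sensible response is a shifted threshold rather than a flip to all-public or all-private.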
Thanks for the conversation; I'm bowing out here. I'll read further comments, but (probably) not respond. I suspect we have a crux somewhere around the identification of actors and the mechanisms of bridging causal responsibility for acausal (imagined) events, but I think there's an inferential gap where you and I have divergent enough priors and models that we won't be able to agree on them.