I've been thinking of doing more things like this; however, I wonder about this part:
- P(E | H1) = 0.2 (if 150g is enough, poor recovery is unlikely)
- P(E | H2) = 0.7 (if 170g is needed, poor recovery is likely if you only ate 150g)
Is there any reason why these particular conditional probabilities -- 0.2 and 0.7 -- were chosen here?
Would there be any principled way to update these probabilities as new evidence rolls in, maybe even starting both from 0.5? I think from simple observation alone our set of formulae might be under-constrained, so maybe we'd need to incorporate other streams of evidence to constrain it enough?
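To make the question concrete, here is a minimal sketch in Python (my own illustration, not from the quoted post). The first part just applies Bayes' rule with the stated 0.2 and 0.7; the second shows one principled reading of "start from 0.5": put a Beta(1, 1) prior on the likelihood itself and update its counts from observations. The function name and the observation list are hypothetical, and note that the second part needs hypothesis-labeled days, which is exactly the under-constraint I'm worried about.

```python
def posterior_h1(prior_h1, p_e_given_h1, p_e_given_h2):
    """P(H1 | E) when E (poor recovery) is observed and H2 is the complement of H1."""
    prior_h2 = 1.0 - prior_h1
    numerator = p_e_given_h1 * prior_h1
    evidence = numerator + p_e_given_h2 * prior_h2
    return numerator / evidence

# With the numbers quoted above and a 50/50 prior over the hypotheses:
print(posterior_h1(prior_h1=0.5, p_e_given_h1=0.2, p_e_given_h2=0.7))  # ~0.22

# One way to "start both from 0.5" and let evidence move the likelihood itself:
# a Beta(1, 1) prior on P(E | H1) has mean 0.5, and each observation from a day
# on which H1 is known to hold updates the counts. The outcomes are hypothetical.
alpha, beta = 1.0, 1.0                 # uniform prior over P(E | H1)
h1_day_outcomes = [True, False, True]  # did poor recovery (E) occur on H1-days?
for e in h1_day_outcomes:
    alpha += 1.0 if e else 0.0
    beta += 0.0 if e else 1.0
print(alpha / (alpha + beta))          # posterior mean estimate of P(E | H1)
```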
> Think about it - you start to exist without consciousness and you die without consciousness. You never experience any form of suffering, because you are not physically able to feel pain. You go straight to heaven, because you have no opportunity to sin in any way. You don't even have original sin because, well... you weren't born. You never experience human existence; you are just "born" in heaven and live in the perfect world from the beginning - like Adam and Eve would have if they hadn't sinned. In Christianity, that's the best position you could be in.
>
> But then, there's also the darker side of it. If you want to create the highest number of those sinless, sufferless beings, you could... get abortions as frequently as you can. The problem is, it will send YOU to hell. In that scenario, the woman who did that would be the ultimate martyr, sacrificing her own soul to create many infinitely happy children...
Source: /u/Uszanka from Reddit (lightly edited, mostly to clean up typos and likely ESL quirks)
This thought is short and sweet. I saw it and figured others here might appreciate the view as well. It was a to-me-novel way to look at what might be a somewhat fundamental incoherence at the core of Christianity; maybe some of you will be inclined to argue against it.
This also ties into something I've been thinking about lately: that there might be very few "true" Christians who do not just believe in belief, but truly, fully believe what they preach. Maybe with very heavy compartmentalization and low enough self-reflection it can happen, but the concept of heaven seems to cause some issues.
Here is another: someone who truly believes in heaven should take far fewer steps to avoid dying than their revealed preferences betray. On a long enough time horizon, evolution might make sure that, of heaven, only belief-in-belief can persist; the true believers select themselves out.
Reference class tennis, yay!
I think I somewhat see where you are coming from, but can you spell it out for me a bit more? Maybe by describing a somewhat fleshed-out, concrete example scenario? I'll acknowledge up front that what I propose below is just one hastily put together possibility of many.
Let me start by proposing one such possibility, but feel free to go in another direction entirely. Let's suppose the altruistic few put together sanctuaries or "wild human life reserves"; how might this play out afterwards? Will the selfish ones somehow try to intrude on or curtail this practice? By our scenario's granted premises, the altruistic ones do wield real power, and they use some fraction of it to maintain this sanctum. Even if the others are many, would they have much to gain by trying to mess with it? Is it just entertainment, or sport, for them? What do they stand to gain? Not really anything economic, nor more power -- or maybe you think they do?
There is one counterargument that I sometimes hear, and I'm not sure how convincing I should find it:
Do you agree or disagree with any parts of this?
P.S. This might go without saying, but this question may only be relevant if technical alignment can be, and is, solved in some fashion. With that said, I think it's entirely good to ask it, lest we clear one impossible-seeming hurdle and still find ourselves in a world of hurt all the same.
This only needs there to exist something of a Pareto frontier of either very altruistic okay-offs, well-off only-a-little-altruists, or people somewhere in between. If we have many very altruistic very-well-offs, then the argument might just make itself, so I'm arguing in the less convenient context.
This might be truly tiny, like one one-millionth of someone's wealth -- a rounding error, really. Someone arguing for side A would then be positing a very large amount of callousness if all the other points stand. Or indifference. Or some other force that pushes against the desire to help.
> The low-trust attractor starts to bend other people into reciprocal low-trust shapes, just like a prion twisting nearby proteins.
Convincing people using your actions sounds disgusting!
Could you expand on what you mean here? I'm not sure I (or others) followed you. Perhaps you mean it sarcastically?
(Formatting-wise: I'm not sure how to quote a quote here; perhaps someone knows?)
Is an audiobook version also planned, by any chance? Could preordering that one also help?
Judging from Stephen Fry's endorsement and, from what I've seen, his longstanding interest in the topic, perhaps a delightful and maybe even eager deal could be made for him to narrate? Unless some other choice would be better for either party, of course. I also understand if negotiations or existing agreements prevent anyone from confirming anything on this front; I'd be happy just to hear whether an audio version is planned or intended at all, and when, if that can be known.
You might find this concept helpful to look into, as I have: Pervasive Drive for Autonomy.