There have been a handful of kashrus scandals in which meat from other supply chains was mixed into products that were supposed to be kosher certified. That seems like a useful reference point for how this sort of thing can slip through even with an extensive monitoring system intended to prevent it.
It's technically part of the US, and if you move here as a US citizen you get a 0% capital gains tax rate, which is really good if you're investing/trading/etc. I'd love it if more people moved here.
San Juan, Puerto Rico.
I'm currently looking into buying a bank or insurance company to do exactly that.
It's really non-trivial to borrow large amounts at low rates in order to lend them out. Far easier said than done.
Crypto yields are currently 20%+ annualized with fairly low risk if you know what you're doing
No, because that's a meaningless claim about external reality. The only meaningful claims in this context are predictions.
"Do you expect to see chaos, or a well formed world like you recall seeing in the past, and why?"
The latter. Ultimately that gets grounded in Occam's razor and Solomonoff induction making the latter simpler.
I've spent a lot of time and written a handful of posts (including one on the interaction between Solomonoff and SIA) building my ontology. Parts may be mistaken but I don't believe it's "confused". Tabooing core concepts will just make it more tedious to explain, probably with no real benefit.
In particular, the only actual observations anyone has are of the form "I have observed X", and that needs to be the input into Solomonoff. You can't input a bird's eye view because you don't have one.
Anyway, it seems weird that being altruistic affects the agent's decision on a purely local bet. You end up with the same answer as I do on that question, acting "as if" the probability were 90%, but in a convoluted manner.
Maybe you should taboo probability. What does it mean to say that the probability is 50%, if not that you'll accept purely local bets at better odds and reject them at worse odds? The only purpose of probability in my ontology is prediction for betting (or for decision-making that maps onto betting). Maybe it's your notion of probability that is confused.
A couple of things.
If you're ok with time-inconsistent probabilities then you can be Dutch-booked.
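To make that concrete, here's a minimal sketch of the Dutch-book arithmetic, with hypothetical numbers of my choosing: an agent who assigns P(E) = 0.5 now but will predictably assign P(E) = 0.9 later, without new evidence, can be bought and sold against for a guaranteed loss.

```python
# Sketch: Dutch book against predictably time-inconsistent credences.
# Hypothetical numbers: the agent holds P(E) = 0.5 now and will
# predictably hold P(E) = 0.9 later, with no new evidence in between.

def agent_net(outcome_E: bool) -> float:
    p_now, p_later = 0.5, 0.9
    # t0: agent sells a "$1 if E" ticket at its current fair price.
    cash = p_now           # receives $0.50
    # t1: agent buys the same ticket back at its new fair price.
    cash -= p_later        # pays $0.90
    # The repurchased ticket cancels the one sold, so the actual
    # outcome of E never matters: the loss is locked in either way.
    return cash

# Guaranteed loss of $0.40 regardless of whether E happens.
assert abs(agent_net(True) + 0.40) < 1e-9
assert agent_net(True) == agent_net(False)
```

The point is only that the loss is outcome-independent; the specific prices just have to track the agent's stated credences at each time.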
I think of identity in terms of expectations. Right before you go to sleep, you have a rational subjective expectation of "waking up" as any one of the numbers 1-20, each with 5% probability.
It's not clear how the utility function in your first case says to accept the bet given that you hold the probability at 50/50. You can't be maximizing utility, hold that probability, and accept the bet; that's just not what maximizes utility under those assumptions.
My version of the bet shouldn't depend on whether you care about other agents, because the bet doesn't affect other agents.
You can start with Bostrom's book on anthropic bias. https://www.anthropic-principle.com/q=book/table_of_contents/
The bet is just that each agent is independently offered a 1:3 deal. There's no dependence between agents as in EY's post.
You're just rejecting one of the premises here, and not coming close to dissolving the strong intuitions and arguments many people have for SIA. If you insist the probability is 50/50, you run into paradoxes anyway: if each agent is offered a bet at 1:3 odds, they will reject it if they believe the probability is 50%, yet in advance you would want agents who see green to take that bet.
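The expected-value arithmetic behind that last point can be sketched as follows. I'm reading "1:3 odds" as win $1 if right, lose $3 if wrong (breakeven credence 0.75); those stakes are my assumption, not spelled out above.

```python
# Sketch of the 1:3 bet arithmetic. Assumed stakes: win $1 if the
# agent's guess is right, lose $3 if wrong, so breakeven is p = 0.75.

def bet_ev(p_win: float, win: float = 1.0, lose: float = 3.0) -> float:
    """Expected value of the bet to an agent with credence p_win."""
    return p_win * win - (1 - p_win) * lose

# An agent with the SIA-style credence of 0.9 accepts:
assert bet_ev(0.9) > 0    # 0.9*1 - 0.1*3 = +0.6
# An agent insisting on 50/50 rejects:
assert bet_ev(0.5) < 0    # 0.5*1 - 0.5*3 = -1.0
```

So the two credences really do dictate opposite choices on the same local bet, which is the tension the comment is pointing at.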