I read and understood the Least Convenient Possible World post. Given that, let me rephrase your scenario slightly:

If every winner of a certain lottery receives $X × 300 million, a ticket costs $X, the chances of winning are 1 in 250 million, you can only buy one ticket, and $X is an amount of money you would be uncomfortable losing, would you buy that ticket?

My answer is no. If the ticket price crosses a certain threshold, I become risk averse. If it were $1 or some other relatively inconsequential amount of money, then I would be rationally compelled to buy the nearly-sure-loss ticket.
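To make the expected-value reasoning explicit, here is a minimal sketch using the hypothetical numbers above (300 million × $X payout, 1-in-250-million odds, $X cost); the function name is my own:

```python
def ticket_ev(x):
    """Expected profit, in dollars, of one ticket costing x dollars
    in the hypothetical lottery described above."""
    p_win = 1 / 250_000_000
    payout = 300_000_000 * x  # winner receives 300 million times the ticket price
    return p_win * payout - x  # expected winnings minus the sure cost

print(ticket_ev(1.0))  # 0.2 -- positive expected value of $0.20 per $1 ticket
```

The expected value is positive ($0.20 per dollar staked) no matter what $X is, which is exactly why the scenario isolates risk aversion: only the absolute size of the possible loss, not the sign of the expectation, changes with the ticket price.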


*the kill-the-traveler-to-save-the-patients problem

Assuming that:

- the above solutions (patient roulette) are not viable

- upon receiving their new organs, the patients would be restored to full functionality, equal to or better utility generators than the traveler

then I would kill the traveler. However, if the traveler successfully defended himself and turned the tables on me, I would use my dying breath to happily congratulate his self-preservation instinct and wish him no further problems on the remainder of his journey. And of course I'd have left instructions with my nurse to put my body on ice and call the doctor from the next town over to come and do the transplants using my own organs.

  1. Pascal's wager

If Catholicism is true, then I'm already in hell. What else can you call an arbitrary, irrational universe?

  2. the god hole

If there is an evolutionary trap in the human mind that requires irrational belief to achieve optimal happiness, then I just add it to the list of all the other 'design flaws' and ignore it.

  3. extreme altruism

I cannot imagine a least convenient world in which something resembling what we understand of the laws of economics operates, yet both I and the Africans would not be better off by my using the money to invest in local industry, or to finance an anti-warlord coup d'état. If you want to stipulate that these people can't work, or that the dictator is unstoppable and will nationalize and embezzle my investments, then I don't see how charity is going to do any better. If there is no way my capital can improve their economy, then they are simply doomed, and I'd rather keep my money.


@Doug S

I defeat your version of the PW by asserting that no rational lottery operator would go forward with a business plan to straight-up lose $50 million. Thus the probability of your scenario, as with the Christian god, is zero.


Humanity is doomed in this scenario. The Lotuseaters are smarter, and the gap is widening. There's no chance humans can militarily defeat them now or at any point in the future. As galactic colonization continues exponentially, the two species will eventually meet again, perhaps in the far future, but the Lotusfolk will be relatively even stronger at that point. The only way humans can compete is by developing an even faster strong AI, which carries a large chance of ending humanity on its own.
So the choices are:
- accept the Lotusfolk offer now
- blow up the starline and continue expanding as normal, delaying the inevitable
- blow up the starline and gamble on strong AI, hopefully powering up human civilization to the point that it can destroy the Lotusfolk when they meet again

This choice set is based on the assumption that the decider values humanity for its own sake. I value raw intelligence, the chassis notwithstanding. So the only way I would not choose option 1 is if I thought that the Lotusfolk, while currently smarter, were disinclined to develop strong AI and go exponential, and thus, with humanity under their dominion, no one would. If humans could be coaxed into building strong AI in order to counter the looming threat of Lotusfolk assimilation, thereby creating something smarter than any of the three species combined, then I would choose option 3.