The absolute travel time matters less for disease spread in this case. It doesn't matter how long it would theoretically take to travel to North Sentinel Island if nobody actually goes there for years on end. Disease won't spread to such places naturally.
And if an organization is so hell-bent on destroying humanity as to track down every last isolated pocket of human settlements on Earth (a difficult task in itself as they're obscure almost by definition) and plant the virus there, they'll most certainly have no trouble bringing it to Mars either.
I strongly believe that nuclear war and climate change are not existential risks, by a large margin.
For engineered pandemics, I don't see why Mars would be more helpful than any other isolated pockets on Earth - do you expect there to be less exchange of people and goods between Earth and Mars than, say, North Sentinel Island?
Curiously enough, the last scenario you pointed out - dystopias - might just become my new top candidate for x-risks that are actually addressable through Mars colonization. Need to think more about it though.
Moving to another planet does not save you from misaligned superintelligence.
Not only that, there are hardly any other existential risks to be avoided by Mars colonization, either.
Neuralink... I just don't see any scenario where humans have much to contribute to superintelligence, or where "merging" is even a coherent idea.
The only way I can see Musk's position making sense is that it's actually a 4D chess move to crack the brain's algorithm and use it to beat everyone else to AGI, rather than the reasoning he usually gives in public for why Neuralink is relevant to AGI. Needless to say, I am very skeptical of this hypothesis.
I would love to hear some longevity-related biotech investment advice from rationalists, since I (and presumably many others here) predict longevity to be the second biggest deal in big-picture futurism.
The only investment idea I can come up with myself is for-profit spin-off companies from the SENS Research Foundation, but that's just the obvious option for someone without expertise in the field who trusts the most vocal experts.
Although some growth potential has already been lost due to the pandemic bringing a lot of attention towards this field, I think we're still early enough to capture some of the returns.
If you want to learn more about ongoing research into superheavy elements:
To me the most exciting prospect of this research is the potential discovery of not just an island, but an entire continent of stability that could open up endless engineering potential in the realm of nuclear chemistry.
No that's not what I meant; these two issues divide different tribes but the level of toxicity and fanaticism is similar. Heated debates around US-China war scenarios are very common in Taiwanese/Chinese overseas communities.
I also have a personal interest in trying to keep Lesswrong politics-free because for me fighting down the urge to engage in political discussions is a burden, like an ex-junkie constantly tempted with easily available drugs. Old habits die hard, so I immediately committed to not participate in any object-level discussions upon seeing the title of this post. I'm not sure whether this applies to anyone else.
I do have a sense that it's less likely to explode in bad ways, and less likely to attract bad people to the site.
I agree with the first part of the sentence but disagree with the second part. In my view, Lesswrong's best defense thus far has been a frontpage filled with content that appears bland to anyone with a combative attitude coming from other, more toxic social media environments. Posts like this one, though, stick out like a sore thumb and signal to onlookers that discussions about politics and geopolitics are now an integral part of Lesswrong, even when the discussions themselves have been respectful and benign so far. If my hypothesis is correct, an early sign of deterioration would be an accumulation of newly registered accounts that solely leave comments on one or two politics-related posts.
Politics is politics. US vs China is about as divisive and tribal as you can go, on the same level as pro- vs anti-Trump. Would you encourage political discussions of the latter type on Lesswrong, too?
Why couldn't land-based delivery vehicles become autonomous, though? That would also cut the human out of the loop.
One reason might be that autonomous flying drones are easier to realize. It is true that the air is an easier environment to navigate than the ground, but landing and taking off at the destination could involve very diverse and unpredictable situations. You might run into the same long-tail problem as self-driving cars, especially since a drone that can lift several kilos has dangerously powerful propellers.
Another problem is that flying vehicles are generally energy-inefficient, since they must constantly overcome gravity, and even more so at long distances (analogous to the tyranny of the rocket equation: the extra battery needed for longer range adds mass that itself costs energy to lift). Of course you could use drones just for the last mile, but that's an even smaller pool to squeeze value out of.
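To make the energy argument concrete, here is a rough back-of-the-envelope sketch in Python. All numbers are my own illustrative assumptions (a hypothetical 10 kg delivery multirotor estimated via momentum theory, and a generic electric van at ~250 Wh/km carrying ~100 parcels), not figures from any real vehicle:

```python
import math

# Illustrative assumptions only: hover power from actuator-disk momentum
# theory, P = (m*g)^1.5 / sqrt(2 * rho * A) / FM, and cruise power treated
# as roughly equal to hover power.
g, rho = 9.81, 1.225                       # gravity (m/s^2), air density (kg/m^3)

mass = 10.0                                # all-up mass incl. ~2 kg parcel (kg)
rotor_area = 4 * math.pi * 0.4 ** 2        # four rotors of 0.4 m radius (m^2)
fom = 0.6                                  # rotor figure of merit (efficiency)
v_cruise = 15.0                            # cruise speed (m/s)

hover_power = (mass * g) ** 1.5 / math.sqrt(2 * rho * rotor_area) / fom  # watts
drone_wh_per_km = hover_power / v_cruise / 3.6   # W/(m/s) = J/m -> Wh/km

# Generic electric van: ~250 Wh/km, ~100 parcels per trip.
van_wh_per_km_per_parcel = 250 / 100

print(f"drone: ~{drone_wh_per_km:.1f} Wh per parcel-km")
print(f"van:   ~{van_wh_per_km_per_parcel:.1f} Wh per parcel-km")
```

Under these assumptions the drone comes out several times worse per parcel-kilometer than the shared van, even before counting the deadhead return flight - which is the point about flight being a poor fit for anything beyond the last mile.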
In general, delivery drones seem less well-suited for densely populated urban environments, where landing spots are hard to come by and a few ground trips can serve an entire apartment building. And that's where most of the world will live anyway.