I haven't read your post in detail. But 'effective disbelief' sounds similar to Stuart Armstrong's work on indifference methods.
I was thinking the same thing when I remembered myself at 11, who was more than capable of staying home alone just fine. I don't really get what is so special about being home alone at night.
For what it's worth, my brain thinks of all of these as 'deep interesting ideas', which your post might intuitively have pushed me away from. Just noticing that I'd be super careful not to use this idea as a curiosity-killer.
And that's what explains the attractiveness of the appeal-to-persuading-third-parties. What "You'll never persuade people like that" really means is, "You are starting to persuade me against my will, and I'm laundering my cognitive dissonance by asserting that you actually need to persuade someone else who isn't here."
Big if true. Going to look out for this in future conversations.
Your example still seems confused to me. Maybe try something simpler, like "Will it rain tomorrow?" because you want to pack for a trip. There are lots of things you can look into to figure out whether this is likely. For example, if it's cloudy now, that probably has some bearing on whether it will rain. You can look up past weather records for your region. More recently, we have detailed models informing forecasts, accessible through the internet, that can tell you about the weather tomorrow. All of these are evidence.
There are also lots of observations you can make that are, for all you know, uncorrelated with whether it will rain tomorrow, like the outcome of a die you throw. These do not constitute evidence toward your question, or at least not very informative evidence.
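To make the distinction concrete, here is a minimal Bayesian-update sketch with made-up numbers (the prior and the likelihoods are illustrative assumptions, not real weather statistics): an informative observation like "cloudy now" shifts the posterior, while an uninformative one like a die roll leaves it unchanged.

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) via Bayes' rule, given P(H), P(E|H), P(E|not H)."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 0.3  # assumed P(rain tomorrow)

# Informative: clouds are assumed more likely given rain (0.8 vs 0.4),
# a likelihood ratio of 2, so the posterior moves up.
after_clouds = update(prior, 0.8, 0.4)

# Uninformative: a fair die shows a 4 with probability 1/6 either way,
# a likelihood ratio of 1, so the posterior stays at the prior.
after_die = update(prior, 1/6, 1/6)

print(round(after_clouds, 3))  # 0.462
print(round(after_die, 3))     # 0.3
```

The likelihood ratio P(E|H)/P(E|not H) is what makes an observation evidence: when it equals 1, the observation tells you nothing about H.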
Also, if you are very concerned about yourself, cryonics seems like the more prosocial option. 0.1-10% still seems kinda high for my personal risk preferences.
Thus, capabilities work shifts from being net-negative to net-positive in expectation.
This feels too obvious to say, but I am not against building AGI ever; because the stakes are so high and the incentives are aligned all wrong, I think speeding up is bad on the margin. I do see the selfish argument and understand that not everyone would like to sacrifice themselves, their loved ones, or anyone likely to die before AGI arrives for the sake of humanity. Also, making AGI happen sooner is, on the margin, not good for taking over the galaxy, I think. (Somewhere on the EA Forum there is a good estimate of this; the basic argument is that space colonization only grows as O(n^2) or O(n^3), so it is very slow.)
I now think the probabilities of AI risk have steeply declined to only 0.1-10%, and all of that probability mass is plausibly reducible to ridiculously low numbers by going to the stars and speeding up technological progress.
I think this is wrong (in that: how does speeding up reduce risk? What do you want to speed up?). I'd actually be interested in the case for this that I was promised in the title.
Past me is trying to give himself too much credit here. Most of it was epistemic luck and high curiosity that led him to join Søren Elverlin's reading group in 2019; after that, he just got exposed to the takes from the community.
You mean my link to arXiv? The PDF there should be readable. Or do you mean the articles linked in the PDF? Those seem to work just fine as well.