I'm an admin of this site; I work full-time on trying to help people on LessWrong refine the art of human rationality.
Longer bio: www.lesswrong.com/posts/aG74jJkiPccqdkK3c/the-lesswrong-team-page-under-construction#Ben_Pace___Benito
"AI maniacs" is maybe a term that meets this goal? Mania is the opposite side to depression, both of which are about having false beliefs just in opposite emotionally valenced directions, and also I do think just letting AI systems loose in the economy is the sort of thing a maniac in charge of a civilization would do.
The rest of my quick babble: "AI believers", "AI devotees", "AI fanatics", "AI true believers", "AI prophets", "AI ideologues", "AI apologists", "AI dogmatists", "AI propagandists", "AI priests".
I think I tend to base my level of alarm on the log of severity × probability, not on the absolute value. Most of the work is getting enough information to raise a problem to my attention as worth solving. "Oh no, my house has a decent >30% chance of flooding this week, better do something about it, and I'll likely enact some preventative measures whether it's 30% or 80%." The amount of work I put into solving it doesn't double when my odds double; mostly there's a threshold for whether it's worth dealing with at all.
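(A toy formalization of what I mean, where $\tau$ is a hypothetical action threshold I'm introducing just for illustration: doubling the probability adds only a constant increment to the alarm, so the decision is dominated by whether the product clears the threshold.)

$$\text{alarm}(s, p) = \log(s \cdot p), \qquad \text{act iff } \log(s \cdot p) > \tau$$

$$\text{alarm}(s, 2p) - \text{alarm}(s, p) = \log 2 \approx 0.69 \text{, regardless of } s \text{ or } p$$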
Setting that aside, it reads to me like the frame-clash happening here is (loosely) between "50% extinction, 50% not-extinction" and "50% extinction, 50% utopia". For the first gamble, of course 1:1 odds on extinction are enough to raise it to "we need to solve this damn problem", but for the second gamble it's actually much more relevant whether it's a 1:1 or a 20:1 bet. I'm not sure which one is the relevant one for you two to consider.
(Strong-upvote, weak-disagree. I sadly don't have time right now to reflect and write up why I disagree with this position, but I hope someone else who disagrees does.)
Relatedly, when we made DontDoxScottAlexander.com, we tried not to wade into a bigger fight about the NYT and other news sites, nor to make it an endorsement of Scott and everything he's ever written/done. It just focused on the issue of not deanonymizing bloggers when revealing their identity is a threat to their careers or personal safety and there isn't a strong ethical reason to do so. I know more high-profile people signed it because the wording was conservative in this manner.
I also have a strong personal rule against making public time-bound commitments unless I need to. When I make them, I generally regret it, because unexpected things come up and I feel guilty about not replying in the time frame I said I would.
I might be inclined to hit a button that says "I hope to respond further to this".
I've just had an interesting experience that changed my felt-sense of consciousness and being embodied.
I've played over 80 hours of the newly released Zelda game, which is a lot given that it's only been out for 14 days. I do not normally play video games very much, so this has been a fairly drastic change in how I've spent my personal time.
I'm really focused while playing it, and feel very immersed in the world of the game. So much so that I had quite an odd experience coming back to the rest of my life.
Yesterday, after playing the game for an hour, I wandered around the place my team works, and found a newly remodeled bathroom I had not seen before. I looked in, remembering how the room used to be.
I'm not quite sure how to describe the experience, but as I looked into the room, my brain ran the check "Is this one of the rooms that my body is physically located in or not?" and I instinctively stuck my head in fully to confirm that I was visible in the mirror. (I was.)
It was as though my brain had fully separated "Viewing a location" and "My human body being physically present in the location".
Noticing this has further reduced my felt sense of "being in my body". I suspect that if I am uploaded one day, remembering this experience will reduce any feeling of "but is this really me" to zero.
Curated! I loved a lot of things about this post.
I think the post is doing three things, all of which I like. First, it documents what it was like for Joe as he made substantial updates about the world. Second, it exhibits the rationalist practice of explaining what those updates look like using the framework of probabilities, considering what sorts of updates a rational agent would make in his position, and contrasting that with a helpful explicit model of how a human being makes updates (e.g. using their gut). And third, it's a serious and sincere account of something that I care about and Joe cares about. Reading this post, I felt that I was finally sharing the same mental universe as the author (and likely other people reading the post).
There are lots of more specific things to say that I don't have time for, but I'll say that the paragraph explaining that you can't have a >50% chance of your credence later doubling (or a >10% chance of 10x-ing your credence) struck me immediately as a go-to tool I want to add to my mental toolkit for figuring out what probability I assign to a given statement.
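(For anyone who wants it, here's my quick reconstruction of why that holds, assuming conservation of expected evidence, i.e. that your current credence equals the expectation of your future credence. If your credence in a statement is $p > 0$ and $q$ is the probability that it later rises to at least $kp$, then since credences are non-negative:

$$p = \mathbb{E}[p_{\text{later}}] \ge q \cdot kp \quad\Longrightarrow\quad q \le \frac{1}{k}$$

So with $k = 2$ you get $q \le 50\%$, and with $k = 10$ you get $q \le 10\%$.)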
I have not read this post, and I have not looked into whatever the report is, but I'm willing to take a 100:1 bet that there is no such craft of non-human origin (by which I mean anything actively designed by a technological species; I do not mean to rule out that simple biological matter arrived on this planet via some natural process like an asteroid). Operationalized: no Metaculus community forecast (or Manifold market with a sensible operationalization and a reasonable number of players) assigns over 50% probability, within the next 2 years, to this being a craft of non-human design.
(I am actually going to check that this post makes a claim like this now, before posting, in case I am off-topic. K, looks like I am broadly on-topic.)