You put it succinctly: I believe p(doom|personal_action) ≈ p(doom|~personal_action) for any personal action I can take. I do not see what I can do. I am also not trying to start a B2B SaaS, because spending my last days doing that is not the right thing to do.
Do you think this is wrong for most people / people trying to start an AI B2B SaaS / some other class of people you want to appeal to?
I admit, I don't quite follow the superrational part. If you're referring to some decision-theoretic widget that allows one to cooperate with other people who are capable of the same reasoning, then for it to be effective those people have to exist and one has to be one of them, right?
...How does someone this idiotic ever stay in a position of authority?
In the best case, because they patch around the data science team being a bunch of midwits by having a calibrated intuition and using that to make good enough decisions.
In the worst case, because they're socially entrenched somehow (e.g. are on good terms with their boss).
On a related note, you have access to most of the world's experts. You can reach them via email (or Twitter). Some of them have office hours.
Of course, please don't annoy people with frivolous stuff or dump crackpot writeups on them.
Congratulations, you made me make an account after a few years of reading the occasional thing or two here. :)
I confess I am one of the people who are not "brought up or brought into communities like this", and it is not at all obvious to me why one should always be maximally truth-seeking.
Truth is important in domains in which you can discover and apply useful predictive models of reality with your own intellectual capacity, or by borrowing some from others. But many domains are not like this. For example, there is no "general theory of becoming a good politician", even if there are good models for subsets of that endeavor. What people do instead is rely on their intuition (with some more rigorous reasoning mixed in sometimes). This often works very well, because our brains learn to solve problems without being able to verbalize how they do so.
It's not only learning that works this way; communication does too. A lot of what people say is absolute gibberish when parsed for truth, but nonetheless transmits useful information. Maybe some of the nonsense you heard at the philosophy event even had the right vibes to shift the thoughts of one of the participants, who then wrote down a result of this process rigorously? Or it influenced someone's behavior positively, etc.
The lower one's symbolic/verbal intelligence, the more one has to rely on this kind of implicit reasoning and the less one can produce rigorous, for-truth-parseable statements. But this does not seem to matter in a lot of fields. I don't believe Donald Trump could tell you many coherent things about how to become president. People build entire business empires without coherent beliefs. Even a good chunk of engineering works without being truth-seeking, just by iterating on something until it works (not until one understands it).
A failure mode might also be that the SaaS people are assuming the other players are not superrational. In that case a superrational player should also defect.
Without having put much thought into it, I believe (adult) humans cooperating via this mechanism is in general very unlikely. Agents cooperating relies on all agents coming to the same (or sufficiently similar?) conclusions regarding the payoff matrix and the nature of the other agents. So in human terms, this relies on everyone's ability to reason correctly about the problem and about everyone else's behavior, AND on everyone having the right information. I don't think that happens very often, if at all. The "everyone predicting each other's behavior correctly" part seems especially unlikely to me. Also, slightly different (predicted) information (e.g. AGI timelines in our case) can yield very different payoff matrices?
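To make that concrete, here is a minimal sketch (Python, with made-up prisoner's-dilemma-style payoff numbers; none of this is from the discussion above) of why the cooperation seems fragile: a superrational player only cooperates when it believes the other player reasons the same way AND expects both of them to be working from essentially the same payoff matrix. If either assumption fails, defection dominates.

```python
# Hypothetical one-shot prisoner's-dilemma payoffs from "my" perspective,
# keyed by (my_action, their_action). Numbers are arbitrary illustration.
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def superrational_choice(other_is_superrational: bool,
                         payoff_matrices_match: bool) -> str:
    """Cooperate only if I expect the other player to mirror my reasoning."""
    if other_is_superrational and payoff_matrices_match:
        # Symmetric reasoning: we both end up picking the same action,
        # so the real comparison is (C, C) vs (D, D).
        return "C" if PAYOFF[("C", "C")] > PAYOFF[("D", "D")] else "D"
    # Otherwise treat the other player's action as independent of mine;
    # D then dominates C against any fixed opponent action.
    return "D"

print(superrational_choice(True, True))    # C
print(superrational_choice(False, True))   # D  (they aren't superrational)
print(superrational_choice(True, False))   # D  (we disagree about the payoffs)
```

The point of the toy example: cooperation only survives in the single branch where both conditions hold, which is exactly the "everyone reasons correctly about everyone else and has the same information" requirement above.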