So you read The Three-Body Problem but not The Dark Forest. Now that I think about it, that actually goes a long way toward putting the rest into context. I'm going to go read about conflict/mistake theory and see if I can get into a better headspace to make sense of this.
Have you read Liu Cixin's The Dark Forest, the sequel to The Three-Body Problem? The situation on the ground might be several iterations more complicated than you're predicting.
I used the phrase "high-status men" as a euphemism for something I'm not really comfortable talking about in public, and didn't notice it would be even harder to parse for non-Americans. My apologies.
I used "high-status men" mainly as the opposite of low-status men: men who are low status due to being short, ugly, unintelligent, or socially awkward, sufficiently so that they were not able to gain social status. These people are repellent to other men as well as to women, sadly. @Roko has been tweeting about fixes to this problem, such as reforms in the plastic surgery industry, and the EA and rationalist communities are well above the base rate (e.g. compared to classical music society) for tolerating/improving low social skills and male shortness. The underlying dynamic is driven by primate instincts which usually cannot be overcome, in spite of people feeling optimistic about their ability to overcome them. The degree of social awkwardness is defined/measured by the harm it does someone; if someone looks "socially awkward" but in a way that remains charming or likable, that is not a serious (or even significant) case, as it does not doom them to low social status.
This is also a reason why so many people have so little tolerance for non-transhumanists as a class of ideologues: non-transhumanists accept the status quo of our current tech level, where human genetic diversity dooms a large portion of people to a pointlessly sad and miserable life without their consent (on top of dooming everyone to a short life).
I think this might be typical-minding. The consequences of this dynamic are actually pretty serious at the macro scale, e.g. damage to the reputation of meetups, and evaporative cooling as women and high-status men avoid public meetups and stop meeting people new to AI safety.
I'm glad to hear there are people who don't let it get to them, because it is frankly pretty stupid that this has the consequences it does at the macro scale. But it's still well worth some kind of solution that benefits everyone.
such as making people feverishly in favor of the American side and opposed to the Russian side in proxy wars like Ukraine.
Woah wait a second, what was that about Ukraine?
I predict at 95% that automated manipulation strategies similar to these were deployed by US, Russian, or Chinese companies or agencies to steer people's thinking on the Ukraine war and/or Covid-related topics.
Does stuff like the Twitter Files count? Because that was already confirmed; if it counts, it's at 100%.
It seems like if capabilities are escalating like that, it's important to know how long ago it started. I don't think the order-of-magnitude-every-4-years trend would last (compute bottleneck, maybe?), but I see what you're getting at: the loss of hope for agency and stable groups may be happening along a curve that went bad a while ago.
Having forecasts about state-backed internet influence during the Arab Spring and other post-2008 conflicts seems important for estimating how long ago the government interest started, since that period was close to the deep learning revolution. Does anyone have good numbers for these?
What probability do you put on AI safety being attacked or destroyed by 2033?
these circumstances are notable due to the risk of these capabilities being used to damage or even decimate the AI safety community, which is undoubtedly the kind of thing that could happen during slow takeoff, if slow takeoff transforms geopolitical affairs and the balance of power
Wouldn't it probably be fine as long as no one in AI safety goes about interfering with these applications? I get an overall vibe from people that messing with this kind of thing is more trouble than it's worth. If that's the case, wouldn't it be better to leave it be? What's the goal here?
This is interesting, but why is this relevant? What are your policy proposals?