But there is no way to downvote a reaction? E.g. if you add the paperclip reaction, then all I can do is bump it by one and/or later remove my reaction, but there is no way to influence yours? So reactions are strictly additive?
The answer is to read the sequences (I'm not being facetious). They were written with the explicit goal of producing people with EY's rationality skills, so that they would go on to work on Friendly AI (as it was called then). They provide a basis for people to realize why most approaches will by default lead to doom.
At the same time, it seems like a generally good thing for people to be as rational as possible, in order to avoid the myriad cognitive biases and problems that plague humanity's thinking, and therefore its actions. My impression is that the hope was to make the world more similar to Dath Ilan.
It depends on what you mean by political. If you mean something like "people should act on their convictions", then sure. But you don't have to actually go into politics to do that, the assumption being that if everyone is sane, they will implement sane policies (with the obvious caveats of Moloch, Goodhart etc.).
If you mean something like "we should get together and actively work on methods to force (or at least strongly encourage) people to be better", then very much no. Or rather it gets complicated fast.
Jehovah's Witnesses are what first came to mind when reading the OP. They're sort of synonymous with going door to door in order to have conversations with people, often saying that they're willing for their minds to be changed through respectful discussions. They're also one of the few Christian-adjacent sects (for lack of a more precise description) to actually show large growth (at least in the West).
No.
Atheism is totally irrelevant. A deist would come to exactly the same conclusions. A Christian might not be convinced of it, but mainly because of eschatological reasons. Unless you go the route of saying that AGI is the antichrist or something, which would be fun. Or that God(s) will intervene if things get too bad?
Reductive materialism is also irrelevant. It might play into the question of whether an AGI is conscious, but that whole topic is a red herring - you don't need a conscious system for it to kill everyone.
This feeds into the computational theory of mind - it makes it a lot easier to posit the possibility of a conscious AGI if you don't require a soul for it, but again - consciousness isn't really needed for an unsafe AI.
I have fundamentalist Christian friends who are ardent believers, but who also recognize the issues behind AGI safety. They might not think it much of a problem (pretty much everything pales in comparison to eternal heaven and hell), but they can understand and appreciate the issues.
1GB of text is a lot. Naively, that's a billion letters, much more if you use compression. Or you could maybe just do some kind of magic with the question containing a link to a wiki on the (simulated) internet?
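To make the "a billion letters" claim concrete, here's a back-of-the-envelope sketch (my own illustration - the specific numbers and the repeated sample text are toy assumptions, and a repetitive sample compresses far better than real prose would):

```python
import zlib

GB = 10**9        # 1 GB in bytes; one ASCII character is one byte
naive_chars = GB  # so naively, about a billion letters of plain text

# Compression buys you more: a general-purpose compressor shrinks
# natural-language text, so the same gigabyte holds several times as
# many characters. This toy sample repeats, which inflates the ratio.
sample = b"1GB of text is a lot. Naively, that's a billion letters. " * 200
compressed = zlib.compress(sample, 9)
ratio = len(sample) / len(compressed)

print(f"naive capacity: {naive_chars:,} characters")
print(f"toy compression ratio: {ratio:.1f}x")
print(f"compressed capacity: roughly {int(naive_chars * ratio):,} characters")
```

The exact ratio depends entirely on the text, but the direction is clear: compression only ever increases how much you can fit.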
If you have infinite time, you can go the monkeys-on-typewriters route - one of them will come up with something decent, unless an egregore gets them, or something. Though that's very unlikely to be needed - assuming that alignment is solvable by a human-level intelligence (this assumption is doing a lot of work), then it should eventually be solved.
This seems to be mixing 2 topics. Existing programs are more or less a set of steps to execute. A glorified recipe. The set of steps can be very complicated, and have conditionals etc., but you can sort of view them that way. Like a car rolling down a hill, it follows specific rules. An AI is (would be?) fundamentally different in that it's working out what steps to follow in order to achieve its goal, rather than working towards its goal by following prepared steps. So continuing the car analogy, it's like a car driving uphill, where it's working to forge a path against gravity.
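The recipe-vs-goal distinction can be sketched in code (my own toy illustration, not from the comment - the action set and numbers are made up): a "recipe" program executes fixed steps, while a goal-directed agent searches for whatever steps reach its goal.

```python
from collections import deque

# Recipe: the steps are baked in, like a car rolling downhill.
def recipe(x):
    x = x + 3
    x = x * 2
    return x

# Goal-directed: given primitive actions, search (breadth-first, so
# shortest plan first) for a sequence of steps that reaches the goal.
ACTIONS = {"+3": lambda x: x + 3, "*2": lambda x: x * 2, "-1": lambda x: x - 1}

def plan(start, goal, max_depth=10):
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if state == goal:
            return steps  # the agent worked out its own "recipe"
        if len(steps) < max_depth:
            for name, act in ACTIONS.items():
                nxt = act(state)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

print(recipe(2))    # always the same fixed steps: (2 + 3) * 2 = 10
print(plan(2, 10))  # steps discovered, not prescribed: ['+3', '*2']
```

The recipe can only ever do the one thing it encodes; the planner produces different step sequences for different goals, which is the "forging a path uphill" part of the analogy.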
An AI doesn't have to be a utility maximiser. If it has a single coherent utility function (pretty much a goal), then it will probably be a utility maximiser. But that's by no means the only way of making them. LLMs don't seem to be utility maximisers.
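For what "utility maximiser" means here, a minimal sketch (entirely my own illustration - the utility numbers are hypothetical): an agent with a single coherent utility function just picks the action that scores highest on it.

```python
# A utility maximiser reduces to argmax over actions.
def utility_maximiser(actions, utility):
    return max(actions, key=utility)

# Hypothetical utilities for a paperclip-flavoured agent.
payoffs = {"make_paperclips": 100, "write_poetry": 5, "do_nothing": 0}
choice = utility_maximiser(payoffs, lambda a: payoffs[a])
print(choice)  # make_paperclips
```

An LLM predicting the next token doesn't obviously fit this shape - there's no single explicit function it's choosing actions to maximise at runtime, which is the point of the last sentence above.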
worker bees are infertile
Only for social bees, like honey bees or bumblebees. Over 90% of bee species are solitary, and most certainly fertile (as they must be to have any chance of evolutionary success). Which I suppose only serves to support your point even more...
More that you get as many people in general to read the sequences, which will change their thinking so they make fewer mistakes, which in turn will make more people aware both of the real risks underlying superintelligence and of the plausibility and utility of AI. I wasn't around then, so this is just my after-the-fact interpretation of what I read, but I get the impression that people were a lot less doomish then. There was a hope that alignment was totally solvable.
The focus didn't seem to be on getting people into alignment, as much as it generally being better for people to think better. AI isn't pushed as something everyone should work on - it's presented as what EY knows, and as something worth investigating. There are various places where it's said that everyone could use more rationality, that it's an instrumental goal like earning more money. There's an idea of creating Rationality Dojos, as places to learn rationality like people learn martial arts. I believe that's the source of CFAR.
It's not that the one and only goal of the rationalist community was to stop an unfriendly AGI. It's just that that is its obvious result. It's a matter of taking the idea seriously, then shutting up and multiplying - assuming that AI risk is a real issue, it's pretty obvious that it's the most pressing problem facing humanity, which means that if you can actually help, you should step up.
Business/economic/social incentives can work, no doubt about that. The issue is that they only work as long as they're applied. Actually caring about an issue (as in really caring, like oppressed-Christian level, not performative cultural-Christian level) is a lot more lasting, in that if the incentives disappear, people will keep on doing what you want. Convincing is a lot harder, though, which I'm guessing is your point? I agree that convincing is less effective numerically speaking, but it seems a lot more good (in a moral sense), which also seems important. Though this is admittedly a lot more of an aesthetics thing...
I most certainly recommend reading the sequences, but by no means meant to imply that you must. Just that stopping an unfriendly AGI (or rather the desirability of creating a Friendly AI) permeates the sequences. I don't recall if it's stated explicitly, but it's obvious that they're pushing you in that direction. I believe Scott Alexander described the sequences as being totally mind-blowing the first time he read them, but totally obvious on rereading them - I don't know which would be your reaction. You can try the highlights rather than the whole thing, which should be a lot quicker.