Trying to stay focused on things I've already said, I guess it would just mean adopting any sort of security posture toward dual-use concepts, particularly with regard to the attack of entirely surreptitiously replacing the semantics of an existing set of formalisms to produce a bad outcome, and also de-emphasizing cultural norms favoring coordination to focus more on safety. It's really just like the lemon market for cars -> captured mechanics -> captured mechanic certification thing; there just needs to be thinking about how things can escalate. Obviously increased openness could still plausibly be a solution to this in some respects, and increased closedness a detriment. My thinking is just that, at some point, AI will have the capacity to drown out all other sources of information, and if your defense against this is "I've read the Sequences", that's not sufficient, because the AI has read the Sequences too. So you need to think ahead to "what could AI, either autonomously or in conjunction with human bad actors, do to directly capture my own epistemic formalisms, overload them with alternate meaning, then deprecate the existing meaning?" And you can actually keep going in paranoia from here, because obviously there are also examples of doing this literal thing that are good, like all of science for example, and therefore there are not just people who will subvert credulity here but people who will subvert paranoia.
I guess the ultra-concise warning would be "please perpetually make sure you understand how scientific epistemics fundamentally versus conditionally differ from dark epistemics, so that your heuristics don't end up having you do Aztec blood magic in the name of induction".
In another post made since this comment, someone did make specific claims and intermingle them with analysis, and it was pointed out that this can also reduce clarity, due to the heterogeneous background assumptions of different readers. I think the project of rendering language itself unexploitable is probably going to be more complicated than I can usefully contribute to. It might not even be solvable at the level I'm focused on; I might literally be making the same mistake.
There's some hypothetical version of white pride that matches this description, but getting from literally anywhere in history, including now, to there would be a heroic process. I mean, yeah, there is something charming about Rockwell dialoguing with Malcolm X. But remember that in the picture, they were wearing the uniform of a regime that butchered over 11 million captive civilians and killed probably as many civilians elsewhere through war. That wasn't just an aesthetic choice. It reflected, at the most charitable, the conviction that such actions were within the realm of permissible strategies. And even if you're willing to devil's-advocate that, which, sure, why not, we're in hell, why rule anything out a priori, it almost as certainly reflected the conviction that such actions were permissible as a response to the conditions of Weimar Germany, which is just not true, and a conviction immediately worthy of violence.
This is also just begging the question about the fitness justification of white nationalism. In an American context it's pretty explicitly a coalitional strategy between different white races, mostly adopted by the races who, under late 19th- or early 20th-century racial conceptions, would have been considered most marginally white. It is just as plausible that the fitness function lies in ensuring access to, and protection from, socially dominant white races for less socially dominant white races. You could even get into some Albion's Seed-style racist evopsych and gesture at the ancestral need for such scheming in the historical borderer population under conditions of constant war between the English and the Scottish.
Ok, but, take it a step further. The AI can be chauvinist too. Isn't it strange to be more afraid of AI memeplexes about co-evolution and integration than of trying to bail out the ocean by censoring all memeplexes that don't comport with human chauvinism? It's one step from human chauvinism to AI chauvinism. They just are isomorphic. You can't emote enough about the human special sauce to make this not true. And you can't prevent an AI from noticing. This just seems like a really bad plan.
Screencapping this reply so I can read it every day to try to be less insane.
Thank you, I can't find anything to complain about in this response. I am even less sympathetic to the anti-TESCREAL crowd, for the record; I just also don't consider them dangerous. LessWrong seems dangerous, even if sympathetic, and even if there's very limited evidence of maliciousness. Effective Altruism seems directionally correct in most respects, except maybe at the conjunction of dogmatic utilitarianism and extreme longtermism, which I understand to be only a factional perspective within EA. If they keep moving in their overall direction, that is straightforwardly good. If it coalesces at a movement level into a doctrinal set of practices, that is bad, even if it gains them scale and coordination. I think Scott Alexander (not a huge fan, but whatever) once said that the difference between a rational expert and a political expert is that one could be replaced by a rock with a directive on it saying to do whatever actions reflect the highest unconditioned probability of success. I'm somewhere between this anxiety, the anxiety that hostile epistemic processes exist that actively exploit dead players, and the anxiety that LessWrong in particular is on track to, at best, multiply the magnitude of the existing distribution of happinesses and woes by a very large number and then fix them in place forever, or, at worst, arm the enemies of every general concept of moral principle with the means to permanently usurp it (leading to permanent misery or the end of consciousness).
I know you have a lot of political critics who do not really engage directly with ideas. I have tried, to an extent that I am not even sure is defensible, to always engage directly with ideas. And my perspectives can probably be found as minority perspectives among respected LessWrong members, but each individual one is already an extreme minority perspective, so even the conjunction of three of them probably doesn't already exist in anyone else. But if I could decelerate anything, it would be LessWrong right now. It's the only group of people who would actually, consensually do this, and I have presented a rough case for the esoteric arguments for doing so. It's the only place where the desired behavior actually has real positive expectation. With everything else you just have to hope it's like the Nazi atomic bomb project at this point, and that their bad philosophical commitments and opposition to "Jewish Science" also destroy their practical capacity. You cannot talk Heisenberg in 1943 into not being dangerous. If you really want him around academically and in friendly institutions after the war, that's fine, honestly; the scale of the issues is such that caring about that just sort of can't be risked, but in the immediate moment that can't be understood as a sane relationship.
Here's a single thought as I mull this over: a perfected but maximally abstracted algorithm for science is inherently dual-use. If you are committed to actually doing science, then it's taken for granted that you are applying it to common structures in reality, which are initially intuited through broadly shared phenomenological patterns in the minds of observers. But to the extent the algorithm produces an equation, it can be worked from either end to create a solution for the other end. So knowing exactly how scientific epistemics work, in principle, offers a path to engineering phenomenological patterns to suggest incorrect ontology. This is what all stage magic already is for vulgar epistemology (and con artistry and "dark epistemics" generally); this is already something people know how to do and profit from knowing how to do. It is, in a sense, something that has already always been happening. There is a Red Queen race between scientific epistemics and dark epistemics, and in this context, LessWrong seems to be trying to build some sort of zero-trust version of scientific epistemics, but without following any of the principles necessary for that. Many of the forum's practices are extremely rooted in cultural norms around trust, about professionalism and politeness and obfuscating things when they reduce coordination rather than when they reflect danger. This is a path to coordination and not to truth. Coordination overwhelmingly incentivizes specific forms of lossy compression that are maximally emotionally agreeable to whoever you want to coordinate with, and lossy compression is anti-scientific. Someone who had both a perfected scientific algorithm and a map derived from it of which lossy compressions maximize coordination would basically become a singleton in direct proportion to their ability to output information over the relevant surface area.
This is the central thing. There are also a bunch of minor annoying things that at times engender perhaps disproportionate suspicion. And there's the fact that I keep going to you guys' parties and seeing Hanania, Yarvin, and other entirely unambiguous assets of billionaires, intelligence agencies, legacy patronage networks, and so forth, many of whom have written tens of thousands of words publicly saying things like "empathy is cancer", quoting Italian futurists, Carl Schmitt, and Heidegger (and not even the academically sanitized version of Heidegger), talking about how sovereignty is the only real human principle and rights and liberty are parasitic luxury beliefs, all while continuously, endlessly lying, at scale, pouring millions of dollars into social mechanisms for propagating lies, and simultaneously building trillions of dollars' worth of machines for outputting information. And these guys are all around you, happily trafficking in all the same happy speech about science and truth and so forth, reading your writings, and, again, building trillions of dollars' worth of machines to output information. And there are plausible developments wherein these machines become intelligent and autonomous and literally tile the entire accessible universe with whatever hodgepodge of directives they happen to have at one moment, which is fast approaching, in this specific aforementioned context. But it's rude to be specific and concrete about your application of formalism, because what we need is more coordination, especially with the people with committed anti-scientific principles, because they have figured out the equivalent of the epistemic ritual of blood sacrifice (which we are opposed to, in theory), so let's just keep doing formalism.
What I mean by zero trust is that you're counting on the epistemic system itself to save you from abuse of epistemics. This is impossible, because by itself it's just syntax. The correlation between symbol and object is what matters, and that remains a weak spot regardless of how many formal structures you can validate as mapping to however many abstract patterns. There is a reason that history degrades to the verisimilitude of story.
If anyone cares, I will review the Sequences and whatever LessWrong posts are in my email inbox history starting tomorrow, but this will again just be a "list of things that annoyed me into making inferences about things almost never captured by the literal text", and I'm not sure how valuable that is to anyone.
Give me a week to review both the material my impression is about and whether being explicit is more likely to be helpful or harmful. This is obviously itself a paranoid-style outburst, and the act of becoming legible would be both a statement about its utility (probably, paradoxically, a rejection of it) and a possible iteration in an adversarial game.
The entirety of LessWrong is functionally an if-by-whiskey designed to avoid ever making a concrete claim or commitment while maximally developing valid logic with general application. Let's just leave it lying around. Whoever wins the hard-power games can tell us what it meant later! What are we actually talking about ahead of time? Who knows! For some reason this is maximally trustworthy to very rational people, and something like the logic of Teller, or Schelling, is nonsense. If you were to infer intent from the actual consequences of LessWrong posts on conflict, treating those consequences as if they were produced deliberately, it would be something like "we assume an asymmetrical position of unassailable strength, and expect to prosecute all further conflict as a mix of war-on-terror tactics and COINTELPRO tactics, and our bluster about existential risk is about concealing our unilateral development and deployment of technologies with this capacity". This logic is wrong, and that is why, when there is a conflict, billions will die.
I think the process reliabilism argument rules out friction reduction as a fully general explanation, but it doesn't rule out friction reduction in specific cases where reducing friction had survival and reproductive utility equal to or greater than understanding the world. So total paranoia and abandonment of rational epistemics is unjustified, but also, there may be needles hiding in haystacks that evolution itself both decided were infohazards and converted into ostensibly intensely realist but objectively anti-realist political positions. This is my updated position after thinking about this comment a lot. It is still a very bad position to be in. I am still too convinced the phenomenon is real, but also, the number of things that have convinced me is, like, four. It was premature to convert that into a totalizing worldview.