Alephwyr

Thank you for the response. This is one of maybe two or three things I've read from you, so the exculpatory context, even though it was trivially available, and even though its presence would have been equally reasonable to infer from the absence of specific information that would have addressed my concerns, was not part of the context in which I made my post.
It would take much longer to go point by point in response to your response than to focus mostly on going back and doing a mixture of amending and clarifying my own post. Please don't interpret this as a motte and bailey; I will be doing some updating as... (read 458 more words →)
Responding to just the tl;dr, but will try to read the whole thing, apologies as usual for, well...
If your fixation remains solely on architecture, and you don't consider the fact that morality-shaped-stuff keeps evolving in mammals because the environment selects for it in some way, you are just setting yourself up for future problems when the superintelligent AI develops or cheats its way to whatever form of compartmentalization or metacognition lets it do the allegedly pure rational thing of murdering all other forms of intelligence. I literally don't know if you already addressed this because I haven't read the rest of the article yet, but the reason moralism is robust in mammals... (read 520 more words →)
For comparison, Pokemon Red in Twitch Plays Pokemon, which was basically just decision-making implemented as a race condition between thousands to tens of thousands of different humans at every decision step, took 16 days, 7 hours, 50 minutes, 19 seconds.
I think the process reliabilism argument rules out friction reduction as a fully general explanation, but doesn't rule it out in specific cases where reducing friction had equal or greater survival and reproductive utility than understanding the world did. So total paranoia and abandonment of rational epistemics is unjustified, but also, there may be needles hiding in haystacks that evolution itself both decided were infohazards and converted into ostensibly intensely realist but objectively anti-realist political positions. This is my updated position after thinking about this comment a lot. It is still a very bad position to be in. I am still too convinced the phenomenon is real, but also, the number of things which have convinced me is like, four. It was premature to convert that into a totalizing worldview.
There's some hypothetical version of white pride that matches this description, but getting from literally anywhere in history, including now, to there would be a heroic process. I mean yeah, there is something charming about Rockwell dialoguing with Malcolm X. But remember that in the picture, they were wearing the uniform of a regime that butchered over 11 million captive civilians and killed probably as many civilians elsewhere through war. That wasn't just an aesthetic choice. That reflected, at the most charitable, the conviction that such actions were within the realm of permissible strategies. And even if you're willing to devil's advocate that, which sure, why not, we're in hell, why rule anything out a priori, it almost as certainly reflected the conviction that such actions were permissible as a response to the conditions of Weimar Germany, which is just not true, and a conviction immediately worthy of violence.
This is also just begging the question about the fitness justification of white nationalism. In an American context it's pretty explicitly a coalitional strategy between different white ethnicities, mostly adopted by those who, under late 19th or early 20th century racial conceptions, would have been considered most marginally white. It is just as plausible that the fitness function lies in ensuring access to, and protection from, socially dominant white groups for less socially dominant ones. You could even get into some Albion's Seed style racist evopsych and gesture at the ancestral need for such scheming in the historical borderer population under conditions of constant war between the English and Scottish.
Ok, but, take it a step further. The AI can be chauvinist too. Isn't it strange to be more afraid of AI memeplexes about co-evolution and integration than about trying to bail out the ocean by censoring all memeplexes that don't comport with human chauvinism? It's one step from human chauvinism to AI chauvinism. They just are isomorphic. You can't emote enough about the human special sauce to make this not true. And you can't prevent an AI from noticing. This just seems like a really bad plan.
Screencapping this reply so I can read it every day to try to be less insane.
Thank you, I can't find anything to complain about in this response. I am even less sympathetic to the anti-TESCREAL crowd, for the record, I just also don't consider them dangerous. LessWrong seems dangerous, even if sympathetic, and even if there's very limited evidence of maliciousness. Effective Altruism seems directionally correct in most respects, except maybe at the conjunction of dogmatic utilitarianism and extreme longtermism, which I understand to be only a factional perspective within EA. If they keep moving in their overall direction, that is straightforwardly good. If it instead coalesces at a movement level into a doctrinal set of practices, that is bad, even if it gains them scale and... (read more)
I think non-redundant efforts of any kind are good, just because in a situation with so many unknowns, coverage is both easier and more valuable than brittle depth. Whatever you're doing is probably the right thing. Also, be happy that your first, most deeply instinctive response involved seeing the value of the world rather than rejecting it.