[Originally regarding Said Achmiz and myself ca. 2023]
I feel like you both favor a more aggressive flavor of discourse than I tend to like.
The aggressiveness is, I think, a symptom of the underlying trait, which is being disagreeable about accepting people's frames as valid.
Most people, when given a weird framing of a situation which feels vaguely off but comes from someone who seems well-intentioned and cooperative, will go along with it and argue within that frame rather than contest it.
But this is very exploitable, and you don't have to be consciously trying to exploit it to do so. And so people who do this a lot (e.g. Duncan, but also numerous other people I respect more, including Eliezer, except when he's explicitly being careful about it, which he usually is) can warp the whole field of discourse around them.
Obviously most people who are disagreeable about this are disagreeable in general, and therefore usually aggressive about arguments and discourse. This isn't necessary in principle, but if anyone knows how to teach the one without the other, I've never met them.
I heard someone (who I respect) explain why they don't post on LessWrong more. They said that when they talk about their thoughts and ideas with their friends, the friends won't question their basic frame or sanity, and so won't undermine their trust in themself. On LessWrong, that kind of questioning is acceptable, which makes for a more uncomfortable experience.
Ever since then I've tried a bit harder to make sure to question my friends' basic frames and sanity, so that I'm not encouraging self-blinding in the people around me, and so they will know that they're welcome to do the same to me.
Can you give an example of what you mean by aggressive discourse? Because I think I'm bringing the baggage of assuming it refers to tone: sarcasm, mocking the interlocutor, name-calling, ad hominem arguments, and so on.
Said had a habit of responding to posts that were reasonably well-reasoned but sparsely justified with one-word comments like "Sources?", or very short comments along the lines of "X point is insufficiently justified.", without any throat-clearing or praise, which is a good example of it done well.
In general, "Your premises are treated as obvious when they are actually bizarre, and your argument is therefore irrelevant." is maybe the central example of when this is both highly confrontational but also highly necessary.
Knowing that framing is a thing makes it safe, as you can work with thoughts about the same content presented in multiple alternative framings separately, there is no tradeoff (the danger is when you are unaware that framing can have a huge influence on reasoning, and needs to be given proper attention). Refusing to entertain a framing is then not very different from refusing to consider an idea, and similarly with insisting on a particular framing, or insisting that you consider a particular idea. So treatment of boundaries in discourse seems more of a crux than framing vs. content.
This got deleted from 'The Dictatorship Problem', which is catastrophically anxiety-brained, so here's the comment:
This is based in anxiety, not logic or facts. It's an extraordinarily weak argument.
There's no evidence presented here which suggests rich Western countries are backsliding. Even the examples in Germany don't have anything worse than what the US GOP produced ca. 2010. (And Germany is, due to their heavy censorship, worse at resisting fascist ideology than anyone with free speech, because you can't actually have those arguments in public.) If you want to present this case, take all those statistics and do economic breakdowns, e.g. by deciles of per-capita GDP. I expect you'll find that, for example, the Freedom House numbers show a substantial drop in 'Free' in the 40%-70% range and essentially no drop in the 80%-100% range.
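As a sketch of the breakdown I have in mind (hypothetical file and column names; assumes a country-year table of Freedom House statuses joined with per-capita GDP):

```python
import pandas as pd

# Hypothetical input: one row per country-year, with a Freedom House
# status ('Free' / 'Partly Free' / 'Not Free') and per-capita GDP.
df = pd.read_csv("freedom_house_by_country_year.csv")  # assumed file

# Bucket countries into per-capita-GDP deciles within each year, so
# each country is compared against its own year's cohort.
df["gdp_decile"] = df.groupby("year")["gdp_per_capita"].transform(
    lambda s: pd.qcut(s, 10, labels=False)
)

# Share of countries rated 'Free' in each decile, year by year.
share_free = (
    df.assign(is_free=df["status"].eq("Free"))
      .groupby(["gdp_decile", "year"])["is_free"]
      .mean()
      .unstack("year")
)
print(share_free)  # prediction: top deciles flat, middle deciles falling
```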
Of the seven points given for the US, all are a mix of maximally-anxious interpretation and facts presented misleadingly. These are all arguments where the bottom line ("Be Afraid") has been written first; none of this is reasonable unbiased inference.
The case that mild fascism could be pretty bad is basically valid, I guess, but without an actual reason to believe that outcome is likely, it's irrelevant, so it's mostly just misleading to dwell on it.
Going back to the US points, because this is where the underlying anxiety prior is most visible:
1. because of Biden's unpopularity, if the election were held again tomorrow, Biden would most likely lose. Biden won the tipping-point state, Wisconsin, by only half a percent in 2020, and both polls and favorability ratings show he has lost popularity since then;
Interpretation, not fact. We're still in early enough stages that the reality of Biden is being compared to an idealized version of Trump - the race isn't in full swing yet and won't be for a while. Check back in October when we see how the primary is shaping up and people are starting to pay attention.
2. the House, Senate, and Electoral College all have biased maps that will let Republicans win a governing trifecta, even with a minority of the popular vote;
This has been true for a while. Also, in assessing the consequences, it's assuming that Trump will win, which is correlated but far from guaranteed.
3. two-thirds of Republican congressmen voted to overturn the election immediately after January 6th, and most of Trump's primary opponents strongly support his actions, so even if Trump has a heart attack tomorrow, many in the party would still be hostile to democracy;
Premise is a fact, conclusion is interpretation, and not at all a reliable one. Trumpism isn't HYDRA - if the popular populist figurehead is cut off, there is no reason to believe another will take his place. Succession usually doesn't work for movements built around a single personality.
4. there have been waves of Republican retirements in the House and Senate during 2018 and 2022, so that many of the Trump-skeptical congressmen in office during his first term have been replaced by far-right radicals and Trump loyalists;
I guess this one is basically true.
5. most of the "adults in the room" during Trump's first term were fired or resigned, and Trump plans to fill their roles with new staff, loyal to his own vision;
Probably true, and therefore it is unlikely he will be able to achieve much of anything.
6. if elected to a second term, Trump has said he will use "Schedule F" to purge the non-partisan professional civil service, law enforcement, and the American military, and replace them with Trumpists who won't resist attempts to end democracy;
Ditto, only stronger. And that assumes he carries out this promise and that he's successful in that, neither of which is terribly likely.
7. if re-elected, Trump plans to withdraw the US from NATO and end the post-WWII policy of an American "nuclear umbrella", which will likely trigger a Chinese invasion of a now-defenseless Taiwan; global nuclear proliferation; and general, worldwide instability not seen since 1945.
Premise is almost a fact, conclusion is wild interpretation. Trump has said he plans to do that. Will he do that? Possible. Will it trigger an invasion of Taiwan if he does? Possible. Will it trigger nuclear proliferation if he does? Sure, probably, but I'm not too concerned, it won't move fast enough to catch up to AI. Will it trigger worldwide instability? Not fucking likely. (Also, really, The Daily Beast? You couldn't find a source more credible or less biased than that?)
Or, in short:
But what is less well-known is that:
False things are rarely well-known.
And Germany is, due to their heavy censorship, worse at resisting fascist ideology than anyone with free speech, because you can't actually have those arguments in public.
The number of things you can't argue in Germany is tiny. You can't argue that there was no Holocaust, but that's not central to any ideological debate. Censorship is not preventing ideological debates in Germany.
Censorship always prevents debates. The number of things which are explicitly banned from discussion may technically be small, but the chilling effect is huge. And the fact that ideas and symbols are banned is - correctly! - taken as evidence that they can't be beaten by argument, that people are afraid of the ideas. Also, naturally, the opposite side never has to practice their arguments, so they look like weak debaters because they are.
Tried to leave this as a review comment, which is blocked:
Even with the benefit of hindsight proving that Trump could and would get reelected, this still looks just as badly-constructed as it did at the time. This was an argument based in fear and rationalization, not a clear-eyed prediction of the future. The bottom line was written first.
The 'new user' flag being applied to old users with low karma is condescending as fuck.
I'm not a new user. I'm an old user who has spent most of my recent time on LW telling people things they don't want to hear.
Well, most of the time I've actually spent posting weekly meetups, but other than that.
Possible new pandemic? China's concealing evidence again. The smart money looks to be against 'new virus', instead favoring drug-resistant pneumonia, specifically pneumonia resistant to the drugs that are safe for small children.
https://foreignpolicy.com/2023/11/28/chinese-hospitals-pandemic-outbreak-pneumonia/
There are a number of Manifold markets under https://manifold.markets/browse?topic=pandemic; most look to be trading around a 10% chance of anything happening outside China.
https://philpapers.org/rec/ARVIAA
This paper uses famous problems from philosophy of science and philosophical psychology—underdetermination of theory by evidence, Nelson Goodman’s new riddle of induction, theory-ladenness of observation, and “Kripkenstein’s” rule-following paradox—to show that it is empirically impossible to reliably interpret which functions a large language model (LLM) AI has learned, and thus, that reliably aligning LLM behavior with human values is provably impossible.
So, this seems provisionally to be bullshit because it doesn't admit of thinking probabilistically or simplicity priors. But I'm not totally sure it's worthless. Anyone read it in detail?
This paragraph also misses the possibility of constructing an LLM and/or training methodology such that it will learn certain functions, or can't learn certain functions. There is also a conflation of "reliable" with "provable" on top of that.
Perhaps there is some provision made elsewhere in the text that addresses these objections. Nonetheless, I am not going to search; the abstract smells enough like bullshit that I went and did something else instead.
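To make the probabilistic objection concrete, here's a toy sketch (my construction, nothing from the paper; the complexity values are hand-assigned stand-ins for description length): underdetermination leaves many hypotheses consistent with the data, but a simplicity prior still concentrates the posterior, so interpretation can be reliable without being provable.

```python
# Toy version of the "new riddle of induction": the data underdetermine
# the rule, but a simplicity prior yields a near-certain posterior.
data = [(x, x) for x in range(1, 6)]  # observed: f(x) = x for x = 1..5

hypotheses = {
    # name: (predictor, rough description length in bits -- hand-assigned)
    "identity": (lambda x: x, 10),
    "grue-ish": (lambda x: x if x <= 5 else 2 * x, 40),
    "anti-grue": (lambda x: x if x <= 5 else -x, 40),
}

# Prior ~ 2^-description_length; likelihood is 1 if the hypothesis is
# consistent with the observations, else 0.
posterior = {}
for name, (f, bits) in hypotheses.items():
    consistent = all(f(x) == y for x, y in data)
    posterior[name] = 2.0 ** -bits if consistent else 0.0

total = sum(posterior.values())
posterior = {name: p / total for name, p in posterior.items()}

print(posterior)  # 'identity' gets all but ~2e-9 of the mass
prediction = sum(p * hypotheses[name][0](6) for name, p in posterior.items())
print(prediction)  # ~6.0: no certainty, but reliability doesn't need it
```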
Assume that digital minds will be most of the minds the future holds. Won't this overwhelmingly be after whatever capability escalation passes for "the Singularity", and therefore be addressed at 99.9% efficiency by delaying consideration of the problem until that capability exists and makes it vastly easier?
As usual after Solstice, I had an urge to write about Solstice, in this case a speech I may someday give.
Editing Essays into Solstice Speeches: Standing offer: if you have a speech to give at Solstice or another rationalist event, message me and I'll look at your script and/or video call you to critique your performance and help you improve it.
Is there a graph of solar efficiency (the fraction of incident light energy converted to electricity) for solar tech that's deployed at scale? https://www.nrel.gov/pv/cell-efficiency.html exists for research cells, but I'm not aware of an equivalent for industrial-scale deployment.
Paraphrasing Eddington: If your theory of morality is incompatible with factory farming, then so much the worse for factory farming. If it says not to touch the trolley problem, well, even nominally-obvious thought experiments can be wrong sometimes. But if it says to run a risk of death for all humanity for the sake of animals or minds that don't share human values, there is no hope for it: so much the worse for the theory at best, or so much the worse for morality at worst.