Worried that typical commenters at LW care way less than I expected about good epistemic practice. Hoping I'm wrong.
Software developer and EA with interests including programming language design, international auxiliary languages, rationalism, climate science and the psychology of its denial.
Looking for someone similar to myself to be my new best friend:
❖ Close friendship, preferably sharing a house
❖ Rationalist-appreciating epistemology; a love of accuracy and precision to the extent it is useful or important (but not excessively pedantic)
❖ Geeky, curious, and interested in improving the world
❖ Liberal/humanist values, such as a dislike of extreme inequality based on minor or irrelevant differences in starting points, and a like for ideas that may lead to solving such inequality. (OTOH, minor inequalities are certainly necessary and acceptable, and a high floor is clearly better than a low ceiling: an "equality" in which all are impoverished would be very bad)
❖ A love of freedom
❖ Utilitarian/consequentialist-leaning; preferably negative utilitarian
❖ High openness to experience: tolerance of ambiguity, low dogmatism, unconventionality, and again, intellectual curiosity
❖ I'm a nudist and would like someone who can participate at least sometimes
❖ Agnostic, atheist, or at least feeling doubts
Someone said to me "you're just repeating a lot of the talking points on the other side."
I pointed out that this was just a FGCA, so they linked to this post and said "Oh what tangled webs we weave when first we practice to list Fully General Counter Arguments. Of course that sentiment probably counts as a Fully General Counterargument: Round like a circle in a spiral, like a wheel within a wheel. Never ending or beginning on an ever spinning reel." Did I break him?
So Q=inner alignment? Seems like person 2 not only pointed to inner alignment explicitly (so it can no longer be "some implicit assumption that you might not even notice you have"), but also said that it "seems to contain almost all of the difficulty of alignment to me". He's clearly identified inner alignment as a crux, rather than as something meant "to be cynical and dismissive". At that point, it would have been prudent of person 1 to shift his focus onto inner alignment and explain why he thinks it is not hard.
Note that your post suddenly introduces "Y" without defining it. I think you meant "X".
I don't really know how GPTs work, but I read §"Only modifying certain residual stream dimensions" and had a thought. I imagined a "system 2" AGI that is separate from GPT but interwoven with it, so that all thoughts from the AGI are associated with vectors in GPT's vector space.
When the AGI wants to communicate, it inserts a "thought vector" into GPT to begin producing output. It then uses GPT to read its own output, get a new vector, and subtract it from the original vector. The difference represents (1) incomplete representation of the thought and (2) ambiguity. Could it then produce more output based somehow on the difference vector, to clarify the original thought, until the output eventually converges to a complete description of the original thought? It might help if it learns to say things like "or rather", "I mean", and "that came out wrong. I meant to say" (which are rare outputs from typical GPTs). Also, maybe an idea like this could be used to enhance summarization, e.g. by generating one sentence at a time and, for each sentence, generating 10 candidates and keeping only the one that minimizes the difference vector.
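The convergence loop I'm imagining could be sketched as a greedy decomposition of the thought vector. Everything below (`encode`, `refine`, the candidate pool, the tiny dimensionality) is a hypothetical toy standing in for the real GPT machinery, just to make the subtract-and-clarify idea concrete:

```python
import hashlib

import numpy as np

DIM = 16  # toy dimensionality for the shared "thought" vector space


def encode(sentence: str) -> np.ndarray:
    """Hypothetical stand-in for reading text back through the model:
    deterministically maps a sentence to a vector. In the actual proposal
    this would be GPT's own representation of the output text."""
    seed = int.from_bytes(hashlib.sha256(sentence.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(DIM)


def refine(thought: np.ndarray, pool: list[str], steps: int = 3):
    """Greedily emit the sentence whose encoding best covers what is still
    "unsaid" (the difference vector), then subtract that sentence's projected
    contribution -- essentially matching pursuit over sentence encodings."""
    residual = thought.astype(float).copy()
    emitted = []
    for _ in range(steps):
        # Score each candidate by how much of the residual it can explain.
        best = max(pool, key=lambda s: abs(residual @ encode(s)) / np.linalg.norm(encode(s)))
        e = encode(best)
        # Remove only the component along e, so the residual norm never grows.
        residual -= ((residual @ e) / (e @ e)) * e
        emitted.append(best)
    return emitted, residual


# Toy demo: a "thought" composed of two known sentences; the loop should
# emit sentences that shrink the difference vector at every step.
pool = ["or rather", "I mean", "the sky is blue", "taxes are due soon"]
thought = encode("the sky is blue") + 0.5 * encode("taxes are due soon")
emitted, residual = refine(thought, pool)
print(emitted, float(np.linalg.norm(residual)))
```

The projection step is what guarantees the difference vector shrinks (or at worst stays the same) on every iteration, which is the convergence property the idea needs; whether real residual-stream vectors behave anywhere near this linearly is, of course, the open question.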
I would point out that Putin's goal wasn't to make Russia more prosperous, and that what Putin considers good isn't the same as what an average Russian would consider good. Like Putin's other military adventures, the Crimean annexation and heavy military support of Donbas separatists in 2014 probably had a goal like "make the Russian empire great again" (meaning "as big as possible") and from Putin's perspective the operations were a success. Especially as (if my impression is correct) the sanctions were fairly light and Russia could largely work around them.
Partly he was right, since Russia was bigger. But partly his view was a symptom of continuing epistemic errors. For example, given the way the 2022 invasion started, it looks like he didn't notice the crucial fact that his actions in 2014 had turned Ukrainians strongly against Russia.
In any case this discussion exemplifies why I want a site entirely centered on evidence. Baturinsky claims that when the Ukrainian parliament voted to remove Yanukovych from office 328 votes to 0 (about 73% of the parliament's 450 members) this was "the democratically elected government" being "deposed". Of course he doesn't mention this vote or the events leading up to it. Who "deposed the democratically elected government"? The U.S.? The tankies say it was the U.S. So who are these people, then? Puppets of the U.S.?
I shouldn't have to say this on LessWrong, but without evidence it's all just meaningless he-said-she-said. I don't see truthseeking in this thread, just arguing.
I don't know what you are referring to in the first sentence, but the idea that this is a war between the U.S. and Russia (not between Russia and Ukraine) is Russian propaganda (which doesn't perfectly guarantee it's BS, but it is BS).
In any case, this discussion exemplifies my frustration with a world in which a site like I propose does not exist. I have my sources, you have yours, they disagree on the most basic facts, and nobody is citing evidence that would prove the case one way or another. Even if we did go deep into all the evidence, it would be sitting here in a place where no one searching for information about the Ukraine war will ever see it. I find it utterly ridiculous that most people are satisfied with this status quo.
I'm saying that [true claims sound better]
The proof I gave that this is false was convincing to me, and you didn't rebut it. Here are some examples from my father:
ALL the test animals [in mRNA vaccine trials] died during Covid development.
The FDA [are] not following their own procedures.
There is not a single study that shows [masks] are of benefit.
[Studies] say the jab will result in sterility.
Vaccination usually results in the development of variants.
He loves to say things like this (he can go on and on saying such things; I assume he has it all memorized) and he believes they are true. They must sound good to him. They don't sound good to me (especially in context). How does this not contradict your view?
it feels like it's a choice whether or not I want to consider truth-seeking to be difficult.
Agreed, it is.
I don't understand why you say "should be difficult to distinguish" rather than "are difficult", why you seem to think finding the truth isn't difficult, or what you think truthseeking consists of.
For two paragraphs you reason about "what if true claims sound better". But true claims don't inherently "sound better", so I don't understand why you're talking about it. How good a claim "sounds" varies from person to person, which implies "true claims sound better" is false: the same claim often sounds good to one person and bad to another, while its truth doesn't depend on either of them. Moreover, the same facts can be phrased in a way that "sounds good" or "sounds bad".
I didn't say "false things monetize better than true things". I would say that technically correct and broadly fair debunkings (or technically correct and broadly fair publications devoted to countering false narratives) don't monetize well, certainly not to the tune of millions of dollars annually for a single pundit. Provide counterexamples if you have them.
people are inherently hardwired to find false things more palatable
I didn't say or believe this either. For such a thing to even be possible, people would have to easily distinguish true and false (which I deny) to determine whether a proposition is "palatable".
The dichotomy between good-seeming / bad-seeming and true / false.
I don't know what you mean. Consider rephrasing this in the form of a sentence.
I think that the people who are truthseeking well do converge in their views on Ukraine. Around me I see tribal loyalty to Kremlin propaganda, to Ukrainian/NAFO propaganda, to anti-Americanism (enter Noam Chomsky) and/or to America First. Ironically, anti-American and America First people end up believing similar things, because they both give credence to Kremlin propaganda that fits into their respective worldviews. But I certainly have a sense of convergence among high-rung observers who follow the war closely and have "average" (or better yet scope-sensitive/linear) morality. Convergence seems limited by the factors I mentioned though (fog of war, poor rigor in primary/secondary sources). P.S. A key thing about Chomsky is that his focus is all about America, and to understand the situation properly you must understand Putin and Russia (and to a lesser extent Ukraine). I recommend Vexler's video on Chomsky/Ukraine as well as this video from before the invasion. I also follow several other analysts and English-speaking Russians (plus Russian Dissent translated from Russian) who give a picture of Russia/Putin generally compatible with Vexler's.
do you think there are at least some social realities that if you magically downloaded the full spectrum of factual information into everyone's mind, people's opinions might still diverge
Yes, except I'd use the word "disagree" rather than "diverge". People have different moral intuitions, different brain structures / ways of processing info, and different initial priors that would cause disagreements. Some people want genocide, for example, and while knowing all the facts may decrease (or in many cases eliminate) that desire, it seems like there's a fundamental difference in moral intuition between people that sometimes like genocide and those of us who never do, and I don't see how knowing all the facts accurately would resolve that.
Rather, it's fine to say "that's a FGCA" if it's a FGCA, and not fine if it's not.
FGCAs derail conversations. Categorizing "that's a FGCA" as a FGCA is feeding the trolls.
If someone accuses you of making a FGCA when you didn't, you can always just explain why it's not a FGCA. Otherwise, you f**ked up. Admit your error and apologize.