I like this perspective. I would agree that there is more to knowing and being known by others than simply Aumann Agreement on empirical fact. I also probably have a tendency to expect more explicit goal-seeking from others than I do from myself.
I haven't thought this through before, but I notice two things that affect how open I am. The first is the degree to which the communication is private, carries non-verbal cues, and happens within an existing relationship. So right now, I'm not writing this with a desired consequence in mind, but I am filtering some things out subconsciously - if we were talking in person right now, I might launch into a random anecdote, but writing online I stay on a narrower path.
The second is that I generally only start running my "consequentialist program" once I anticipate that someone may be upset by what I say. The anticipation of offense is what triggers me to think either "but it still needs to be said" or "saying this won't help". So maybe my implicit question was less "why does Eliezer not aim all his communication at his goals" and more "why doesn't he seem to have the same guardrail I do about only causing offense if it will help", which is a more subjective standard.
I accept your correction that I misquoted you. I paraphrased from memory and did miss real nuance. My bad.
Looking at the comment now, I do see that it currently has a score of -43 and is the only negative-karma comment on the post. So maybe a more interesting question is why I (and presumably several others) interpreted it as an insult, when the logical content of "Intelligence(having <30y timeline in 2025) > Intelligence(potted plant)" doesn't contain any direct insult. My best guess is that people run informal inference on "do they think of me as lower status", and any comparison to a lower-intelligence entity is likely to trigger that. For instance, I actually find your suggestion just now that I could have an LLM explain an LSAT-style question to me insulting, because it implies that you assign decent probability to my intelligence being lower than LLM or LSAT level. (Of course, I rank it below "calling someone out publicly, even politely", so I still feel a vague social debt to you in this interaction.) I also anticipate that you might respond that you are justified in that assumption given that I seem not to have understood something an LLM could have, and that this would only serve to increase the perceived status threat.
The "polite about the house burning" is something I have changed my mind about recently. I initially judged some of your stronger rhetoric as unhelpful because it didn't help me personally, but have seen enough people say otherwise that I now lean toward that being the right call. The remaining confusion I have is over the instances where you take extra time to either raise your own status or lower someone else's instead of keeping discussion focused on the object level. Maybe that's simply because, like me, you sometimes just react to things. Maybe, as someone else suggested, its some sort of punishment strategy. If it is actually intentionally aimed at some goal, I'd be curious to know.
I'm sorry to hear about your health/fatigue. That's a very unfortunate turn of events, for everyone really. I think your overall contribution is quite positive, so I would certainly vote that you keep talking rather than stop! If I got a vote on the matter, I'd also vote that you leave status out of conversations and play to your strength of explaining complicated concepts in a way that is very intuitive for others. In fact, as much as I had high hopes for your research prospects, I never directly experienced any of that - the thing that has directly impressed me (and, if I'm honest, the only reason I assume you'd also be great at research) has been the way you make new insights accessible through your public writing. So, consider this my vote for more of that.
I suspect that some of my dissonance does result from an illusion of consistency and a failure to appreciate how multi-faceted people can really be. I naturally think of people as agents, not as collections of different cognitive circuits. I'm not ready to assume that this explains all of the gap between my expectations and reality, but it's probably part of it.
I think this is an important perspective, especially for understanding Eliezer, who places a high value on truth/honesty, often directly over consequentialist concerns.
While this explains true but unpleasant statements like "[Individual] has substantially decreased humanity's odds of survival", it doesn't seem to explain statements like the potted plant one or other obviously-not-literally-true statements, unless one takes the position that full honesty also requires saying all the false and irrational things that pass through one's head. (And even then, I'd expect to see an immediate follow-up of "that's not true, of course".)
I agree with this decision. You reference the comment in one of your answers. If it starts taking over, it should be removed, but otherwise it can provide interesting meta-commentary.
I think this makes sense as a model of where he is coming from. My understanding of social dynamics, though, is that as a strategy, "I told you so" makes it harder, not easier, for people to change their minds and agree with you going forward.
Not an answer to the question, but I think it's worth noting that people asking for your opinion on EA may not be precise about the question they're asking. For example, it's plausible to me that someone could ask "has EA been helpful" when what they actually want to know is something like "would a donation to EA now be +EV", without being conscious of the potential difference between the two questions.
I agree that we'll make new puzzles that will be more rewarding. I don't think that suffering need be involuntary to make its elimination meaningful. If I were voluntarily parched and struggling to climb a mountain with a heavy pack (something I would plausibly reject ASI help with), I would nevertheless feel appreciation if some passerby offered me a drink or offered to lighten my load. Given a guarantee of safety from permanent harm, I think I'd plausibly volunteer to play a role in some game that involved some degree of suffering that could be alleviated.
there are also donation opportunities for influencing AI policy to advance AI safety which we think are substantially more effective than even the best 501c3 donation opportunities
Would you be willing to list these (or DM me if there's a reason not to list them publicly)?
I do think that this is probably part of my misprediction - that I simply idealize others too much and don't give enough credit to how inconsistent humans actually are. "Idealize" is probably just the Good version of "flatten", with "demonize" being the Bad version; both probably happen because it takes fewer neurons to model someone else that way.
I actually just recently had the displeasure of stumbling upon that subreddit, and it made me sad that people wanted to devote their energies to being unkind without a goal. So I'm probably also not modeling how my own principle of avoiding offense unless it helps would erode over time. I've seen it happen to many public figures on Twitter - it seems to be part of the system.