philh

Endorsed. I wildly guess that in practice "counterparty might do better with the money than me" will rarely be a big consideration; but I could see "transaction costs plus externalities plus harm to counterparty, together burn more value than my charitable donations create" being a thing, especially if you're doing low-margin, high-volume trading.

I think this relies on "Val is not successfully communicating with the reader" being for reasons analogous to "Val is speaking English which the store clerk doesn't, or only speaks it poorly". But I suspect that if we unpacked what's going on, I wouldn't think that analogy held, and I would still think that what you're doing seems bad.

(Also, I want to flag that "justify that we’re helping the clerk deepen their skill with interfacing with the modern world" doesn't pattern match to anything I said. It hints at pattern matching with me saying something like "part of why we should speak with epistemic rigor is to help people hear things with epistemic rigor", but I didn't say that. You didn't say that I did, and maybe the hint wasn't intentional on your part, but I wanted to flag it anyway.)

Yes, endorsed. That should probably be mentioned explicitly. (Edit: added to the post.)

(Technically neither of the technical definitions I gave applies here. And this is a case where you can't maximize every percentile simultaneously - maximizing your 11th percentile returns means betting nothing, and maximizing your 10th percentile means betting everything. But yes, for a single bet, maximizing "probability of ending up richer than I would have, if I had bet a different amount but the result was the same" is probably the natural way to extend the concept to cases like this, and it means betting nothing in this case.)
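A minimal numerical sketch of that trade-off, with hypothetical parameters since the original bet isn't quoted here: for a single even-money bet with a 10% chance of losing, every percentile on the loss side of the distribution is maximized by betting nothing, and every percentile on the win side by betting everything, so no single bet size maximizes them all. (Which numbered percentile sits exactly on the boundary depends on your quantile convention.)

```python
# Sketch with assumed parameters (not taken from the original post):
# a single even-money bet that wins with probability 0.9 and loses the
# stake with probability 0.1. Betting a fraction f of a bankroll of 1
# leaves wealth 1 - f on a loss and 1 + f on a win.
P_LOSS = 0.10

def wealth_percentile(f: float, q: float) -> float:
    """q-th percentile of final wealth, lower-quantile convention:
    the bottom P_LOSS of the distribution is the loss outcome."""
    return 1 - f if q <= P_LOSS else 1 + f

fractions = [i / 100 for i in range(101)]  # candidate bet fractions 0.00..1.00
for q in (0.05, 0.10, 0.11, 0.50, 0.95):
    best = max(fractions, key=lambda f: wealth_percentile(f, q))
    print(f"q = {q:.2f}: maximized by betting f = {best:.2f}")

# Low percentiles are maximized by f = 0 (bet nothing); percentiles above
# the loss probability by f = 1 (bet everything). No fraction wins at both.
```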

"My experience is that folk who need support out of tough spots like this have a harder time hearing the deeper message when it’s delivered in carefully caveated epistemically rigorous language."

I kinda feel like my reaction to this is similar to your reaction to frames:

"I refuse to comply with efforts to pave the world in leather. I advocate people learn to wear shoes instead. (Metaphorically speaking.)"

To be more explicit, I feel like... sure, I can believe that sometimes epistemic rigor pushes people into thinky-mode and sometimes that's bad; but epistemic rigor is good anyway. I would much prefer for people to get better at handling things said with epistemic rigor, than for epistemic rigor to get thrown aside.

And maybe that's not realistic everywhere, but even then I feel like there should be spaces where we go to be epistemically rigorous even if there are people for whom less rigor would sometimes be better. And I feel like LessWrong should be such a space.

I think the thing I'm reacting to here isn't so much the lack of epistemic rigor - there are lots of things on LW that aren't rigorous and I don't think that's automatically bad. Sometimes you don't know how to be rigorous. Sometimes it would take a lot of space and it's not necessary. But strategic lack of epistemic rigor - "I want people to react like _ and they're more likely to do that if I'm not rigorous" - feels bad.

A question I have about the FTX thing: people keep saying that the LUNA crash was part of what sparked it. Is this the same Luna as the blockchain-related dating service whose whitepaper Scott reviewed?

So like, these do seem related, but... I think I feel like you think they're more closely related than I think they are? Like the kind of thing they're using as a branching-off point is different from the kind of thing my comment was.

So I'd summarize those posts as saying: "If you're going to say 'let's _', it would be nice if you went into more detail about how to _ and what exactly _ looks like."

But I'm not saying "let's _". I'm saying "we might think we can't _ because [...], but that doesn't hold because [...]. I currently think _ is possible." And now I'm similarly being asked to go into detail about how to _ and what exactly _ looks like, and...

Yeah, there's an implied "let's _" in my comment, and it's a perfectly fine question in general, but...

It feels like it's missing the point of what I said; and in this context, and the way it's been asked, it feels kind of aggressive and offputting to me.

(I would much less have this reaction, if my second comment in this thread had been my first one. The kind of thing my second comment is, feels much more the kind of thing those posts are reacting to. But I only made my second comment after being asked, and I explicitly said that it was a different question and I didn't necessarily endorse my answers.)

This feels like an isolated demand for a thing that I'm not trying to do.

Yes, obviously it would be great if I had concrete suggestions, and likely those would involve looking inside EA at the people and organizations within it and identifying specific points of intervention that could have avoided this problem, or something.

But I'm not trying to identify a solution, I'm trying to identify a problem. A thing where I think EA could have done better. I think it's ridiculous to suggest either that I can't do that without also suggesting improvements, or that I can't do it without looking inside EA.

Maybe you're not intending to suggest anything like that? But it feels to me like you are, and I find it annoying.

Noting that that's a separate question, possible answers that come to mind (which I'm not necessarily endorsing) include:

  • Not holding up Sam as an exemplar of EA, as I gather kind of happened
  • Declining to take more than $X from Sam, on the grounds that "a large amount of EA funding being dependent on someone with bad ethics seems bad"
  • Noticing that the combination "bad ethics and bad capital controls" makes fraud both easy and likely, and explicitly warning people about that. (And taking the lack-of-ethics as a reason to look into capital controls, if they didn't know about that.)

I do think "EA knows about SBF's ethics and acts exactly as they did anyway" is not a story that's flattering about EA.

I think it's worth being clear about what exactly "this" is.

My mainline story right now (admitting that I'm not fully caught up) is that prior to 2022:

  • There was a lack of capital controls, which would have made fraud and large mistakes easier;
  • There was plenty of reason to doubt SBF's ethics;
  • But there was no actual fraud.

Professional investors and EA would both have cared about the first point. But it's not clear how investors would have felt about it; I could believe anything from "this is a dealbreaker" to "this is positive on net". (Is Sam doing fraud bad in expectation for his investors? He might not get caught; and if he does, they'll lose money but probably won't take most of the flak.) Professional investors probably wouldn't have cared about the second point much, though I could see it being a mild negative or mild positive.

So, "should EA have caught the fraud"? I think that might be asking too much.

"Should EA have noticed the lack of controls and reacted to that?" Or, "should EA have noticed Sam's lack of ethics and reacted to that?" I currently think those would have been possible, and "but professional investors didn't" isn't much of a defense.
