Sniffnoy

I'm Harry Altman. I do strange sorts of math.


My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Yeah, that sounds about right to me. I'm not saying that you should assume such people are harmless or anything! Just that, like, you might want to try giving them a kick first -- "hey, constant vigilance, remember?" :P -- and see how they respond before giving up and treating them as hostile.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

This seems exactly backwards: if someone makes uncorrelated errors, they are probably unintentional mistakes. If someone makes correlated errors, they are better explained as part of a strategy.

I mean, there is a word for correlated errors, and that word is "bias"; so you seem to be essentially claiming that people are unbiased? I'm guessing that's probably not what you're trying to claim, but it is what your statement seems to imply? Regardless, I'm saying people are biased towards this mistake.

Or really, what I'm saying is that it's the same sort of phenomenon that Eliezer discusses here. So it could indeed be construed as a strategy, as you say; but it would be a strategy not on the part of the conscious agent, but rather on the part of the "corrupted hardware" itself. Or something like that -- sorry, that's not a great way of putting it, but I don't really have a better one, and I hope that conveys what I'm getting at.

Like, I think you're assuming too much awareness/agency of people. A person who makes correlated errors, and is aware of what they are doing, is executing a deliberate strategy. But lots of people who make correlated errors are just biased, or the errors are part of a built-in strategy that they execute not deliberately but by default, without thinking about it -- one that requires effort not to execute.

We should expect someone calling themself a rationalist to be better, obviously, but, IDK, sometimes things go bad?

I can imagine, after reading the sequences, continuing to have this bias in my own thoughts, but I don't see how I could have been so confused as to refer to it in conversation as a valid principle of epistemology.

I mean, people don't necessarily fully internalize everything they read, and in some people the "hold on, what am I doing?" reflex can be weak? <shrug>

I mean I certainly don't want to rule out deliberate malice like you're talking about, but neither do I think this one snippet is enough to strongly conclude it.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I don't think this follows. I do not see how degree of wrongness implies intent. Eliezer's comment rhetorically suggests intent ("trolling") as a way of highlighting how wrong the person is; he is free to correct me if I am wrong, but I am pretty sure that is not an actual suggestion of intent, only a rhetorical one.

I would say, moreover, that this is the sort of mistake that occurs, over and over, by default, with no intent necessary. I might even say that it is avoiding, not committing, this sort of mistake that requires intent. Because this sort of mistake is just what people fall into by default, and avoiding it requires active effort.

Is it contrary to everything Eliezer's ever written? Sure! But reading the entirety of the Sequences and calling yourself a "rationalist" does not in any way obviate the need to do the actual work of better group epistemology -- of noticing such mistakes (and the path to them) and correcting/avoiding them.

I think we can only infer intent like you're talking about if the person in question is, actually, y'know, thinking about what they're doing. But I think people are really, like, acting on autopilot a pretty big fraction of the time; not autopiloting takes effort, and while doing that work may be what a "rationalist" is supposed to do, it's still not the default. All I think we can infer from this is a failure to do the work to shift out of autopilot and think. Bad group epistemology via laziness, rather than via intent, strikes me as the more likely explanation.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I want to more or less second what River said. Mostly I wouldn't have bothered replying to this... but your line about "today around <30" struck me as particularly wrong.

So, first of all, as River already noted, your claim about "in loco parentis" isn't accurate. People 18 or over are legally adults; yes, there used to be a notion of "in loco parentis" applied to college students, but that hasn't been current law since about the 60s.

But also, under 30? Like, you're talking about grad students? That is not my experience at all. Undergrads are still treated as kids to a substantial extent, yes, even if they're legally adults and there's no longer any such thing as "in loco parentis". But in my experience grad students are, absolutely, treated as adults. Perhaps this varies by field (I'm in math) or location or something, I don't know, but I at least have never heard of things being otherwise.

Common knowledge about Leverage Research 1.0

I'm not involved with the Bay Area crowd, but I remember seeing things years ago about how Leverage was a scam/cult; I was surprised to learn it's still around...? I'd have expected most everyone to have deserted it after that...

Common knowledge about Leverage Research 1.0

I do worry about "ends justify the means" reasoning when evaluating whether a person or project was or wasn't "good for the world" or "worth supporting". This seems especially likely when using an effective-altruism-flavored lens that says only a few people/organizations/interventions will matter orders of magnitude more than others. If one believes that a project is one of very few projects that could possibly matter, and the future of humanity is at stake -- and also believes the project is doing something new/experimental that current civilization is inadequate for -- there is a risk of using that belief to extend unwarranted tolerance of structurally-unsound organizational decisions, including those typical of "high-demand groups" (such as use of psychological techniques to increase member loyalty, living and working in the same place, non-platonic relationships with subordinates, secrecy, and so on), without proportionate concern for the risks of structuring an organization in that way.

There is (roughly) a Sequences post for that. :P

In support of Yak Shaving

Seems to me the situation in the original yak-shaving story falls into case 2 -- the thing to do is to forget about borrowing the EZPass and just pay the toll!

Founding a rationalist group at the University of Michigan

There used to be an Ann Arbor LW meetup group, actually, back when I lived there -- it seems to be pretty dead now, best I can tell, but the mailing list still exists. It's A4R-A2@googlegroups.com; I don't know how relevant this is to you, since you're trying to start a UM group and many of the people on that list will likely not be UM-affiliated, but you can at least try recruiting from there (or just restarting it, if you're not necessarily trying to specifically start a UM group). It also used to have a website, though I can't find it at the moment, and I doubt it would be that helpful anyway.

According to the meetup group list on this website, there also is or was a UM EA group, but there's not really any information about it? And there's an SSC meetup group listed there too, which possibly has more recent activity? No idea who's in that -- I don't know this Sam Rossini -- but it's possibly also worth recruiting from?

So, uh, yeah, that's my attempt (as someone who hasn't lived in Ann Arbor for two years) to survey the prior work in this area. :P Someone who's actually still there could likely say more...

A Contamination Theory of the Obesity Epidemic

Oh, huh -- looks like this paper is the summary of the blog series that "Slime Mold Time Mold" has been writing? Guess I can read this paper to skip to the end, since not all of the series is posted yet. :P

Can crimes be discussed literally?

Yeah. You can use language that is unambiguously not attack language; it just takes more effort to avoid common words. In this respect it's not unlike how discussing lots of other things seriously requires avoiding common but confused words!
