habryka

Coding day in and day out on LessWrong 2.0. You can reach me at habryka@lesswrong.com

Sequences

Concepts in formal epistemology

Comments

AI Safety Needs Great Engineers

FWIW, "plausible" sounds to me basically the same as "possibly". So my guess is this is indeed a linguistic thing.

Base Rates and Reference Classes

Yeah, let's also make it a link post then. Some people prefer more prominence, some prefer less, for their cross-posts.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Huh, that's surprising, if by that he means "no contracts between anyone currently at Leverage and anyone at CEA". I currently still think such contracts exist, though I also don't see any reason for Geoff to lie here. Maybe there is some technical sense in which there is no contract between Leverage and CEA as organizations, but there are contracts between current Leverage employees who used to work at CEA and current CEA employees?

Improving on the Karma System

One of my ideas for this (when thinking about voting systems in general) is to have a rating that is trivially inconvenient to access. Like, you have a ranking system from F to A, but then you can also hold the A button for 10 seconds, and then award an S rank, and then you can hold the S button for 30 seconds, and award a double S rank, and then hold it for a full minute, and then award a triple S rank. 

The only instance I've seen of something like this implemented is Medium's clap system, which allows you to give up to 50 claps, but you do have to click 50 times to actually give those claps. 
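To make the mechanic concrete, here is a minimal TypeScript sketch of the hold-duration escalation. All names and the exact thresholds are hypothetical, just mirroring the 10s/30s/60s numbers above; this is an illustration of the idea, not anything LessWrong actually implements.

```typescript
// Sketch of a "trivially inconvenient" rating escalation (hypothetical names).
// The longer a voter holds the button, the higher the rank they can award:
// A -> S -> SS -> SSS.

type Rank = "A" | "S" | "SS" | "SSS";

// Thresholds in milliseconds of continuous hold time, checked highest-first,
// mirroring the 10s / 30s / 60s escalation described above.
const HOLD_THRESHOLDS_MS: Array<[number, Rank]> = [
  [60_000, "SSS"],
  [30_000, "SS"],
  [10_000, "S"],
  [0, "A"],
];

function rankForHold(holdMs: number): Rank {
  for (const [threshold, rank] of HOLD_THRESHOLDS_MS) {
    if (holdMs >= threshold) return rank;
  }
  return "A"; // unreachable given the 0ms threshold, but satisfies the compiler
}

// Example: a 35-second hold awards a double-S rank.
console.log(rankForHold(35_000)); // "SS"
```

The point of the design is that the cost of the stronger signal is paid in the voter's own attention, which is hard to fake in bulk, much like Medium requiring 50 separate clicks for 50 claps.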

Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

I think some of these are pretty reasonable points, but I am kind of confused by the following: 

This Leverage drama is not important to anyone except a small group of people and does not belong on LW. Perhaps the relatively small group of Bay Area rationalists who are always at the center of these things need to create a separate forum for their own drama. Nobody outside of Berkeley needs to hear about this. This sort of thing gets upvoted because tribal instincts are being activated, not because this is good and ought to be here.

It seems to me that Leverage had a large and broad effect on the Effective Altruism and Rationality communities worldwide: it organized the 2013-2014 EA Summits, provided a substantial fraction of the strategic direction for EAG 2015 and EAG 2016, and shared multiple staff with the Centre for Effective Altruism until 2019.

This suggests to me that what happened at Leverage clearly had effects reaching much more broadly than "some relatively small group of Bay Area rationalists". Indeed, I think the Bay Area rationality community wasn't that affected by what was happening at Leverage; the effects seemed much more distributed.

Maybe you also think the EA Summit and EA Global conferences didn't matter? Which seems like a fine take. Or maybe you think how CEA leadership worked also didn't matter, which also seems fine. But neither of those takes is obvious, and I think I disagree with both of them.

Speaking of Stag Hunts

Given that there is a lot of "let's comment on which things about a comment are good and which are bad" going on in this thread, I will make explicit a thing that I would usually have left implicit:

My current sense is that writing this comment was better than writing no comment, given the dynamics of the situation, but I think the outcome would have been better if you had waited and written your longer comment instead. This comment felt like it kicked up the heat a bunch, and while I think that was better than leaving things unanswered, my sense is the discussion overall would have gone better with just the longer comment.

Speaking of Stag Hunts

Seems great! It's a bit on ice this week, but we've been thinking very actively about changes to the voting system, so right now is the time to strike while the iron is hot if you want to change the team's opinion on how we should change things and what we should experiment with.

Speaking of Stag Hunts

I liked the effort put into this comment, and found it worth reading, but I disagree with it very substantially. I also expect it to have overall bad consequences for the discussion, mostly via something like "illusion of transparency" and "trying to force the discussion you want to have, and making it hard for people to come in with a different frame", but I am not confident.

I think the first one is sad, and something I expect would be resolved after a few more rounds of comments or conversation. I don't really know what to do about the second one, like, on a deeper level. I feel like "people wanting to have a different type of discussion than the OP wants to have" is a common problem on LW that causes people to have bad experiences, and I would like to fix it. I have some guesses at fixes, but none that seem super promising. I am also not totally confident it's a huge problem, or worth focusing on at the margin.

Speaking of Stag Hunts

When counting down we are all savages dancing to the sun gods in a feeble attempt to change the course of history.

More seriously though, yeah, definitely when I count down, I see a ton of stuff that could be a lot better. A lot of important comments missing, not enough courage, not enough honesty, not enough vulnerability, not enough taking responsibility for the big picture.
