cata

Programmer, rationalist, chess player, father, altruist.

Comments

"Infohazard" is a predominantly conflict-theoretic concept

The problem with gain-of-function research wasn't researchers publishing the sequence of a virus in a paper which can then get used by evil actors to synthesize a harmful virus.

Why do you think that's not a problem? I think that's a big problem. There can be more than one problem.

Frame Control

I thought this post was good and I barely know Aella from Adam.

Frame Control

Like yours, my instant reaction to these kinds of spiky social behaviors is to try to use sarcasm or wit to "win" the interaction and make the other person look ridiculous or feel awkward or off balance. I think this works on two levels -- firstly, if I do "win", the other guy is not likely to keep trying this sort of thing on me. Secondly, it focuses my psychology in such a way that there's no chance I will actually be taking their "frame" seriously. I'm too busy trying to figure out how I can make it sound stupid.

I guess the obvious failure mode here is if they were actually saying something that I would benefit from taking seriously.

Tentative Anger

I like your example. Perhaps I am a little too inclined to update, either due to typical-minding (I am usually very frank and expect others to be) or due to a kind of misplaced irritation (I want others to justify themselves to me, so if they don't I ascribe bad things to them.)

Tentative Anger

One thing I see often, and which is present in the Paxlovid example, is a case that goes like this:

  • Someone does something which obviously looks bad to a naive onlooker, but might be justifiable for some reason I don't know.
  • Although they could easily attempt to explain why they did it, they make no such explanation.
  • Even when people complain, no explanation that makes sense is ever given.

In these cases, if my neighbor or my coworker was doing this, I would reason: Hey, if this was justifiable, this guy could gain a lot of goodwill with me trivially, and he's not doing so. Therefore it's probably not. As a result, I basically notch the probability of "this guy did a bad thing" up to 100%, and I think it is right to do so.

It's tempting to extend this intuition to "the FDA" or "my congressperson" or "a celebrity". But I think it's frequently epistemically questionable to do so. In these cases, there are a lot of things that could get in the way of the reasoning:

  • Maybe the actor is different enough from me and the complainers that they don't understand that their action looks bad.
  • Maybe the actor never heard the complaints, so they didn't know anyone noticed the problem.
  • Maybe the "reward" of goodwill is very small compared to other concerns, like drawing more attention to the action.
  • Maybe the actor is made of multiple people and there is a communications breakdown between "the person who made the decision" and "the person who can communicate justifications."

In the Paxlovid example, I think several of those reasons apply. The FDA probably thinks it doesn't look bad, because they have a reference class in mind of even slower approvals, and an inside view that highlights all the important things that need to get done in between now and approval. The reward from satisfying a lot of people like us by providing explanations is probably low -- most people don't seem to really notice that there is a problem or that anything could be better. The communications breakdown is probably real, too. So even if there were a pretty good explanation for the delay, I am not confident we would hear it.

AI Safety Needs Great Engineers

Thanks, I was aware of Eleuther but I wasn't previously aware how much they cared about alignment-related progress.

AI Safety Needs Great Engineers

I'm an experienced engineer and EA excited to work on these things, but I am only available part-time and remotely because I am raising my kid, so I'm not applying right now.

If I knew of useful FOSS work that was directly applicable I might be spending time doing it.

The Opt-Out Clause

I enjoyed this a lot. Tiny feedback: First/Second/Three is inconsistent.

Tell the Truth

I am very frank by nature. In my career my coworkers have often expressed a lot of respect for the fact that I will ask "tough questions", give thoughtful critical feedback, and argue well for idiosyncratic opinions. I almost never decline to say something that I think is true and interesting or useful because I think it will have negative repercussions on me. (Although if I think it might be hurtful to someone, or they might misinterpret it, I may try to optimize how I say it.)

However, I am also noticeably poor at understanding other people's nonverbal behavior. I am frequently in situations where other people (who I consider to be worth their weight in gold to me when performing this function) show me that I have been oblivious to how I was affecting other people. For example, I have had to learn a lot about how to criticize people in ways that they will be receptive to, and about the function of public praise. Perhaps not unrelatedly, I also have exceptionally low amounts of general anxiety and neuroticism. So basically the thing in the world that I am absolutely the worst at noticing and responding to is when someone is unhappy or angry with me in some way.

I think these go together. I never get negative feedback on my behavior, because I am too oblivious to notice the negative feedback which is constantly flying at me. So I do the "brave truthteller" thing by default and enjoy the conspicuous respect. I bet that most "brave truthteller" types are in a similar position. That's not to say that it's a bad policy -- I think it's just a high-variance policy -- but it's a lot easier to execute the policy when you don't suffer the bad beats.

[$10k bounty] Read and compile Robin Hanson’s best posts

I will add a $1k bounty to this, since it seems like Twitter agrees this is a great idea and there should be more money in it.
