Concrete feedback signals I've received:

  • I don't find myself excited about the work. I've never been properly nerd-sniped by a mechanistic interpretability problem, and I find the day-to-day work to be more drudgery than excitement, even though the overall goal of the field seems like a good one.

  • When left to do largely independent work, after doing the obvious first thing or two ("obvious" at the level of "These techniques are in Neel's demos"), I find it hard to figure out what to do next, and hard to motivate myself to pursue the ideas I do come up with, because of the drudgery mentioned above.

  • I find myself having difficulty backchaining from the larger goal to the smaller one. I think this is a combination of a motivational issue and a weaker grasp of the concepts.

By contrast, in evaluations, none of this is true. I solve problems more effectively, I find myself actively interested in problems (both the ones I'm working on and the ones I'm not), and I'm better able to reason about how they matter for the bigger picture.

I'm not sure how much each factor contributes, but I suspect that if I were sufficiently excited about the day-to-day work, all the other problems would be much more fixable. There's a sense of reluctance, a sense of burden, that saps a lot of energy when it comes to doing this kind of work.

As for #2, I guess I should clarify what I mean, since there are two ways you could view "not suited".

  1. I will never be able to become good enough at this for my funding to be net-positive. There are fundamental limitations to my ability to succeed in this field.

  2. I should not be in this field. The amount of resources required to make me competitive in this field is significantly larger than what would be needed for other people to do equally good work, and this is not true of other subfields of alignment.

I view my use of "I'm not suited" as more like 2 than 1. I think there's a reasonable chance that, given enough time with proper effort and mentorship in a proper organisational setting (being in a setting like this is important for me to reliably complete work that doesn't excite me), I could eventually do okay in this field. But I also think that there are other people who would do better, faster, and be a better use of an organisation's money than me.

This doesn't feel like the case in evals. I feel like I can meaningfully contribute immediately, and I'm sufficiently motivated and knowledgeable that I can understand the difference between my job and my mission (making AI go well) and feel confident that I can take actions to succeed in both of them.

If Omega came down from the sky and said "Mechanistic interpretability is the only way you will have any impact on AI alignment - it's this or nothing" I might try anyway. But I'm not in that position, and I'm actually very glad I'm not.

Anecdotally, I have also noticed this - when I tell people what I do, the thing they are frequently surprised by is that we don't know how these things work.

As you implied, if you don't understand how NNs work, your closest natural analogue to ChatGPT is conventional software, which is at least understood by its programmers. This isn't even people being dumb about it; it's just a lack of knowledge about a specific piece of technology, and a lack of knowledge that there is something to know - that NNs are in fact qualitatively different from other programs.

Yes, this is an argument people have made. Longtermists tend to reject it. First off, applying a discount rate to the moral value of lives in order to account for the uncertainty of the future is... not a good idea. These two things are totally different and shouldn't be conflated like that, imo. If you want to apply a discount rate to account for the uncertainty of the future, just do that directly. So, for the rest of the post I'll assume a discount rate on moral value actually applies to moral value.

So, that leaves us with the moral argument.

A fairly good argument, and the one I subscribe to, is this:

  • Let's say we apply a conservative discount rate, say 1% per year, to the moral value of future lives.
  • Given that, one life now is worth approximately 500 million lives two millennia from now (0.99^2000 ≈ 2e-9; see the quick check after this list).
  • But would that have been reasonably true in the past? Would it have been morally correct to save one life two millennia ago at the cost of 500 million lives today?
  • If the answer to that is "no", it should also be "no" in the present.
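
As a quick check of the arithmetic in the second bullet, here's a minimal sketch in Python, using nothing beyond the 1%/year rate and 2000-year horizon assumed above:

    # Quick check of the discount arithmetic: at 1%/year, how much is a life
    # 2000 years from now worth relative to a life today?
    annual_discount = 0.99   # 1% per year discount on moral value
    years = 2000             # roughly two millennia

    relative_value = annual_discount ** years
    print(relative_value)      # ~1.9e-9: one future life, measured in present-life units
    print(1 / relative_value)  # ~5.4e8: future lives equivalent to one present life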

This is, again, different from a discount rate on future lives based on uncertainty. It's entirely reasonable to say "If there's only a 50% chance this person ever exists, I should treat that life as 50% as valuable." I think this position wouldn't be controversial among longtermists.

For the Astra Fellowship, what considerations do you think people should weigh when deciding whether to apply for SERI MATS, the Astra Fellowship, or both? Why would someone prefer one over the other, given that they're both happening at similar times?

The agent's context includes the reward-to-go, state (i.e., an observation of the agent's view of the world), and action taken for nine timesteps: R1, S1, A1, ..., R9, S9, A9. (Figure 2 explains this a bit more.) If the agent hasn't taken nine steps yet, some of the S's are blank. So S5 is the state at the fifth timestep. Why is this important?

If the agent has taken four steps so far, S5 is the initial state, which is where it sees the instruction. Four is the number of steps it takes to reach the corridor where the agent has to decide whether to go left or right. This is the key decision for the agent to make, and the only place the instruction appears in its context is S5, which is why S5 matters.
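
To make the padding concrete, here's a minimal sketch (invented names, not code from the post) of how right-aligning a nine-slot (reward-to-go, state, action) context puts the initial state in slot S5 once four steps have been taken:

    # Hypothetical sketch of the context layout described above; all names are invented.
    CONTEXT_LEN = 9
    BLANK = None  # placeholder for unfilled slots

    def build_context(rtgs, states, actions):
        """Right-align the (reward-to-go, state, action) history in a 9-slot window."""
        pad = CONTEXT_LEN - len(states)
        slots = []
        for i in range(CONTEXT_LEN):
            if i < pad:
                slots.append((BLANK, BLANK, BLANK))  # early slots stay blank
            else:
                j = i - pad
                a = actions[j] if j < len(actions) else BLANK  # no action yet for the current state
                slots.append((rtgs[j], states[j], a))
        return slots

    # After four steps there are five observed states (initial + 4), so they fill
    # slots 5-9 and the initial state (the one showing the instruction) lands in S5.
    ctx = build_context([1.0] * 5, ["s0", "s1", "s2", "s3", "s4"], ["a0", "a1", "a2", "a3"])
    assert ctx[4][1] == "s0"  # slot index 4 is "S5"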

Figure 1 shows this process visually - the static images in that figure show possible S5s, whereas S9 is animation_frame=4 in the GIF. It's fast, so it's hard to see, but it's the step before the agent turns.

I think there’s an aesthetic clash here somewhere. I have an intuition or like... an aesthetic impulse, telling me basically… “advocacy is dumb”. Whenever I see anybody Doing An Activism, they're usually… saying a bunch of... obviously false things? They're holding a sign with a slogan that's too simple to possibly be the truth, and yelling this obviously oversimplified thing as loudly as they possibly can? It feels like the archetype of overconfidence.

This is exactly the same thing that I have felt in the past. Extremely well said. It is worth pointing out explicitly that this is not a rational thought - it's an Ugh Field around advocacy, and even if the thought is true, that doesn't mean all advocacy has to be this way.

I find this interesting but confusing. Do you have an idea of what mechanism allowed this? E.g.: Are you getting more done per hour now than during your best hours working full-time? Did the full-time hours fall off fast after a certain point? Was there only 15 hours a week of useful work for you to do, with the rest mostly padding?

I think this makes a lot of sense. While I think you can make the case for caring about the fertility crisis purely as a means of preventing economic slowdown and increasing innovation, you argue well that people don't actually make this argument very often, and that a lot of the concern does stem from "more people = good".

But I think if you start from "more people = good", there's less motivated reasoning behind the innovation argument than you suspect. It's more that the innovation argument actually does just work once you accept that more people = good: if more people = good, then more people were good before penicillin and even better afterwards, and the two don't cancel each other out.

In summary, I don't think "more people = good" motivates the "life is generally good to have, actually" argument - if anything, it's the other way around. People who think life is good are more likely to think it's a moral good to give it to others. The argument doesn't say it's "axiomatically good" to add more people; it says it's "axiomatically good conditional on life being net positive".

As for understanding why people might feel that way - my best argument is this:

Let's say you could choose to give birth to a child who would be born with a terribly painful and crippling disease. Would it be a bad thing to do that? Many people would say yes.

Now, let's say you could choose to give birth to a child who would live a happy, healthy, positive life. Would that be a good thing? It seems that, logically, if giving birth to a child who suffers is bad, then giving birth to a child who enjoys life is good.

That, imo, is the best argument for being in favor of more people if you think life is positive.

Note that I don't think this means people should be forced to have kids, or that you're a monster for choosing not to, even if those arguments were true. You can save a life for about 5k USD, after all, and raising a kid yourself takes far more resources than that. Realistically, if my vasectomy makes me a bad person, then I'm also a bad person for donating merely 10% to the AMF rather than every spare dollar, and if that counts as a "bad person", then the word has no meaning.

Okay, I think I see several of the cruxes here.

Here's my understanding of your viewpoint:

"It's utterly bizarre to worry about fertility. Lack of fertility is not going to be an x-risk anytime soon. We already have too many people and if anything a voluntary population reduction is a good thing in the relative near-term. (i.e, a few decades or so) We've had explosive growth over the last century in terms of population, it's already unstable, why do we want to keep going?"

In a synchronous discussion I would now pause to see if I had your view right. Because that would take too much time in an asynchronous discussion, I'll reply to the imaginary view I have in my head, while hoping it's not too inaccurate. Would welcome corrections.

If this view of yours seems roughly right, here's what I think are the viewpoint differences:

I think people who worry about fertility would agree with you that fertility is not an existential threat.

I think the intrinsic value of having more people is not an important crux - it is possible to have your view on Point 3 and still worry about fertility.

I think the "fertility crisis" is more about replacement than continued increase. It is possible that many of the people who worry about fertility would also welcome still more people, but I don't think they would consider it a crisis if we were only at replacement rates, or close to it.

I think people who care about the speed of innovation don't just care about looming population-driven deadlines, but also about quality of life - if we had invented penicillin a century earlier, for example, many people would have lived much longer, happier lives. One could frame technological progress as a moral imperative this way. I'm not sure whether this is a major crux, but I think there are people with a general "more people = good" viewpoint for this reason, even ignoring population ethics. You are right that we could make better use of the people we have, but I don't see these as mutually exclusive.

I think the people who worry about the fertility crisis would disagree with you about Point 4. I don't think it's obvious that "tech to deal with an older population" is actually easier than "tech to deal with a larger population". It might be! Might not be.

While you may not agree with these ideas, I hope I've presented them reasonably and accurately enough that it makes the other side merely different, rather than bizarre and impossible to understand.
