All of Dunning K.'s Comments + Replies

Some takes I have come across from AI Safety researchers in Academia (Note that both are generally in favor of this work):

Stephen Casper

Erik Jenner

I only want to point out that right now, the approach basically doesn't work.

A good chunk of the general skills, at least when summarized like this:

  • It seems plausible that general training in things like “what to do when you’re stuck on a problem”, “how to use your network to effectively find solutions”, “when & how to ask for help”, “how to stay motivated even when you’re lost”, “how to lead meetings with your research mentors”, and “how to generally take care of your mental health” could be useful.

seem like things that I would learn in a PhD program (granted, some of them seem like things you would need to figure out fo... (read more)

As Sam says, PhDs are notoriously hard on mental health, and I think this is not very conducive to learning for most people. For example, as someone who was a PhD student, I think I only learned how to do these things in the few months after leaving my PhD, though a lot of that learning was based on experiences from my PhD.
Sam Bowman (1y):
I mostly agree, but it's messy. I don't think it's obvious that a PhD is anywhere near the ideal way to pick up some of these skills, or that earning a PhD definitely means that you've picked them up, but PhD programs do include lots of nudges in these directions, and PhD-holders are going to be much stronger than average at most of this.  In particular, like Johannes said, doing a PhD is notoriously hard on mental health for a number of reasons, even at a more-supportive-than-average lab. So to the extent that they teach 'taking care of your mental health' and 'staying motivated when you're lost', it's often by throwing you into stressful, confusing work situations without great resources and giving you the degree if you figure out how to navigate them.
Johannes C. Mayer (1y):
I have not done a PhD. But my two cents here are that none of these skills seem very teachable by traditional teaching methods. I would be surprised if PhD programs try to teach even half of these things explicitly, and I don't expect that they would teach them very well. I expect that you will need to figure out most of these things yourself. I have heard that most PhD students get depressed. That doesn't sound like they have good models of how the mind works and how to take care of their mental health. Though all of it depends on how good the people around you are, of course.

More evidence for the point "generative models can contain agents", or more specifically "generative models trained to imitate agents can learn to behave agentically". However, it is not more evidence for the claim "generative models trained to be generators / generative models trained to be useful tools will suddenly develop an internal agent". Does that seem right?

I've been in a similar situation and have had similar feelings. Is this really the most efficient use of the money? Surely reducing comforts such as catered food by a little bit can't have that large an impact on productivity?

Sam F. Brown (1y):
I actually think that catering of high enough quality that people don't leave the premises for meals is a very efficient use of money. And there's a good argument to be made that the most efficient use of money isn't the most effective one. But also, thanks :)

I'm sympathetic to your argument, but I don't see how we can be certain that verifying / constructing benevolent AGI is just as easy as creating high-fidelity simulations. Proficiency in these tasks might well be orthogonal, and it is not impossible to imagine that creating a superintelligence we know to be benevolent is computationally intractable, so we opt instead to run vast quantities of simulations -- rather like what is happening in empirical AI research right now.

IMO reasoning about what will be easy or not for a far advanced civil... (read more)

I think that's exactly the problem here:
The answer consists of two parts: you have to guess a food and give an explanation.
For the first part I agree with Anisha: a banana is probably often the answer to a question related to food.
The explanation then really only requires describing some properties of a banana. This could again be simple pattern matching without any real understanding of the problem.
The fundamental problem is that, for this question, a model that understands and one that mostly guesses could provide the same answer. So given a correct answer, we can't really distinguish whether the model actually understands in the way we want.

Is manifestation #3 really limited to weird people? The way I was raised, and in my extended social circle, people say "we should stop global warming" and also actively try to minimize their negative impact. I never felt like anyone saw this as weird. Could the individual do more? Probably pretty much always. Does the average person have the information and capacity to determine the most effective way to reduce their environmental impact? Probably not, but that doesn't automatically mean hypocrisy.

The Principle of Normality still applies, but whether y... (read more)

My advice is to get regular enough sun exposure that you're not at risk for sunburn.

Are you sure this is the correct thing to do, though?

I believed this myself for a long time, and it seems to be the common wisdom:
Get a natural tan -> you will get fewer sunburns -> therefore you are at less risk of cancer
So what I thought was that it is better to have tanned skin than pale skin (if your skin is naturally pale), and that I should purposefully tan my skin to 'strengthen' it.

However, recently I have read things that seem to suggest that this is actually not... (read more)