Some takes I have come across from AI safety researchers in academia (note that both are generally in favor of this work):

Stephen Casper

Erik Jenner

I only want to point out that right now, the approach basically doesn't work.

A good chunk of the general skills, at least when summarized like this:

  • It seems plausible that general training in things like “what to do when you’re stuck on a problem”, “how to use your network to effectively find solutions”, “when & how to ask for help”, “how to stay motivated even when you’re lost”, “how to lead meetings with your research mentors”, and “how to generally take care of your mental health” could be useful.

seem like things that I would learn in a PhD program (granted, some of them seem like things you would need to figure out for yourself, where the advisor can't help a ton). I'm not sure a PhD is the most efficient possible way to learn these things, but at least it offers a blueprint I can follow that will probably get me where I want to be.

Since you have a first-hand perspective on this, would you say I'm off the mark here?

More evidence for the point "generative models can contain agents", or specifically "generative models trained to imitate agents can learn to behave agentically". However, it is not more evidence for the claim "generative models trained to be generators / generative models trained to be useful tools will suddenly develop an internal agent". Does that seem right?

I've been in a similar situation and have had similar feelings. Is this really the most efficient use of the money? Surely cutting back a little on comforts such as catered food can't have that large an impact on productivity?

I'm sympathetic to your argument, but I don't see how we can be certain that verifying / constructing benevolent AGI is just as easy as creating high-fidelity simulations. Proficiency in these tasks might well be orthogonal, and it is not impossible that creating a superintelligence we know to be benevolent is computationally intractable, so we opt instead to run vast quantities of simulations -- kind of like what is happening with empirical AI research right now.

IMO, reasoning about what will or won't be easy for a far more advanced civilization is always mostly speculation.

Then there is the question of fidelity. If you imagine that our current world is a simulation, it might just be a vastly simplified one that runs on the equivalent of a calculator in the base reality; because we only know our own frame of reference, it seems to us like the highest fidelity imaginable. I think the most important part of creating such a simulation would be to keep it truly isolated: we can't introduce any inputs from our own world that are not internally consistent with the simulated world. E.g., if we were to include texts from our world in a lower-fidelity simulation, it would most likely be easy to notice that something doesn't add up.

I think that's exactly the problem here:
The answer consists of two parts: you have to give a food and an explanation.
For the first part I agree with Anisha: a banana is probably often the answer to questions related to food.
The explanation then only requires describing some properties of a banana. This could again be simple pattern matching without any real understanding of the problem.
The fundamental problem is that, for this question, a model that understands and one that mostly guesses could provide the same answer, so given a correct answer we can't distinguish whether the model actually understands in the way we want.

Is manifestation #3 really limited to weird people? The way I was raised, and in my extended social circle, people say "we should stop global warming" and also actively try to minimize their negative impact. I never felt like anyone saw this as weird. Could the individual do more? Probably almost always. Does the average person have the information and capacity to determine the most effective way to reduce their environmental impact? Probably not, but that doesn't automatically mean hypocrisy.

The Principle of Normality still applies, but whether your actions and your talk agree seems to depend entirely on whether Normality in your frame of reference means being hypocritical or not. So this seems unrelated to whether you are weird or normal.

  • My advice is to get regular enough sun exposure that you're not at risk for sunburn.


Are you sure this is the correct thing to do, though?

I believed this myself for a long time, and it seems to be the common wisdom:
Get a natural tan -> you will get fewer sunburns -> therefore you are at less risk of cancer.
So I thought it was better to have tanned skin than pale skin (if your skin is naturally pale), and that I should purposely tan my skin to 'strengthen' it.

However, I have recently read things that seem to suggest this is actually not true. Unfortunately I haven't found great sources for this, but also nothing to the contrary. If you look at point number 3 here, it suggests what I mean.

If I understand it correctly, the argument is as follows:
- Every time your skin changes color, you are already doing damage to it.
- Yes, your skin gets more resistant to sunburn; however, the aggregate damage from keeping your skin regularly tanned still puts you at higher risk of skin cancer and other problems.

So the course of action would be:
- Get enough sun for vitamin D production (this seems to be possible without tanning).
- Use sunscreen so your skin never gets to the point of tanning.
- Of course, don't get sunburnt.

Most skin cancer information sites seem to support this interpretation (example); however, a lot of people still seem to think tanned skin is healthy. At least I used to.