Comments

Contest: An Alien Message

This suggests a different question: for non-participants who are given the program which creates the data, what probability and timeframe should they assign to success?

On this one I think I would have put a high probability on it being solved, but would have anticipated a longer timeframe.

Contest: An Alien Message

I think the resulting program has lower length (so whatever string it generates has lower KC)

I don’t think this follows - your code is shorter in Python, but it includes 3 new built-in functions, which is hidden complexity (see the sketch below).

I do agree with the general point that KC isn’t a great measure of difficulty for humans - we are not exactly arbitrary encoders.
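
To illustrate the hidden-complexity point, here's a minimal, hypothetical sketch (nothing to do with the actual contest data or anyone's real submission): both functions below produce the same string, and the second has a shorter source, but only because sum, range and join are doing the work - a measure that charges nothing for built-ins will undercount its description length.

```python
# Hypothetical example: two programs emitting the same output. Source length
# alone makes the terse one look "simpler", but the built-ins it leans on
# (sum, range, str.join) are complexity hidden inside the interpreter.

def verbose_version() -> str:
    out = ""
    total = 0
    i = 1
    while i <= 10:
        total = total + i
        if out:
            out = out + " "
        out = out + str(total)
        i = i + 1
    return out

def terse_version() -> str:
    return " ".join(str(sum(range(1, n + 1))) for n in range(1, 11))

assert verbose_version() == terse_version()
print(terse_version())  # "1 3 6 10 15 21 28 36 45 55"
```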

Evaluating a Corsi-Rosenthal Filter Cube

What were the noise levels on the Corsi-Rosenthal?

Humans are very reliable agents

Humans are very reliable agents for tasks which humans are very reliable for.

For most of these examples (arguably all of them), if humans were not reliable at them, then the tasks would not exist or would exist in a less stringent form.

Against Active Shooter Drills

Curious as to what the "get under the desks" alarm was supposed to help with, and how long ago this was? I'm having trouble fitting it into my world model.

Contra Hofstadter on GPT-3 Nonsense

I see that the standard Playground Q&A prompt on OpenAI uses a similar technique (although it boringly uses "Unknown" instead of "Yo be real").

I think the thing which throws people off is that when GPT-3 goes wrong it goes wrong in ways that are weird to humans.

I wondered if humans sometimes fail at riddles that GPT-3 would think of as weird. I tried a few that I thought would be promising candidates (no prompt other than the questions themselves):

 

Q: If a red house is made with red bricks, a blue house is made with blue bricks, a pink house is made with pink bricks, a black house is made with black bricks, what is a greenhouse made with? 

A: Greenhouses are made with glass.

 

Say silk 4 times 

Silk, silk, silk, silk. 

What does a cow drink? 

A cow drinks water.

 

You put a roast in a roaster, what do you put in a toaster? 

Bread.

 

These are very cherry-picked - I deliberately chose trick riddles that I thought GPT-3 would be good at. I think there's possibly something here about how humans vs. GPT-3 rely on context.

I imagine a society of GPT-4s discovering a human and discussing how hollow their intelligence is.

Yes, AI research will be substantially curtailed if a lab causes a major disaster

I think the natural/manmade comparison between COVID and Three Mile Island has a lot of merit, but there are other differences which might explain the difference in response. Some of them would imply that there would be a strong response to an AI disaster, others less so.

Local vs global

To prevent nuclear meltdowns in the US you only need to ban them in the US - it doesn't matter what other countries do. This is more complicated for pandemic preparedness, since an outbreak elsewhere can still reach you.

Active spending vs loss of growth

It's easier to pass a law putting in nuclear regulations which limit growth, as this isn't as obvious a loss as spending money from the public purse on pandemic measures.

Activity of lobbying groups

I get the impression that the anti-nuclear lobby was a lot bigger than any pro-pandemic-preparedness lobby. Possibly this is partly caused by the natural vs manmade thing, so it might be kind of a sub-point.

Tractability of problem

Preventing nuclear disasters seems more tractable than preparing for pandemics.

1979 vs 2020

Were our institutions stronger back then?

 

FWIW I agree that a large AI disaster would cause some strong regulation and international agreements; my concern is more that a small one would not, and small disasters from weaker AIs seem more likely to happen.

We will be around in 30 years

Assuming this is the best an AGI can do, I find this a lot less comforting than you appear to. I assume "a very moderate chance" means something like 5-10%?

Having a 5% chance of such a plan working out is insufficient to prevent an AGI from attempting it if the potential reward is large enough and/or it expects it might get turned off anyway.

Given a sufficient number of AGIs (something we will presumably have in a world where none has taken over), I would expect multiple attempts, so the chance of at least one of them working becomes high.
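
As a back-of-the-envelope illustration of that last point (my own round numbers, assuming the 5% figure above and fully independent attempts, which is of course a simplification):

```python
# Back-of-the-envelope sketch with illustrative numbers: if each attempted
# takeover independently succeeds with probability p, the chance that at
# least one of n attempts succeeds is 1 - (1 - p) ** n.

def p_at_least_one_success(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 10, 50, 100):
    print(f"{n:>3} attempts -> {p_at_least_one_success(0.05, n):.1%}")
# 1 -> 5.0%, 10 -> ~40.1%, 50 -> ~92.3%, 100 -> ~99.4%
```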

Benign Boundary Violations

There's a theory of humor called benign violation theory.

The BVT claims that humor occurs when three conditions are satisfied: 1) something threatens one's sense of how the world "ought to be", 2) the threatening situation seems benign, and 3) a person sees both interpretations at the same time.

I think your description of pranks etc. fits in nicely with this - you even chose the same words to describe it, so maybe you're already aware?

Accounting For College Costs

It's worth noting that while the number of courses at Berkeley almost doubled in the period shown, the number of courses per student has increased at a lower rate due to an increase in students.

Eyeballing the graph and looking at Berkeley's enrollment numbers, I think the number of courses per student has increased by around 50%. Smaller, but still a big effect.
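
To make the arithmetic behind that eyeball estimate explicit, here's a tiny sketch with made-up round numbers (these are not Berkeley's actual figures, just shapes consistent with the claim above):

```python
# Illustrative numbers only: if courses roughly double while enrollment grows
# by about a third, courses per student rise by roughly 50%.

courses_then, courses_now = 3_000, 6_000       # hypothetical: "almost doubled"
students_then, students_now = 30_000, 40_000   # hypothetical: enrollment growth

ratio_then = courses_then / students_then      # 0.10 courses per student
ratio_now = courses_now / students_now         # 0.15 courses per student
print(f"courses per student change: {ratio_now / ratio_then - 1:.0%}")  # 50%
```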
