paulfchristiano

Comments

Inference cost limits the impact of ever larger models

That estimate puts GPT-3 at about 500 billion floating point operations per word, 200x less than 100 trillion. If you think a human reads at 250 words per minute, then 6 cents for 750 words is $1.20/hour. So the two estimates differ by about 250x.
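A quick sanity check of that arithmetic (a sketch in Python; the 500-billion-FLOP figure and the 6-cents-per-750-words price are taken as given from above):

```python
# Sanity check of the figures above (inputs taken from the comment, not authoritative).
gpt3_flops_per_word = 500e9   # ~500 billion floating point operations per word
reference_flops = 100e12      # the 100 trillion figure being compared against
print(reference_flops / gpt3_flops_per_word)  # -> 200.0, the "200x less"

api_price_per_750_words = 0.06  # $0.06 for 750 words
words_per_minute = 250          # assumed human reading speed
dollars_per_hour = api_price_per_750_words / 750 * words_per_minute * 60
print(dollars_per_hour)         # -> 1.2, i.e. $1.20/hour
```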

As a citation for the hardware cost:

(ETA: But note that a service like the OpenAI API running on EC2 would need to use on-demand prices, which are about 10x higher per flop if you want reasonable availability.)

Secure homes for digital people

I think this is a reasonable way to look at it. But the point is that you identify with (and care morally about the inputs to) the homunculus. From the homunculus' perspective, you are just in a room talking with a friend. From the (home+occupant)'s perspective, you are communicating very rapidly with your friend's (home+occupant).

Inference cost limits the impact of ever larger models

Depends on what you mean by "human range." Go took decades only if you talk about crossing the range from people who don't play Go at all, to those who play as a hobby, to those who have trained very extensively. If you restrict to the range of "how good would this human be if they trained extensively at Go?" then I'd guess the range is much smaller---I'd guess that the median person could reach a few amateur dan with practice, so maybe you are looking at something like 10 stones of range between "unusually bad human" and "best human."

My rough guess when I looked into it before was that doubling model size is worth about 1 stone around AlphaZero's size/strength, so that's about a factor of 1000 in model size.
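Spelling out that estimate (a one-line back-of-the-envelope, using the rough guesses above):

```python
# ~10 stones of "trained human" range, ~1 doubling of model size per stone
# around AlphaZero's strength (both numbers are the rough guesses above).
stones_of_human_range = 10
doublings_per_stone = 1
size_factor = 2 ** (stones_of_human_range * doublings_per_stone)
print(size_factor)  # -> 1024, i.e. roughly a factor of 1000 in model size
```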

then several years (decades?) later we'd get an AGI architecture+project that blows through the entire human range in a few months. That feels like it can't be right.

I think this is mostly an artifact of scaling up R&D effort really quickly. If you have a 50th percentile human and then radically scale up R&D, it wouldn't be that surprising if you got to "best human" within a year. The reason it would seem surprising to me for AGI is that investment will already be high enough that it won't be possible to scale up R&D that much / that fast as you approach the average human.

Inference cost limits the impact of ever larger models

It costs well under $1/hour to rent hardware that performs 100 trillion operations per second. If a model using that much compute (something like 3 orders of magnitude more than GPT-3) were competitive with trained humans, it seems like it would be transformative. Even if you needed 3 more orders of magnitude to be human-level at typical tasks, it still looks like it would be transformative in a short period of time owing to its other advantages (quickly reaching and then surpassing the top end of the human range, and running at much greater serial speed---more likely you'd be paying 1000x as much to run your model 1000x faster than a human). If this were literally dropped in our laps right now it would fortunately be slowed down for a while, because there just isn't enough hardware, but that probably won't be the case for long.
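To spell out the serial-speed point (a sketch under the assumptions above: ~$1/hour rents ~100 trillion operations per second, and the model needs about that much compute to run at human speed):

```python
# Illustration of "paying 1000x as much to run your model 1000x faster than a human".
dollars_per_hour_per_100t_ops = 1.0  # assumed rental price for 1e14 ops/sec
serial_speedup = 1000                # run the model 1000x faster than a human

wall_clock_cost_per_hour = dollars_per_hour_per_100t_ops * serial_speedup
human_equivalent_hours_per_hour = serial_speedup
print(wall_clock_cost_per_hour)  # -> 1000.0 dollars per wall-clock hour
print(wall_clock_cost_per_hour / human_equivalent_hours_per_hour)  # -> 1.0 dollar per human-equivalent hour
```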

EDT with updating double counts

I'm using EDT to mean the agent that calculates expected utility conditioned on each statement of the form "I take action A" and then chooses the action for which the expected utility is highest. I'm not sure what you mean by saying the utility is not a function of O_i; isn't "how much money me and my copies earn" a function of the outcome?

(In your formulation I don't know what P(⋅|A) means, given that A is an action and not an event, but if I interpret it as "Probability given that I take action A" then it looks like it's basically what I'm doing?)
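Concretely, the decision rule I have in mind is just the following (a minimal sketch; the particular actions, outcomes, probabilities, and utilities are made up for illustration):

```python
# EDT as described above: condition on the statement "I take action A",
# compute expected utility over outcomes, and pick the best action.
actions = ["one_box", "two_box"]              # illustrative action names
outcomes = ["rich", "poor"]                   # illustrative outcomes O_i
prob_outcome_given_action = {                 # P(O_i | "I take action A")
    "one_box": {"rich": 0.9, "poor": 0.1},
    "two_box": {"rich": 0.1, "poor": 0.9},
}
utility = {"rich": 1_000_000, "poor": 1_000}  # U as a function of the outcome

def expected_utility(action):
    return sum(prob_outcome_given_action[action][o] * utility[o] for o in outcomes)

best_action = max(actions, key=expected_utility)
print(best_action, expected_utility(best_action))
```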

EDT with updating double counts

I feel like the part where you "exclude worlds where 'you don't exist'" should probably amount to "exclude worlds where your current decision doesn't have any effects"---it's not clear in what sense you "don't exist" if you are perfectly correlated with something in the world. And of course renormalizing makes no difference; it's just expressing the fact that both sides of the bet get scaled down. So if that's your operationalization, then it's also just a description of something that automatically happens inside of the utility calculation.
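As a toy illustration of why the renormalization washes out (my own numbers; it assumes your decision has no effects in the excluded worlds and that the chance of those worlds doesn't depend on your action):

```python
# With a fixed chance of landing in worlds your decision can't affect,
# excluding those worlds and renormalizing rescales every action's expected
# utility the same way, so the ranking of actions is unchanged.
q_exist = 0.3      # P(worlds where your decision has effects)
u_no_effect = 5.0  # utility of the excluded worlds (same for every action)
u_if_exist = {"take_bet": 10.0, "decline_bet": 7.0}

def eu_unconditional(action):
    return q_exist * u_if_exist[action] + (1 - q_exist) * u_no_effect

def eu_conditional(action):  # exclude the no-effect worlds and renormalize
    return u_if_exist[action]

for action in u_if_exist:
    print(action, eu_unconditional(action), eu_conditional(action))
# Both versions rank take_bet above decline_bet.
```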

(I do think it's unclear whether selfish agents "should" be updateless in transparent Newcomb.)

Secure homes for digital people
  • I think it's a problem for future people (and this is a fairly technically difficult solution at that), and it doesn't matter much whether we think about a plausible solution in advance. Whether future people solve this problem doesn't look like it will have much effect on the overall sweep of history.
  • I think the problem is very likely to be resolved by different mechanisms based on trust and physical control rather than cryptography.
  • I think the slowdowns involved, even in a mature version of this idea, are likely impractical for the large majority of digital minds. So this isn't a big deal morally during the singularity, and then after the singularity I don't think this will be relevant.

AMA: Paul Christiano, alignment researcher

I take it back; Chaitin's constant is cooler than I thought.

I don't like the cardinal  very much, but I like  just fine, so it's not really clear if it's a problem with the object or the reference.

Secure homes for digital people

If someone is "holding you captive" then you wouldn't get to talk to your friends. The idea is just that in that case you can pause yourself (or just ignore your inputs and do other stuff in your home).

Of course there are further concerns (e.g. you may think you are talking to your friend when you are actually talking to an adversary pretending to be your friend), but in a scenario where people sometimes get kidnapped that's just part of life as a digital person.

(Though if you and your friend are both in secure homes, you may still be able to authenticate to each other as usual, and an adversary who controlled the communication link couldn't eavesdrop or fake the conversation unless they got your friend's private key---in which case it doesn't really matter what's happening on your end, and of course you can be misled.)
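For the authentication half of that parenthetical, the standard tool is an ordinary digital signature. A minimal sketch using Ed25519 via the Python `cryptography` package (an illustration of the general idea, not a protocol from the post; a real scheme would also want encryption and replay protection):

```python
# Your friend signs messages with a private key kept inside their secure home;
# you verify with their public key, which you learned beforehand. An adversary
# controlling the link can't forge messages without that private key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

friend_private_key = Ed25519PrivateKey.generate()    # never leaves the friend's home
friend_public_key = friend_private_key.public_key()  # known to you in advance

message = b"It's really me, let's talk."
signature = friend_private_key.sign(message)         # computed in the friend's home

try:
    friend_public_key.verify(signature, message)     # checked in your home
    print("verified: sent by someone holding the friend's private key")
except InvalidSignature:
    print("rejected: forged or tampered message")
```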

Secure homes for digital people

Worth noting: this is supposed to be a fun cryptography problem and potentially fodder for someone's science fiction stories; it's not meant to be Serious Business.
