kave


Comments

Two-year update on my personal AI timelines

If you assume the human brain was trained roughly optimally, then a scaling law that requires more data at a given parameter count to be optimal pushes timelines out. If instead you had a specific loss number in mind, then a more efficient scaling law would pull timelines in.
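
For concreteness, here's a minimal sketch of the two framings, assuming a Chinchilla-style compute approximation (C ≈ 6·N·D); the parameter count and tokens-per-parameter ratios are made up for illustration:

```python
# Minimal sketch of the two framings, assuming a Chinchilla-style
# compute approximation C ~= 6 * N * D (N = parameters, D = training tokens).
# The parameter count and tokens-per-parameter ratios are illustrative only.

def compute_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute in FLOPs."""
    return 6.0 * n_params * n_tokens

N = 1e14  # hypothetical "brain-scale" parameter count

# Framing 1: hold the parameter count fixed. If the compute-optimal
# tokens-per-parameter ratio turns out to be 40 rather than 20, the compute
# needed to train a model of that size optimally doubles -- timelines move out.
for tokens_per_param in (20, 40):
    D = tokens_per_param * N
    print(f"{tokens_per_param} tokens/param -> {compute_flops(N, D):.1e} FLOPs")

# Framing 2: hold a target loss fixed instead. A more data-efficient scaling
# law reaches that loss with fewer tokens (hence less compute) -- timelines
# move in.
```

The point is just that "optimal" is doing different work in the two framings: fixed model size versus fixed target loss.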

The prototypical catastrophic AI action is getting root access to its datacenter

My impression was that "zero-sum" was not used in quite the standard way. I think the idea is that the AI will cause a big reassignment of Earth's capabilities to its own control, and that this is contrasted with the AI massively increasing its own capabilities and thus Earth's overall capabilities.

johnswentworth's Shortform

Future perfect (hey, that's the name of the show!) seems like a reasonable hack for this in English

Richard Ngo's Shortform

  stenographically

steganographically?

Deconfusing Landauer's Principle

  The Shannon entropy of a distribution over random variable X conditional on the value of another random variable C can be written as H(X|C) = H(X) - H(C)

If X and C are which face is up for two different fair coins, H(X) = H(C) = 1. But then H(X|C) = H(X) - H(C) = 0, even though the coins are independent? I think this works out fine for your case because (a) I(X,C) = H(C): the mutual information between C (which well you're in) and X (where you are) is the entropy of C, (b) H(C|X) = 0: once you know where you are, you know which well you're in, and, relatedly, (c) H(X,C) = H(X): the entropy of the joint distribution just is the entropy of X.
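
A quick numerical check of both cases (a toy sketch; the "well" joint distribution below is an invented stand-in for the post's double-well setup):

```python
# Check when H(X|C) = H(X) - H(C) holds, using explicit joint distributions.
from math import log2
from collections import defaultdict

def H(dist):
    """Shannon entropy (bits) of a dict mapping outcomes to probabilities."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, index):
    """Marginal distribution over one coordinate of a joint distribution."""
    m = defaultdict(float)
    for outcome, p in joint.items():
        m[outcome[index]] += p
    return dict(m)

def analyse(joint, label):
    X, C = marginal(joint, 0), marginal(joint, 1)
    h_x, h_c, h_joint = H(X), H(C), H(joint)
    print(f"{label}: H(X)={h_x:.2f}  H(C)={h_c:.2f}  "
          f"H(X|C)={h_joint - h_c:.2f}  H(X)-H(C)={h_x - h_c:.2f}")

# (a) Two independent fair coins: H(X|C) = 1, but H(X) - H(C) = 0.
coins = {(x, c): 0.25 for x in "HT" for c in "HT"}
analyse(coins, "independent coins")

# (b) Toy double-well case: X is a position, C is which well X falls in, so
# C is a deterministic function of X and H(X|C) = H(X) - H(C) does hold.
wells = {(0, "L"): 0.25, (1, "L"): 0.25, (2, "R"): 0.25, (3, "R"): 0.25}
analyse(wells, "well example     ")
```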

Bits of Optimization Can Only Be Lost Over A Distance

Good point!

It seems like it would be nice in Daniel's example for P(A|ref) to be the action distribution of an "instinctual" or "non-optimising" player. I don't know how to recover that. You could imagine something like an n-gram model of player inputs across the MMO.
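
To make the n-gram suggestion slightly more concrete, here is a toy sketch; the action alphabet, the logged sessions, and the choice of bigrams are all invented for illustration:

```python
# Toy sketch: estimate a "non-optimising" reference distribution over actions
# from a corpus of logged player inputs, using a bigram model.
from collections import Counter, defaultdict

# Hypothetical logged action sequences from many players.
corpus = [
    ["move", "move", "attack", "loot", "move"],
    ["move", "attack", "attack", "loot"],
    ["move", "move", "move", "loot"],
]

# Count bigrams (previous action -> next action) across all sessions.
bigrams = defaultdict(Counter)
for session in corpus:
    for prev, nxt in zip(session, session[1:]):
        bigrams[prev][nxt] += 1

def p_ref(next_action: str, prev_action: str) -> float:
    """P(A_t = next_action | A_{t-1} = prev_action) under the reference model."""
    counts = bigrams[prev_action]
    total = sum(counts.values())
    return counts[next_action] / total if total else 0.0

print(p_ref("attack", "move"))  # how often "move" is followed by "attack"
```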

Why I'm Worried About AI

Nitpick: to the extent you want to talk about the classic example, paperclip maximisers are as much meant to illustrate (what we would now call) inner alignment failure.

See Arbital on Paperclip ("The popular press has sometimes distorted the notion of a paperclip maximizer into a story about an AI running a paperclip factory that takes over the universe. [...] The concept of a 'paperclip' is not that it's an explicit goal somebody foolishly gave an AI, or even a goal comprehensible in human terms at all.") or a couple of EY tweet threads about it: 1, 2

Bits of Optimization Can Only Be Lost Over A Distance

I agree on the "reference" distribution in Daniel's example. I think it generally means "the distribution over the random variables that would obtain without the optimiser". Exactly what that distribution is, or where it comes from, is I think out of scope for John's (current) work; it's kind of the same question as where the probabilities come from in statistical mechanics.

How Does The Finance Industry Generate Real Economic Value?

Not quite! If there were no central bank, money’s value would not jump around aggressively and discontinuously

Accounting For College Costs

Full flights have more people on them. If you have 100 flights with one person each and 1 flight with 200 people, most of the people on those flights are on the 200-person flight.
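
Checking the arithmetic (flight-weighted versus passenger-weighted average size):

```python
# 100 one-person flights and 1 flight of 200 people.
flights = [1] * 100 + [200]

passengers = sum(flights)
avg_over_flights = passengers / len(flights)
# Average flight size experienced by a randomly chosen *passenger*:
avg_over_passengers = sum(size * size for size in flights) / passengers

print(passengers)            # 300 passengers total
print(avg_over_flights)      # ~2.97 people per flight
print(avg_over_passengers)   # ~133.7: most passengers are on the big flight
```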
