Logan Zoellner

Comments

No Free Lunch theorems only apply to a system at maximum entropy. Generally intelligent systems (e.g. AIXI) are possible because the simplicity prior is useful in our own universe (in which entropy is not at a maximum). Instrumental convergence isn't at the root of intelligence; simplicity is.

As an example, consider two tasks with no common subgoals: say, factoring large integers and winning at Go. Imagine we are trying to find an algorithm that will excel at both while running on a Turing machine. There are no real-world resources to acquire, so instrumental convergence isn't even relevant. However, an algorithm that assumes a simplicity prior (like AIXI) will still outperform one that doesn't (say, one that samples all possible Go-playing/number-factoring algorithms and then picks the one that performs best).
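
To make this concrete, here is a toy sketch (my own illustration, not anything taken from AIXI itself) of what a simplicity prior buys you: candidate programs are weighted by 2^-(description length) instead of treated uniformly, so a short program that fits the data almost as well beats a sprawling lookup table. The candidate names and numbers below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    description_length_bits: int  # stand-in for the program's Kolmogorov complexity
    data_likelihood: float        # how well the program explains the training games/factorizations

def simplicity_prior(c: Candidate) -> float:
    """Solomonoff-style prior: weight each candidate program by 2^-(description length)."""
    return 2.0 ** -c.description_length_bits

def posterior_weight(c: Candidate) -> float:
    """Posterior ~ prior * likelihood: prefer short programs that still fit the data."""
    return simplicity_prior(c) * c.data_likelihood

# Hypothetical candidates; the names and numbers are invented.
candidates = [
    Candidate("giant_lookup_table", description_length_bits=900, data_likelihood=0.99),
    Candidate("search_plus_number_theory", description_length_bits=200, data_likelihood=0.95),
]

best = max(candidates, key=posterior_weight)
print(best.name)  # the much shorter program wins despite a slightly worse fit
```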

What does "want" mean here? Why is game theory somehow especially bad or good? From a behaviorist point of view, how do I tell apart an angel from a devil that has been game-theoried into being an angel? Do AGIs have separate modules labeled "utility" and "game theory", such that making changes to the utility module is somehow good, but making changes to the game theory module is bad? Do angels have a utility function that just says "do the good", or does it just contain a bunch of traits that we think are likely to result in good outcomes?

> Or rather: Those who can create devils and verify that those devils will take particular actually-beneficial actions as part of a complex diabolical compact, can more easily create angels that will take those actually-beneficial actions unconditionally.

I don't understand the distinction between devils and angels here.  Isn't an angel just a devil that we've somehow game-theoried into helping us?

I'm generally not happy if I have meetings for hours on end.

Yep, I think this kills it. I have a sort of argument in my head that nothing can emit energy more slowly than a black hole does via Hawking radiation.
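
For a sense of just how slow that is (my own back-of-the-envelope numbers, not part of the argument above): the Hawking luminosity of a black hole of mass M is P = ħc^6 / (15360·π·G^2·M^2), so the heavier the hole, the more slowly it radiates.

```python
import math

# SI constants
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2

def hawking_power_watts(mass_kg: float) -> float:
    """Hawking luminosity P = hbar*c^6 / (15360*pi*G^2*M^2)."""
    return hbar * c**6 / (15360 * math.pi * G**2 * mass_kg**2)

solar_mass_kg = 1.989e30
print(hawking_power_watts(solar_mass_kg))        # ~9e-29 W for a solar-mass black hole
print(hawking_power_watts(1e6 * solar_mass_kg))  # ~9e-41 W for a supermassive one
```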

Assuming protons don't decay and that there's no Big Rip, I think you can get obnoxiously large numbers. Build a clock out of superconductors that consumes zero power until the top bit flips (since incrementing a counter is reversible, this should be possible). Then, when your "alarm" goes off, wake up the sentient being and let it have its last thought. The limit is now the number of bits B in your clock. Assume B is somewhere between 10**67 (roughly the number of atoms in the galaxy) and 10**80 (roughly the number of atoms in the visible universe). Your wake-up time is now on the order of 2**B ticks.
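
To put a number on that (rough arithmetic of my own, assuming a hypothetical one tick per second):

```python
import math

def rollover_decimal_digits(num_bits: float) -> float:
    """A num_bits counter rolls over after 2**num_bits ticks; return log10 of that tick count."""
    return num_bits * math.log10(2)

# One bit per atom, per the two bounding cases above.
for bits in (1e67, 1e80):
    print(f"{bits:.0e} bits -> alarm fires after ~10^{rollover_decimal_digits(bits):.1e} ticks")
# 1e+67 bits -> ~10^3.0e+66 ticks; 1e+80 bits -> ~10^3.0e+79 ticks
```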

Let's take a concrete example.  

Assume you have an AI that could get 100% on every Putnam exam. Do you think it would be reasonable or not to expect such an AI to also display superhuman performance at solving the Yang-Mills mass gap problem?

> This doesn't include working out advances in fundamental physics, or designing a fusion reactor, or making breakthroughs in AI research.

Why don't all of these fall into the self-play category? Physics, software, and fusion reactors can all be simulated.
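
As a toy sketch of the self-play point (invented for illustration, not a description of any real system): two copies of the same placeholder policy play a trivial game against each other, and every finished game becomes a fresh training example, so the data supply is bounded only by compute.

```python
import random

def policy(state: int) -> int:
    """Placeholder policy: random move. A real system would call the current model here."""
    return random.choice([0, 1])

def play_one_game(num_turns: int = 10) -> tuple[list[int], int]:
    """Both sides use `policy`; return (move history, toy winner label)."""
    state, history = 0, []
    for _ in range(num_turns):
        move = policy(state)
        history.append(move)
        state += move
    return history, state % 2  # arbitrary toy win condition

# Every pass through this loop mints new labeled data from compute alone.
dataset = [play_one_game() for _ in range(1000)]
print(len(dataset), "self-generated training examples")
```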

I would be mildly surprised if a sufficiently large language model couldn't solve all of Project Euler + Putnam + the MATH dataset.

I strongly doubt we live in a data-limited AGI timeline:

  1. Humans are trained using much less data than Chinchilla (see the rough arithmetic after this list)
  2. We haven't even begun to exploit forms of media other than text (YouTube alone is >2 OOM bigger)
  3. Self-play allows for literally limitless amounts of data
  4. Regularization methods mean data constraints aren't nearly as important as claimed
  5. In the domains where we have exhausted available data, ML models are already weakly superhuman
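
Rough numbers behind point 1 (my own ballpark estimates; the human exposure figure in particular is only a commonly cited rough range):

```python
# Rough comparison for point 1. The human exposure figures are ballpark
# estimates, not measurements.
chinchilla_tokens = 1.4e12                    # Chinchilla's training set, ~1.4T tokens
human_words_low, human_words_high = 1e8, 1e9  # rough lifetime language exposure of one human

print(f"Chinchilla saw roughly {chinchilla_tokens / human_words_high:,.0f}x to "
      f"{chinchilla_tokens / human_words_low:,.0f}x more text than a person does")
# -> roughly 1,400x to 14,000x
```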