ESRogs

Engineer at CoinList.co. Donor to LW 2.0.

Comments

Coherent decisions imply consistent utilities

That makes a lot of sense to me. Good points!

Coherent decisions imply consistent utilities

> It seems to me that there has been enough unanswered criticism of the implications of coherence theorems for making predictions about AGI that it would be quite misleading to include this post in the 2019 review.

If the post is the best articulation of a line of reasoning that has been influential in people's thinking about alignment, then even if there are strong arguments against it, I don't see why that means the post is not significant, at least from a historical perspective.

By analogy, I think Searle's Chinese Room argument is wrong and misleading, but I wouldn't argue that it shouldn't be included in a list of important works on philosophy of mind.

Would you (assuming you disagreed with it)? If not, what's the difference here?

(Put another way, I wouldn't think of the review as a collection of "correct" posts, but rather as a collection of posts that were important contributions to our thinking. To me this certainly qualifies as that.)

Coherent decisions imply consistent utilities

> On the review: I don't think this post should be in the Alignment section of the review, without a significant rewrite / addition clarifying why exactly coherence arguments are useful or important for AI alignment.

Assuming that one accepts the arguments against coherence arguments being important for alignment (as I tentatively do), I don't see why that means this shouldn't be included in the Alignment section.

The motivation for this post was its relevance to alignment. People think about it in the context of alignment. If subsequent arguments indicate that it's misguided, I don't see why that means it shouldn't be considered (from a historical perspective) to have been in the alignment stream of work (along with the arguments against it).

(Though I suppose if there's another category that's a more exact match, that seems like a fine reason to put it there rather than in the Alignment section.)

Does that make sense? Is your concern that people will see this in the Alignment section, and not see the arguments against the connection, and continue to be misled?

ESRogs's Shortform

The workflow I've imagined is something like:

  1. human specifies function in English
  2. AI generates several candidate code functions
  3. AI generates test cases for its candidate functions and runs them to compute the results
  4. AI formally analyzes its candidate functions and looks for simple interesting guarantees it can make about their behavior
  5. AI displays its candidate functions to the user, along with a summary of the test results and any guarantees about the input/output behavior, and the user selects the one they want (which they can also edit as necessary)

In this version, you go straight from English to code, which I think might be easier than going from English to a formal specification, because we have lots of examples of code with comments. (And I've seen demos of GPT-3 doing it for simple functions.)
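To make the shape of this concrete, here's a minimal sketch of steps 2, 3, and 5 wired together. The `generate_candidates` and `generate_tests` callables are hypothetical stand-ins for calls to a code model, and step 4's formal analysis is omitted entirely since it would need a separate tool; none of this is an existing API:

```python
from typing import Callable, Dict, List

def run_workflow(english_spec: str,
                 generate_candidates: Callable[[str], List[str]],
                 generate_tests: Callable[[str], List[Dict]]) -> None:
    """Toy driver for steps 2, 3, and 5 of the workflow above.

    The two callables are hypothetical stand-ins for a code model;
    step 4 (formal analysis) is omitted here.
    """
    candidates = generate_candidates(english_spec)  # step 2
    tests = generate_tests(english_spec)            # step 3
    for i, src in enumerate(candidates):
        namespace: Dict = {}
        exec(src, namespace)  # compile this candidate's source
        fn = next(v for k, v in namespace.items()
                  if callable(v) and not k.startswith("__"))
        passed = 0
        for t in tests:
            try:
                if fn(*t["args"]) == t["expected"]:
                    passed += 1
            except Exception:
                pass  # a crashing candidate just fails that test
        # step 5: display the candidate with a summary of its test results
        print(f"--- Candidate {i}: passed {passed}/{len(tests)} tests ---")
        print(src)

# Usage, with canned stand-ins where the model calls would go:
run_workflow(
    "return the absolute value of x",
    generate_candidates=lambda spec: [
        "def f(x):\n    return x if x >= 0 else -x",
        "def f(x):\n    return x",  # a plausible-but-wrong candidate
    ],
    generate_tests=lambda spec: [
        {"args": (3,), "expected": 3},
        {"args": (-3,), "expected": 3},
    ],
)
```

The glue is trivial; all the hard work lives in the model calls and in the omitted formal-analysis step.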

I think some (actually useful) version of the above is probably within reach today, or in the very near future.

ESRogs's Shortform

Mostly it just seems significant in the grand scheme of things. Our mathematics is going to become formally verified.

In terms of actual consequences, it's maybe not so important on its own. But putting a couple of pieces together (this formalization effort, Dan Selsam's work, GPT), it seems like we're going to get much better AI-driven automated theorem proving, formal verification, code generation, etc. relatively soon.

I'd expect these things to start meaningfully changing how we do programming sometime in the next decade.

ESRogs's Shortform

One of the most important things going on right now, that people aren't paying attention to: Kevin Buzzard is (with others) formalizing the entire undergraduate mathematics curriculum in Lean. (So that all the proofs will be formally verified.)

See one of his talks here: 
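For a sense of what "formally verified" means concretely, here's a toy lemma in Lean 3 style syntax (my own illustrative example, not taken from Buzzard's project or mathlib). The kernel checks the proof term against the statement, so a claim can't be merely plausible; it has to actually follow:

```lean
-- Toy example: a statement plus a machine-checked proof.
-- If the proof term didn't establish the statement, Lean would reject it.
theorem add_comm_example (a b : ℕ) : a + b = b + a :=
nat.add_comm a b
```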

Imitative Generalisation (AKA 'Learning the Prior')

FYI, it looks like the footnote links are broken. (Linking to "about:blank...")

Science in a High-Dimensional World

https://www.preposterousuniverse.com/blog/2016/07/18/space-emerging-from-quantum-mechanics/

Science in a High-Dimensional World

I'm not sure whether it's the standard view in physics, but Sean Carroll has suggested that we should think of locality in space as deriving from entanglement. (With space itself as basically an emergent phenomenon.) And I believe he considers this a driving principle in his quantum gravity work.

Fourth Wave Covid Toy Modeling

> Based on what you've said, Rt never goes below one

You're saying nostalgebraist says Rt never goes below 1?

I interpreted "R is always ~1 with noise/oscillations" to mean that it could go below 1 temporarily. And that seems consistent with the current London data. No?
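To make that concrete, here's a toy illustration (my own, not nostalgebraist's actual model) of how a series that oscillates around 1 with noise spends time on both sides of 1:

```python
import random

random.seed(0)
# R_t = 1 plus zero-mean noise: "~1 with noise/oscillations"
rt = [1.0 + random.gauss(0, 0.1) for _ in range(30)]
print(min(rt))            # dips below 1 on some days
print(sum(rt) / len(rt))  # but averages ~1 over the period
```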
