John Schulman

Comments

"Decision Transformer" (Tool AIs are secret Agent AIs)

Basically agree -- I think that a model trained by maximum likelihood on offline data is less goal-directed than one that's trained by an iterative process where you reinforce its own samples (aka online RL), but it's still somewhat goal-directed. It needs to simulate a goal-directed agent to do a good job at maximum likelihood. OTOH it's mostly concerned with covering all possibilities, so the goal-directed reasoning isn't emphasized. But with multiple iterations, the model can improve quality (-> more goal-directedness) at the expense of coverage/diversity.
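To make the contrast concrete (my notation, not anything from the linked post), here are the two gradient updates side by side: maximum likelihood pushes probability toward everything in the offline data, while the online-RL / REINFORCE-style update pushes probability toward the model's own high-reward samples, trading coverage for quality.

```latex
% Maximum likelihood on offline data D: cover whatever the data contains
\nabla_\theta \, \mathbb{E}_{x \sim \mathcal{D}}\big[\log p_\theta(x)\big]
  \;=\; \mathbb{E}_{x \sim \mathcal{D}}\big[\nabla_\theta \log p_\theta(x)\big]

% Iterative reinforcement of the model's own samples (online RL / REINFORCE):
% upweight samples with high reward R(x), at the expense of coverage/diversity
\nabla_\theta \, \mathbb{E}_{x \sim p_\theta}\big[R(x)\big]
  \;=\; \mathbb{E}_{x \sim p_\theta}\big[R(x)\, \nabla_\theta \log p_\theta(x)\big]
```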

The case for aligning narrowly superhuman models

Super clear and actionable -- my new favorite post on AF.

I also agree with it, and it's similar to what we're doing at OpenAI (largely thanks to Paul's influence).

Teaching ML to answer questions honestly instead of predicting human answers

D'oh, re: the optimum of the objective, I now see that the solution is nontrivial. Here's my current understanding.

Intuitively, the MAP version of the objective says: find me a simple model θ₁ such that there's a more-complex θ₂ with high likelihood under p(θ₂|θ₁) (which corresponds to sampling θ₂ near θ₁ until θ₂ satisfies the head-agreement condition) and high data-likelihood p(data|θ₂).
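Writing that sentence out in symbols (my formalization of the intuition above, not necessarily the exact objective from the post):

```latex
% p(theta_2 | theta_1): "sample theta_2 near theta_1 until the head-agreement condition holds"
\theta_1^{\,*} \;=\; \arg\max_{\theta_1} \Big[ \log p(\theta_1)
  \;+\; \max_{\theta_2} \big( \log p(\theta_2 \mid \theta_1)
  \;+\; \log p(\text{data} \mid \theta_2) \big) \Big]
```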

And this connects to the previous argument about world models and language as follows: we want θ₁ to contain half a world model, and we want θ₂ to contain the full world model, to have high data-likelihood (for one of the heads), and to have the two heads agree. Based on Step 1, the problem is still pretty underconstrained, but maybe that's resolved in Step 2.

Teaching ML to answer questions honestly instead of predicting human answers

Isn't the Step 1 objective (the unnormalized posterior log probability of (θ₁, θ₂)) maximized at θ₁ = θ₂ = argmax(L + prior)? Also, I don't see what this objective has to do with learning a world model.
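To spell out the reading behind the question (my notation; the post's actual objective may differ): if the Step 1 objective is the unnormalized posterior log probability

```latex
\mathcal{L}(\theta_1, \theta_2) \;=\; \log p(\theta_1)
  \;+\; \log p(\theta_2 \mid \theta_1)
  \;+\; \log p(\text{data} \mid \theta_2)
```

then the trivial-looking candidate is θ₁ = θ₂ = argmax_θ [log p(θ) + log p(data | θ)], i.e., the ordinary single-model MAP point, which is what the question is pointing at.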

The Case for a Journal of AI Alignment

I think this is a good idea. If you go ahead with it, here's a suggestion.

Reviewers often procrastinate for weeks or months. This is partly because doing a review takes an unbounded amount of time, especially for articles that are long or confusing. So instead of sending the reviewers a manuscript with a due date, book a 2-hour calendar event with the reviewers. The reviewers join a call or group chat, read the paper, and discuss it. They can also help clear up each other's confusions. They aim to complete the review by the end of the time window.

Multi-dimensional rewards for AGI interpretability and control

There's a decent amount of literature on using multiple rewards, though often it's framed as learning about multiple goals. Here are some off the top of my head:

The Horde (classic): http://www.ifaamas.org/Proceedings/aamas2011/papers/A6_R70.pdf
Universal Value Function Approximators: http://proceedings.mlr.press/v37/schaul15.html
Learning to Act by Predicting the Future: https://arxiv.org/abs/1611.01779
Temporal Difference Models: https://arxiv.org/abs/1802.09081
Successor Features: https://papers.nips.cc/paper/2017/hash/350db081a661525235354dd3e19b8c05-Abstract.html

Also see the discussion in Appendix D of the OpenAI Five paper about prediction heads, which were used mostly for interpretability/diagnostics: https://cdn.openai.com/dota-2.pdf

Why Neural Networks Generalise, and Why They Are (Kind of) Bayesian

The results in Neural Networks Are Fundamentally Bayesian are pretty cool -- it's clever how they were able to estimate the densities.

A couple thoughts on the limitations:

  • There are various priors over functions for which we can calculate the exact posterior (e.g., Gaussian processes; see the formulas after this list). However, doing Bayesian inference with these priors doesn't perform as well as neural networks on most datasets. So knowing that SGD is Bayesian is only interesting if we also know that the prior is interesting. I think the ideal theoretical result would be to show that SGD on neural nets is an approximation of Solomonoff Induction (or something like SI), and that the approximation gets better as the NNs get bigger and deeper. But I have yet to see any theory that connects neural nets/SGD to something like short programs.
  • If SGD works because it's Bayesian, then making it more Bayesian should make it work better. But according to https://arxiv.org/abs/2002.02405, that's not the case: lowering the temperature, or taking the MAP (= temperature 0), generalizes better than taking the full Bayesian posterior, as computed by an expensive MCMC procedure.
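For concreteness on both bullets (these are standard formulas, not specific to the post): the first relies on GP regression having a closed-form posterior, and the second refers to tempering the posterior, where temperature 1 is the full Bayes posterior and temperature 0 recovers the MAP solution.

```latex
% Exact GP regression posterior at test inputs X_*, given data (X, y),
% kernel matrices K, and observation noise \sigma^2:
\mu_* = K_{*X}\,(K_{XX} + \sigma^2 I)^{-1} y, \qquad
\Sigma_* = K_{**} - K_{*X}\,(K_{XX} + \sigma^2 I)^{-1} K_{X*}

% Tempered ("cold") posterior at temperature T:
% T = 1 is the Bayes posterior; T -> 0 concentrates on the MAP solution.
p_T(\theta \mid \mathcal{D}) \;\propto\; \big[\, p(\mathcal{D} \mid \theta)\, p(\theta) \,\big]^{1/T}
```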

Debate Minus Factored Cognition

I might be missing some context here, but I didn't understand the section "No Indescribable Hellworlds Hypothesis" or what hellworlds have to do with debate.

Debate update: Obfuscated arguments problem

OK, I guess I'm a bit unclear on the problem setup and how it involves a training phase and a deployment phase.
