Stella Biderman


Yudkowsky and Christiano discuss "Takeoff Speeds"

For Sanh et al. (2021), we were able to negotiate access to preliminary numbers from the BIG Bench project and run the T0 models on it. However, the authors of Sanh et al. and the authors of BIG Bench are different groups of people.

Yudkowsky and Christiano discuss "Takeoff Speeds"

What makes you say BIG Bench is a joint Google / OpenAI project? I'm a contributor to it and have seen no evidence of that.

What exactly is GPT-3's base objective?

I think that 4 is confused about what people mean when they talk about "the GPT-3 training data." If someone said "there are strings of words found in the GPT-3 training data that GPT-3 never saw" I would tell them that they don't know what the words in that sentence mean. When an AI researcher speaks of "the GPT-3 training data" they are talking about the data that GPT-3 actually saw. There's data that OpenAI collected which GPT-3 didn't see, but that's not what the words "the GPT-3 training data" refer to.

What exactly is GPT-3's base objective?

Or is it "Predict the next word, supposing what you are reading is a random-with-the-following-weights sample from dataset D? [where D is the dataset used to train GPT-3]

This is the correct answer.

The problem with these last two answers is that they make it undefined how well GPT-3 performs on the base objective on any prompt that wasn't in D, which then rules out pseudo-alignment by definition.

This is correct, but non-problematic in my mind. If data wasn’t in the training dataset, then yes there is no fact of the matter as to what training signal GPT-3 received when training on it. We can talk about what training signal GPT-3 counterfactually would have received had it been trained on this data, but there is no answer to the question in the actual world.
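For concreteness, here is a minimal sketch of what "the base objective over dataset D" means in this discussion, written in PyTorch with a toy vocabulary and a tiny stand-in model; none of the names or numbers below come from GPT-3's actual training setup, they are purely illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE = 100
torch.manual_seed(0)

# D: the token sequences the model is actually trained on (toy stand-in).
train_set = [torch.randint(0, VOCAB_SIZE, (16,)) for _ in range(8)]

# Tiny stand-in for a language model: embedding + linear head
# (it predicts token t+1 from token t alone, unlike a real transformer).
model = nn.Sequential(nn.Embedding(VOCAB_SIZE, 32), nn.Linear(32, VOCAB_SIZE))

def lm_loss(seq: torch.Tensor) -> torch.Tensor:
    """Average negative log-likelihood of each token given the one before it."""
    logits = model(seq[:-1])
    return F.cross_entropy(logits, seq[1:])

# The "base objective" is an average over D and nothing else: no loss is
# ever computed for a prompt that does not appear in D.
loss = torch.stack([lm_loss(seq) for seq in train_set]).mean()
loss.backward()
print(float(loss))
```

The point is that the loss is only ever evaluated on sequences drawn from D, so the training procedure itself assigns no value to prompts that never appear in D; anything we say about such prompts is counterfactual.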

Discussion with Eliezer Yudkowsky on AGI interventions

My thinking is that prosaic alignment can also apply to non-superintelligent systems. If multimodal GPT-17 + RL = superintelligence, then whatever techniques are involved in aligning that system would probably apply to multimodal GPT-3 + RL, despite the latter not being a superintelligence. Superintelligence is not a prerequisite for being alignable.

Discussion with Eliezer Yudkowsky on AGI interventions

If superintelligence is approximately multimodal GPT-17 plus reinforcement learning, then understanding how GPT-3-scale algorithms function is exceptionally important to understanding superintelligence.

Also, if superintelligence doesn’t happen then prosaic alignment is the only kind of alignment.

Discussion with Eliezer Yudkowsky on AGI interventions

Strong upvote.

My original exposure to LW drove me away in large part because of the issues you describe. I would also add that (at least circa 2010) you needed to have a near-deistic belief in the anti-messianic emergence of some AGI so powerful that it can barely be described in terms of human notions of "intelligence."

Yes, new information absolutely exists. Thinking about new information in some kind of absolute sense ("has anyone else ever had this thought?") is the wrong approach in my mind. What we are really interested in is new information relative to an established set of knowledge. Information theory tells us that there's a maximum amount of information that can be encoded in k bits, so (at least as long as our system is significantly smaller than the universe) we can find information that's not encoded in the existing system.

Whether GPT-3 is likely to succeed at doing this is a statistical and empirical question, but at a minimum the answer to the title question is a resounding “yes.”
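To spell out the bound being invoked (standard counting, nothing specific to GPT-3; k here is just the size in bits of the existing system's description):

$$H(\text{existing system}) \le \log_2\!\left(2^{k}\right) = k \ \text{bits},$$

so any source whose entropy exceeds k bits must contain information that is not already encoded in that system, which is all that "new information relative to an established set of knowledge" requires.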

NVIDIA and Microsoft releases 530B parameter transformer model, Megatron-Turing NLG

It’s interesting how Microsoft and NVIDIA are plugging EleutherAI and open source work in general. While they don’t reference EleutherAI by name, the Pile dataset used as the basis for their training data and the LM Evaluation Harness mentioned in the post are both open source efforts by EleutherAI. EleutherAI, in return, is using the Megatron-DS codebase as the core of their GPT-NeoX model architecture.

I think that this is notable because it's the first time we've really seen powerful AI research orgs sharing infra like this. Typically everyone wants to do everything bespoke and build their work entirely on their own. This is good for branding but obviously a lot more work.

I wonder if MSFT and NVIDIA tried to make a better dataset than the Pile on their own and failed.

The LessWrong Team is now Lightcone Infrastructure, come work with us!

Why is this problem better solved by systematically underpaying everyone as opposed to firing people who act “in favor of what advances their own power” or who promote infighting?
