Simon Möller

Comments
Llama We Doing This Again?
Simon Möller · 2y

I flat out do not believe them. Even if Llama-2 was unusually good, the idea that you can identify most unsafe requests with only a 0.05% false positive rate is absurd.

Given the quote in the post, this is not really what they claim. They say (bold mine):

However, false refusal is overall rare—approximately 0.05%—on the helpfulness dataset

So I assume it might be true on that dataset, even if "in the wild" it's not.

Anthropic Observations
Simon Möller · 2y

Which brings us back to the central paradox: If the thesis that you need advanced systems to do real alignment work is true, why should we think that cutting edge systems are themselves currently sufficiently advanced for this task?

I really like this framing and question.

My model of Anthropic says their answer would be: we don't know exactly which techniques will work, for how long, or how fast capabilities will evolve. So we will continuously build frontier models and align them.

This assumes at least a chance that we could iteratively work our way through this. I think you are very skeptical of that. To the degree that we cannot, this approach (and to a large extent OpenAI's) seems pretty doomed.

Reflective journal entries using GPT-4 and Obsidian that demand less willpower.
Simon Möller · 2y

I fully agree. I tried using ChatGPT for some coaching, but kept it high level and in areas where I wouldn't be too bothered if it showed up on the internet.

I think using the API, rather than ChatGPT, is better. See e.g. https://techcrunch.com/2023/03/01/addressing-criticism-openai-will-no-longer-use-customer-data-to-train-its-models-by-default/: 

Starting today, OpenAI says that it won’t use any data submitted through its API for “service improvements,” including AI model training, unless a customer or organization opts in. In addition, the company is implementing a 30-day data retention policy for API users with options for stricter retention “depending on user needs,” and simplifying its terms and data ownership to make it clear that users own the input and output of the models.

I was actually thinking that having an Obsidian plugin for this sort of thing would be really neat.
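
For concreteness, a minimal sketch of what the API route could look like, assuming the openai Python package (v1+); the model name and system prompt here are placeholders, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def reflect(entry: str) -> str:
    """Send a journal entry and get one reflective follow-up question back."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a reflective journaling coach. Ask one short, "
                    "probing follow-up question about the entry."
                ),
            },
            {"role": "user", "content": entry},
        ],
    )
    return response.choices[0].message.content
```

Per the policy quoted above, data sent this way is not used for training by default, which is the main advantage over pasting entries into ChatGPT.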

Reflective journal entries using GPT-4 and Obsidian that demand less willpower.
Simon Möller · 2y

Couple of years? I think we are talking about months here. I guess the biggest bottleneck would be getting all your notes into the LLM context. But I doubt you really need that: you can probably guess which few notes matter for what you are currently working on and add those as context (something like the sketch below).
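
A toy illustration of that idea; the keyword count is a made-up stand-in for whatever relevance heuristic you would actually use:

```python
from pathlib import Path

def pick_context_notes(vault: Path, topic: str, limit: int = 3) -> str:
    """Naively pick the few vault notes that mention the topic most often."""
    scored = []
    for note in vault.glob("**/*.md"):
        text = note.read_text(encoding="utf-8")
        score = text.lower().count(topic.lower())
        if score > 0:
            scored.append((score, note.name, text))
    scored.sort(reverse=True)  # most mentions first
    return "\n\n---\n\n".join(
        f"# {name}\n{text}" for _, name, text in scored[:limit]
    )
```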

"Publish or Perish" (a quick note on why you should try to make your work legible to existing academic communities)
Simon Möller · 2y

AF: Alignment Forum

Simon Möller's Shortform
Simon Möller · 2y

"Human-level AGI" is not a useful concept (any more). I think many people equate human-level AGI and AGI (per definition) as a system (or a combination of systems) that can accomplish any (cognitive) task at least as well as a human.

That's reasonable, but having the "human-level" in that term seems misleading to me. It anchors us to the idea that the system will be "somewhat like a human", which it won't be. So let's drop the qualifier and just talk about AGI.

Comparing artificial intelligence to human intelligence was somewhat meaningful when we were far away from it along many dimensions to gesture in a general direction.

But large language models are already superhuman on several dimensions (e.g. know more about most topics than any single human, think "faster") and inferior on others (e.g. strategic planning, long-term coherence). By the time they are at human level on all dimensions, they will be super-human overall.

My thoughts on OpenAI's alignment plan
Simon Möller · 2y

This post is great; strongly upvoted. I just spent a day or so thinking about OpenAI's plan and reading other people's critiques, and this post points out problems with the plan at what I think is the right level of detail. The tone also seems unusually constructive.

My thoughts on OpenAI's alignment plan
Simon Möller · 2y

Upvoted, since I like how literally you went through the plan. I think we need to think about and criticize both the literal version of the plan and the way it intersects with reality.

The methods you are trying are all known to fail at sufficiently high levels of intelligence. But if these are your only ideas, it is possible they get you far enough for GPT-5 to output a better idea.

To me this seems like a key point that many other critiques miss by focusing on specific details.

Posts

Why AI Safety is Hard (2y)
Simon Möller's Shortform (2y)
Situational awareness in Large Language Models (2y)