Sequences

AXRP - the AI X-risk Research Podcast

Comments

Is there going to be some sort of Slack or Discord for attendees?

What are the two other mechanisms of action?

In my post, I didn't require the distribution over meanings of words to be uniform. It could be any distribution you wanted - it just resulted in the prior ratio of "which utterance is true" being 1:1.
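For concreteness, here's a minimal numeric sketch under an assumed toy setup (mine, not from the post): so long as each candidate meaning is as likely to be made true as false by the world, the law of total probability gives 1:1 prior odds on the utterance no matter how non-uniform the distribution over meanings is.

```python
# Toy model (hypothetical): a word has three candidate meanings drawn from
# a deliberately non-uniform distribution; conditional on any fixed
# meaning, the world satisfies it with probability 1/2.
meaning_probs = [0.7, 0.2, 0.1]          # arbitrary, non-uniform
p_true_given_meaning = [0.5, 0.5, 0.5]   # symmetric truth-values

# Law of total probability: P(utterance true) = sum_m P(m) * P(true | m).
p_true = sum(p * t for p, t in zip(meaning_probs, p_true_given_meaning))
print(p_true)                 # 0.5
print(p_true / (1 - p_true))  # prior odds 1.0, i.e. 1:1
```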

Is this just the thing where evidence is theory-laden? Like, for example, how the evidentiary value of the WHO report on the question of COVID origins depends on how likely one thinks it is that people would effectively cover up a lab leak?
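To make the dependence explicit, a toy calculation with made-up numbers: the likelihood ratio the report provides against a lab leak is strong only if you think an actual leak would probably have been exposed.

```python
# Hypothetical numbers, purely for illustration of theory-laden evidence:
# how much "the WHO report found nothing" updates you depends on your
# prior belief that a real leak would be covered up effectively.
p_clean_report_given_no_leak = 1.0  # assume a clean report is certain if no leak

for p_coverup in (0.1, 0.5, 0.9):
    # The report comes back clean despite a leak only if the cover-up works.
    p_clean_report_given_leak = p_coverup
    lr = p_clean_report_given_leak / p_clean_report_given_no_leak
    print(f"P(effective cover-up) = {p_coverup}: LR for leak = {lr:.1f}")
```

With P(effective cover-up) = 0.1 the clean report is roughly 10:1 evidence against a leak; at 0.9 it is nearly uninformative, even though the observation itself is identical.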

To be clear, this is an equivalent way of looking at normal prior-ful inference, and doesn't actually solve any practical problem you might have. I mostly see it as a demonstration of how you can shove everything into stuff that gets expressed as likelihood functions.
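A one-screen sketch of that repackaging (numbers are made up): the prior enters as just one more likelihood-ratio factor attached to a pseudo-observation, so starting from flat 1:1 odds yields the same posterior as ordinary prior-ful Bayes.

```python
# Equivalence sketch: a prior can be absorbed as a likelihood factor on a
# "pseudo-observation", leaving a flat 1:1 starting point.
prior_odds = 3.0        # hypothetical prior odds for H over not-H
likelihood_ratio = 0.5  # hypothetical P(data | H) / P(data | not-H)

# Ordinary prior-ful Bayes: posterior odds = prior odds * likelihood ratio.
posterior_odds = prior_odds * likelihood_ratio

# Likelihoods-only restatement: start at 1:1 and multiply in the prior as
# if it were the likelihood ratio of an extra observation.
posterior_odds_via_likelihoods = 1.0 * prior_odds * likelihood_ratio

assert posterior_odds == posterior_odds_via_likelihoods  # same answer
```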

Why wouldn't this construction work over a continuous space?

Thanks for finding this! Will link it in the transcript.

Sorry, it will be a bit before the video uploads. I'll hide the link until then.

Proposal: merge with the separate tag "AI Control"
