Comments

Liron

By your logic, if I ask you a totally separate question, "What's the probability that a parent's two kids are both boys?", would you answer 1/3? Because the correct answer should be 1/4, right? So something about your preferred methodology isn't robust.
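The arithmetic here can be checked by enumerating the equally likely sibling pairs (a sketch of the standard setup, not anything from the original thread): unconditionally, both-boys has probability 1/4; it only becomes 1/3 once you condition on "at least one boy".

```python
from itertools import product

# Enumerate equally likely (older, younger) sibling pairs: BB, BG, GB, GG
families = list(product("BG", repeat=2))

# Unconditional probability that both kids are boys
p_both = sum(f == ("B", "B") for f in families) / len(families)
print(p_both)  # 0.25

# Conditioning on "at least one boy" drops GG, leaving 3 cases
at_least_one_boy = [f for f in families if "B" in f]
p_both_given_boy = sum(f == ("B", "B") for f in at_least_one_boy) / len(at_least_one_boy)
print(p_both_given_boy)  # 0.333...
```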

Liron

I agree that frequentists are flexible about their approach in order to get the right answer. But I think your version of the problem highlights just how flexible they have to be (i.e., mental gymnastics) compared to just explicitly being Bayesian all along.

Liron

In scenario B, where a random child runs up, I wonder if a non-Bayesian might prefer that you just eliminate (girl, girl) and say that the probability of two boys is 1/3?

In Puzzle 1 in my post, the non-Bayesian has an interpretation that's still plausibly reasonable, but in your scenario B it seems like they'd be clowning themselves to take that approach.
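The gap in scenario B can be checked by simulation (a sketch, assuming scenario B means a uniformly random one of the two children runs up and turns out to be a boy): the naive "eliminate (girl, girl)" move predicts 1/3, but the random-child observation process actually gives 1/2.

```python
import random

random.seed(0)
both_boys = 0
saw_a_boy = 0

for _ in range(100_000):
    kids = [random.choice("BG"), random.choice("BG")]
    runner = random.choice(kids)  # a uniformly random child runs up
    if runner == "B":
        saw_a_boy += 1
        if kids == ["B", "B"]:
            both_boys += 1

# Naive reasoning (drop GG, keep BB/BG/GB equally weighted) says 1/3;
# the simulation converges to 1/2 instead.
print(both_boys / saw_a_boy)  # ≈ 0.5
```

The difference is entirely in the observation process: a boy running up is twice as likely in a two-boy family as in a one-boy family, which is exactly the likelihood the naive elimination throws away.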

So I think we're on the same page that whenever things get real/practical/bigger-picture, then you gotta be Bayesian.

Liron

Thanks for this post.

I'd love to have a regular (weekly/monthly/quarterly) post that's just "here's what we're focusing on at MIRI these days".

I respect and value MIRI's leadership on the complex topic of building understanding and coordination around AI.

I spend a lot of time doing AI social media, and I try to promote the best recommendations I know to others. Whatever thoughts MIRI has would be helpful.

Given that I think about this less often and less capably than you folks do, it seems like there's a low-hanging-fruit opportunity for people like me to stay more in sync with MIRI. My show (Doom Debates) isn't affiliated with MIRI, but as long as I continue to have no particular disagreement with MIRI, I'd like to make sure I'm pulling in the same direction as you all.

Answer by Liron

I’ve heard MIRI has some big content projects in the works, maybe a book.

FWIW I think having a regular stream of lower-effort content that a somewhat mainstream audience consumes would help to bolster MIRI’s position as a thought leader when they release the bigger works.

Liron

I'd ask: if one day your God stopped existing, would anything observably change?

Seems like a meaningless concept: a node in the causal model of reality that has no power to constrain expectation, but one the person likes because knowing the node exists in their own belief network brings them emotional reward.

Liron

When an agent is goal-oriented, they want to become more goal-oriented, and to maximize the goal-orientedness of the universe with respect to their own goal.

Because expected value reasoning tells us that the more resources you control, the more robustly you can maximize your probability of success in the face of whatever comes at you, and the higher your maximum possible utility is (if your utility function doesn't have an easy-to-hit max score).

“Maximizing goal-orientedness of the universe” was how I phrased the prediction that conquering resources involves having them aligned to your goal / aligned agents helping you control them.

Liron

I'm happy to have that kind of debate.

My position is "goal-directedness is an attractor state that is incredibly dangerous and uncontrollable if it's somewhat beyond human-level in the near future".

The form of those arguments seems to be "technically it doesn't have to be". But realistically it will be, lol. Not sure how much more there will be to say.

Liron

Thanks. Sure, I’m always happy to update on new arguments and evidence. The most likely way I see possibly updating is to realize the gap between current AIs and human intelligence is actually much larger than it currently seems, e.g. 50+ years as Robin seems to think. Then AI alignment research has a larger chance of working.

I also might lower my P(doom) if international governments start treating this like the emergency it is and do their best to coordinate on a pause. Though unfortunately even that probably only buys a few years of time.

Finally I can imagine somehow updating that alignment is easier than it seems, or less of a problem to begin with. But the fact that all the arguments I’ve heard on that front seem very weak and misguided to me, makes that unlikely.
