brglnd

Quadratic Voting and Collusion

The "collusion" issue leads to a state of affairs that two political groups can gain more political power if they can organize and get along well enough to actively coordinate. Why should two groups have more power just because they can cooperate?

Ngo and Yudkowsky on alignment difficulty

[I may be generalizing here and I don't know if this has been said before.]

It seems to me that Eliezer's models are a lot more specific than those of people like Richard. While Richard may put some credence on superhuman AI being "consequentialist" by default, Eliezer has certain beliefs about intelligence that make that outcome seem extremely likely to him.

I think Eliezer's style of reasoning, which relies on specific, thought-out models of AI, makes him more pessimistic than others in EA. Others believe there are many ways AGI scenarios could play out and are generally uncertain, whereas Eliezer's specific models make some scenarios far more likely in his view.

There are many valid theoretical arguments for why we are doomed, but maybe other EAs put less credence in them than Eliezer does.

Prioritization Research for Advancing Wisdom and Intelligence

FYI, the link at the top of the post isn't working for me.