Comments

arxhy · 3y · 10

That's the idea behind the post, yeah. I am referring more to the general culture of the site, since it is relevant here.

arxhy · 3y · 10

I find it strange that our response to "politics is the mindkiller" has been less "how can we think more rationally about politics?" and more "let's avoid politics". If feasible, the former would pay off long-term.

Of course, a lot of more general ideas pertaining to rationality can be applied to politics too. But if politics is still the mindkiller, this may not be enough -- more techniques may be needed to deal with the affective override that politics can cause.

arxhy · 3y · 10

Listeners are probably not assuming that the person they are listening to is being honest.

arxhy · 3y · 20

Interesting, thanks for the reply. I agree that it could develop superhuman ability in some domains, even if that ability doesn't manifest in the model's output, so that seems promising (although not very scalable). I haven't read up on mesa-optimizers yet.

arxhy · 3y · 40

I have very little knowledge of AI or the mechanics behind GPT, so this is more of a question than criticism:

If a scaled up GPT-N is trained on human-generated data, how would it ever become more intelligent than the people whose data it is trained on?

arxhy · 3y · 10

Or maybe good enough is the enemy of better. Regardless, the point's been made.

arxhy · 3y · 30

Perfect is the enemy of good; good enough is also the enemy of good.

arxhy · 3y · 60

In my case, I probably wouldn't give my life for less than the lives of a billion strangers, so that ratio would have to be extremely high, to the point where it's probably incalculable.

Why?
