wunan


Comments

Tweet markets for impersonal truth tracking?

Do you have a source for the 80% figure?

Seek Upside Risk

I agree that this is a really important concept. Two related ideas are asymmetric risk and barbell strategies, both of which Nassim Nicholas Taleb writes about extensively.

Where is human level on text prediction? (GPTs task)

What is that formula based on? I can't find anything by googling. I thought it might be from the OpenAI paper Scaling Laws for Neural Language Models, but I can't find it there with Ctrl+F.

Where is human level on text prediction? (GPTs task)

In Steve Omohundro's presentation on GPT-3, he compares the perplexity of some different approaches. GPT-2 scores 35.8, GPT-3 scores 20.5, and humans score 12. Sources are linked on slide 12.
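For intuition, perplexity can be converted to average bits of surprise per token via a base-2 logarithm. A minimal sketch (the conversion is standard; the three perplexity figures are the ones quoted above, and the bits-per-token values are just derived from them, not taken from the slides):

```python
import math

def perplexity_to_bits(ppl: float) -> float:
    """Average bits of surprise per token for a model with this perplexity."""
    return math.log2(ppl)

# Perplexity figures quoted from Omohundro's presentation on GPT-3.
for name, ppl in [("GPT-2", 35.8), ("GPT-3", 20.5), ("human", 12.0)]:
    print(f"{name}: perplexity {ppl} ~ {perplexity_to_bits(ppl):.2f} bits/token")
```

On this scale, GPT-3's improvement over GPT-2 (about 0.8 bits/token) is roughly comparable to the remaining gap between GPT-3 and the human estimate.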

Escalation Outside the System

People are literally looting businesses and NPR is publishing interviews supporting it. They're not just interviewing people who support it -- the interviewer also supports it. What makes you think these aren't actual policy proposals?

They may only propose it for deep social-signalling reasons, as you say, but that doesn't make it any less of an actual proposal. Historically, people have shown they are willing to follow through on mass murder.

Are we in an AI overhang?

In the Gwern quote, what does "Even the dates are more or less correct!" refer to? Which dates were predicted for what?

Are we in an AI overhang?

This was mentioned in the "Other Constraints" section of the original post:

Inference costs. The GPT-3 paper (§6.3), gives .4kWh/100 pages of output, which works out to 500 pages/dollar from eyeballing hardware cost as 5x electricity. Scaling up 1000x and you're at $2/page, which is cheap compared to humans but no longer quite as easy to experiment with
I'm skeptical of this being a binding constraint too. $2/page is still very cheap.
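The quoted arithmetic can be sanity-checked directly. A minimal sketch, assuming an electricity price of $0.10/kWh (my assumption; the quote only gives the energy figure and the 5x hardware multiplier):

```python
# Rough sanity check of the quoted inference-cost arithmetic.
KWH_PER_100_PAGES = 0.4          # from the GPT-3 paper (section 6.3), per the quote
ELECTRICITY_USD_PER_KWH = 0.10   # assumed electricity price, not from the post
HARDWARE_MULTIPLIER = 5          # "hardware cost as 5x electricity", eyeballed in the quote

cost_per_page = (KWH_PER_100_PAGES / 100) * ELECTRICITY_USD_PER_KWH * HARDWARE_MULTIPLIER
pages_per_dollar = 1 / cost_per_page

print(f"{pages_per_dollar:.0f} pages/dollar")          # ~500 pages/dollar
print(f"${cost_per_page * 1000:.2f}/page at 1000x")    # ~$2/page
```

Under that electricity-price assumption, the numbers reproduce the post's figures exactly: $0.002/page (500 pages/dollar) today, and $2/page after a 1000x scale-up.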

My experience with the "rationalist uncanny valley"

It might help to think of rationality and EA less as parts of your identity and more as just some things you're interested in. You could pursue those interests and become a more capable person even if you never read anything from the rationality community again. It might also help to read people who have achieved great things and had great ideas without being influenced by the rationality community (which, by the way, describes most people who have achieved great things and had great ideas). E.g. Paul Graham's essays are good (he's kind of LW-adjacent, but was writing essays long before the rationality community was a thing): http://paulgraham.com/articles.html

I think the rationality community is great, it has hugely influenced me, and I'm glad I found it, but I'm pretty sure I'd be doing great stuff even if I had never found it.

Life can be better than you think

I remember reading SquirrelInHell's posts earlier, and I'm really sorry to hear that. Is there any more public information about the circumstances of the suicide? I couldn't find anything with Google.