Biology-Inspired AGI Timelines: The Trick That Never Works

If it's a normal distribution, what's the standard deviation?

Is it better to fix a problem directly, or start again so the problem never happens?

For software development, rewriting the code from scratch is typically a bad idea. It may be helpful to see how well the arguments in that article apply to your domain.

Discussion with Eliezer Yudkowsky on AGI interventions

Context for anyone who's not aware:

Nerd sniping is a slang term that describes a particularly interesting problem presented to a nerd, often a physicist, tech geek, or mathematician. The nerd stops all activity to devote attention to solving the problem, often at his or her own peril.

Here's the xkcd comic which coined the term.

Discussion with Eliezer Yudkowsky on AGI interventions

If MIRI hasn't already, it seems to me like it'd be a good idea to try reaching out. It also seems worth being at least a little bit strategic about it, as opposed to, say, a cold email.


+1 especially to this -- surely MIRI or a similar x-risk org could secure a warm introduction to potential top researchers through their network, via someone willing to vouch for them.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

On one hand, meditation -- when done without all the baggage, hypothetically -- seems like a useful tool. On the other hand, it inevitably invites all that baggage, because the baggage is in the books, in the practicing communities, etc.


I think meditation should be treated similarly to psychedelics -- even for meditators who don't think of it in terms of anything supernatural, it can still have very large and unpredictable effects on the mind. The more extreme the style of meditation (e.g. silent retreats), the more likely this sort of thing is.

Any subgroups heavily using meditation seem likely to have the same problems as the ones Eliezer identified for psychedelics/woo/supernaturalism.

Steelman arguments against the idea that AGI is inevitable and will arrive soon

Possible small correction: GPT-2 to GPT-3 was 16 months, not 6. The GPT-2 paper was published in February 2019 and the GPT-3 paper was published in June 2020.
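The month gap can be checked directly from the two publication dates given above:

```python
from datetime import date

gpt2 = date(2019, 2, 1)  # GPT-2 paper: February 2019
gpt3 = date(2020, 6, 1)  # GPT-3 paper: June 2020

# Whole-month difference between the two publication dates
months = (gpt3.year - gpt2.year) * 12 + (gpt3.month - gpt2.month)
print(months)  # 16
```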

Effective Altruism Virtual Programs Nov-Dec 2021

I can't tell from the descriptions, but it seems like these programs have been run before -- is that right? Are there any reviews or other writeups about participants' experiences anywhere?

Raj Thimmiah's Shortform

That would make a good monthly open thread.

wunan's Shortform

If compute is the main bottleneck to AI progress, then one goalpost to watch for is when AI is able to significantly increase the pace of chip design and manufacturing. After writing the above, I searched for work being done in this area and found this article. If these approaches can actually speed up certain steps in the process from weeks to days, will that increase the pace of Moore's law? Or is Moore's law mainly bottlenecked by problems that will be particularly hard to apply AI to?

alenglander's Shortform

Do you have some examples? I've noticed that rationalists tend to ascribe good faith to outside criticisms too often, to the extent that obviously bad-faith criticisms are treated as invitations for discussion. For example, there was an article about SSC in the New Yorker that came out after Scott deleted SSC but before the NYT article. Many rationalists failed to recognize the New Yorker article as a hit piece, which I believe it clearly was -- even more clearly now that the NYT article has come out.
