User Profile


Recent Posts


Changes in AI Safety Funding


The true degree of our emotional disconnect


Recent Comments

It doesn't really matter whether the AI uses its full computational capacity. If the AI has a 100000 times larger capacity (which is again a conservative lower bound) and it only uses 1% of it, it will still be 1000 times as smart as the human at full capacity.

AGI's algorithm will be better, because it...
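The arithmetic in the comment above checks out; a minimal sketch (the capacity figure and the 1% utilization are the commenter's assumptions, not measured values):

```python
# Commenter's assumed figures (illustrative, not empirical):
human_capacity = 1.0                      # normalize human compute to 1
ai_capacity = 100_000 * human_capacity    # "100000 times larger capacity"
ai_utilization = 0.01                     # AI uses only 1% of its capacity

effective_ai = ai_capacity * ai_utilization
ratio = effective_ai / human_capacity
print(ratio)  # 1000.0 — still 1000x the human's full capacity
```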


"[the brain] is sending signals at a millionth the speed of light, firing at 100 Hz, and even in heat dissipation [...] 50000 times the thermodynamic minimum energy expenditure per binary switch operation"

AI will be quantitatively smart...
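The quoted figures imply a lot of hardware headroom; a quick back-of-the-envelope using the comment's numbers (the silicon clock rate is an illustrative assumption, not from the quote):

```python
c = 3.0e8                 # speed of light in m/s
neural_signal = c / 1.0e6 # "a millionth the speed of light" -> 300 m/s
neuron_rate = 100.0       # Hz, quoted neural firing rate
cpu_clock = 3.0e9         # Hz, a typical modern CPU clock (assumed for comparison)
landauer_factor = 50_000  # quoted multiple of the thermodynamic minimum per switch

print(neural_signal)            # 300.0 m/s for neural signaling
print(cpu_clock / neuron_rate)  # ~3e7x clock-rate headroom over neurons
```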

"Less than a third of students by their own self-appointed worst-case estimate *1."

missing a word here, I think.

re-live. Although I'd rather live the same amount of time from now onward.

First question: I know you admire Trump's persuasion skills, but what I want to know is why you think he's a good person/president etc.

Answer: [talks about Trump's persuasion skills]

Yeah, okay.

This is an exceptionally well-reasoned article, I'd say. Particular props for the appropriate amount of uncertainty.

Well, if you put it like that I fully agree. Generally, I believe that "if it doesn't work, try something else" isn't followed as often as it should. There's probably a fair number of people who'd benefit from following this article's advice.

I don't quite know how to make this response more sophisticated than "I don't think this is true". It seems to me that whether classes or lone-wolf improvement is better is a pretty complex question and the answer is fairly balanced, though overall I'd give the edge to lone-wolf.

I don't know what our terminal goals are (more precisely than "positive emotions"). I think it doesn't matter insofar as the answer to "what should we do" is "work on AI alignment" either way. Modulo that, yeah there are some open questions.

On the thesis of suffering requiring higher-order cogniti...