It’s a mineral (rock). It’s not ductile at all.

If I make a post or comment that starts from the assumption that we are not doomed, and in fact ignores AI x-risk entirely, where would that stand under these moderation guidelines? My reading of the post is that in such a context I would be redirected to read the Sequences rather than engaged with.

(Notably, the post you link to doesn't dispute AI risk; it just argues for a long timeline. The author explicitly states she agrees with EY on AI x-risk.)

I think you nailed it. The crypto developer petertodd wrote OpenTimestamps, and his handle often appears next to commitment hashes related to that project.
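For context, a commitment hash is just a cryptographic digest published in place of the data it commits to. A minimal Python sketch of the idea (illustrative only; OpenTimestamps' actual proof format is more involved and anchors digests in the Bitcoin blockchain):

```python
import hashlib

def commitment_hash(data: bytes) -> str:
    """Return a hex digest that commits to `data` without revealing it."""
    return hashlib.sha256(data).hexdigest()

# Publish the digest now; reveal the document later.
document = b"contents to be timestamped"
print(commitment_hash(document))

# Verification: anyone holding the original bytes recomputes the digest
# and checks that it matches the published commitment.
```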

Please define what you mean by “AGI” because GPT is AGI. It is:

- Artificial: man-made, not natural
- General: able to handle any problem domain it is not specifically trained on
- Intelligence: solves complex problems using inferred characteristics of the problem domain

What is it that you are imagining AGI to mean, which does not include GPT in its definition?

> A key value-prop of LessWrong is that some arguments get to be "reasonably settled", rather than endlessly rehashed.

You are making a huge, and imho unwarranted, leap from the article you linked here. AI risk is very much in the domain of "reasonable people disagree", unlike the existence of the Abrahamic god or the theory of Cartesian dualism.

If moderators are going to start removing or locking posts that dissent on the issue of AI risk, that would be a huge change in the purpose and moderation policy of this site.

A detrimental change, imho.