User Profile

379 karma · 85 posts · 1147 comments

Recent Posts


Shadow · 1mo · 1 min read · 6
Questions for an AGI project · 3mo · 1 min read · 1
News: AGI on the rise at MIT · 3mo · 1 min read · 2
Sufficient · 3mo · 4 min read · 4
Epistemic self-sufficiency on the edge of knowledge · 3mo · 1 min read · 0
Collective Aligned Systems · 3mo · 2 min read · 2
Musings about the AGI strategic landscape · 4mo · 1 min read · 2
Security services relationship to social movements · 4mo · 2 min read · 1
The Perils of the Security Mindset taken too far · 4mo · 1 min read · 6
Convincing the world to build something · 5mo · 1 min read · 0

Recent Comments

I'm reminded of the quote by George Bernard Shaw.

> “The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”

I think it would be interesting to look at the reasons and occasions...

I've been re-reading a sci-fi book with an interesting existential-risk scenario in which most people are going to die, but some may survive.

If you are a person on Earth in the book, you have the choice of helping people out and definitely dying, or trying desperately to be one of the ones to s...

`rm -f double-post`

She asked my advice on Facebook about how to do creative work on AI safety. I gave her advice as best I could.

She seemed earnest and nice. I am sorry for your loss.

> Dulce et Decorum Est Pro Huminatas Moria?

As you might be able to tell from the paraphrased quote, I've been taught about some bad things that can happen when this is taken too far.

> Therefore the important thing is how we, personally, would engage with that decision if it came from outside.

For me i...

I'm interested in seeing where you go from here. With the old LessWrong demographic, I would predict you would struggle, since cryonics/life extension is core to many people's identities.

I'm not so sure about current LW, though. The fraction of the EA crowd that is total utilitarian probably wo...

Has anyone done work on an AI readiness index? This could track many things, like the state of AI safety research and the rollout of policy across the globe. It might have to be a bit Doomsday Clock-ish (going backwards and forwards as we understand more), but it might help to have a central place t...

Out of curiosity, what is the upper bound on impact?

> Do you think the AI-assisted humanity is in a worse situation than humanity is today?

Lots of people involved in thinking about AI seem to be in a zero-sum, winner-take-all mode. E.g. [Macron](https://www.wired.com/story/emmanuel-macron-talks-to-wired-about-frances-ai-strategy/).

I think there w...

Interesting. I didn't know Russia's defences had degraded so much.