User Profile

Karma: 520 · Posts: 82 · Comments: 2598

Recent Posts


- Zooming your mind in and out (3y, 6 comments)
- Purchasing research effectively open thread (3y, 15 comments)
- Productivity thoughts from Matt Fallshaw (4y, 4 comments)
- Managing one's memory effectively (4y, 18 comments)
- OpenWorm and differential technological development (4y, 30 comments)
- System Administrator Appreciation Day - Thanks Trike! (5y, 5 comments)
- Existential risks open thread (5y, 47 comments)
- Why AI may not foom (5y, 78 comments)
- [Links] Brain mapping/emulation news (5y, 2 comments)

Recent Comments

Different fields of engineering vary in how empirical it's possible to be. Experiments are very cheap for software engineers, but very expensive for civil engineers. In civil engineering, every time a "bug" occurs and a structure falls down, there's a good chance it's going in a textbook. But ...(read more)

People in the rationalist community have [complained about Facebook in the past](https://thezvi.wordpress.com/2017/04/22/against-facebook/), and this position looks like it is getting [more mainstream](https://techcrunch.com/2018/03/19/deletefacebook/). It seems possible that there will be an openin...(read more)

> if you try to learn in large chunks, you risk corrupting the external human and then learning corrupted versions of understanding and corrigibility

Why do you think small vs large chunks is the key issue when it comes to corrupting the external human? Can you articulate the chunk size at which yo...(read more)

In _Superintelligence_, Nick Bostrom talks about various "AI superpowers". One of these is "Social manipulation", which he summarizes as

> **Social and psychological modeling, manipulation, rhetoric persuasion**

> Strategic relevance:

> * Leverage external resources by recruiting human support ...(read more)

Interesting idea! Some thoughts: You might want to think a bit more about who your target audience is. Given that applying for a job at MIRI/FHI/etc. is always another option, it's not totally clear to me to what extent "x risk funding" is a natural category. One possible target audience is e.g. gradua...(read more)

I agree that a post should only become "canon" if it has been public for a while and no convincing counterargument has materialized.

For my [AI alignment contest submission](https://medium.com/@pwgen/friendly-ai-through-ontology-autogeneration-5d375bf85922), I emailed a bunch of friends asking for ...(read more)

I would guess there are some commitment & consistency effects involved here: once you've followed someone into battle, you tend to identify with that.

I vote for checking to see if there is a meetup coming up soon near the user's IP address, and if so, putting a notification below every post, above the comments.

Figuring out whether to act vs ask questions feels like a fundamentally epistemic judgement: How confident am I in my knowledge that this is what my operator wants me to do? How important do I believe this aspect of my task to be, and how confident am I in my importance assessment? What is the likel...(read more)