User Profile

Karma: 729 · Posts: 82 · Comments: 2624

Recent Posts

Curated Posts
Curated - Recent, high-quality posts selected by the LessWrong moderation team.
Frontpage Posts
Posts meeting our frontpage guidelines:
• interesting, insightful, useful
• aim to explain, not to persuade
• avoid meta discussion
• relevant to people whether or not they are involved with the LessWrong community
(includes curated content and frontpage posts)
All Posts
Includes personal and meta blogposts (as well as curated and frontpage).

Revitalizing Less Wrong seems like a lost purpose, but here are some other ideas

2y
6 min read
34

Zooming your mind in and out

3y
1 min read
6

Purchasing research effectively open thread

3y
1 min read
15

Productivity thoughts from Matt Fallshaw

4y
2 min read
4

Managing one's memory effectively

4y
3 min read
18

OpenWorm and differential technological development

4y
1 min read
30

System Administrator Appreciation Day - Thanks Trike!

5y
1 min read
5

Existential risks open thread

5y
1 min read
47

Why AI may not foom

5y
12 min read
78

[Links] Brain mapping/emulation news

5y
1 min read
2

Recent Comments

[Brainstorming]

One idea is to try to differentiate the NYC 'product' from the Berkeley 'product'. For example, the advantage of Vancouver over the Bay Area is that you can live in Vancouver if you're Canadian. The kernel project attempted to differentiate itself through e.g. a manifesto. In the …

I think I see how X-and-only-X is a problem if we are using a classifier to furnish a 0/1 reward. However, it seems like less of a problem if we're using a regression model to furnish a floating-point reward that attempts to describe _all_ of our values (not just our values as they pertain to the co…
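The contrast between the two reward schemes above can be sketched in a few lines. This is my own toy illustration, not anything from the original discussion; the quality scores and the 0.8 threshold are illustrative assumptions:

```python
def classifier_reward(quality: float, threshold: float = 0.8) -> int:
    """0/1 reward from a classifier: everything on the same side of the
    cutoff collapses to a single value, discarding all gradation."""
    return 1 if quality >= threshold else 0

def regression_reward(quality: float) -> float:
    """Continuous reward from a regression model: gradations in how well
    an output satisfies our values are preserved."""
    return quality

# Two outputs the 0/1 reward cannot tell apart...
print(classifier_reward(0.81), classifier_reward(0.99))  # 1 1
# ...that the continuous reward still distinguishes.
print(regression_reward(0.81) < regression_reward(0.99))  # True
```

The point of the sketch is just that a 0/1 signal makes "barely clears the classifier" and "genuinely good" indistinguishable, which is part of what makes gaming the classifier attractive to an optimizer.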

Thanks for the links! (That goes for Wei and Paul too.)

> a group of researchers are beginning to think that in a broader sense "adversarial vulnerability" and "amount of test set error" are inextricably linked in a deep and foundational way - that they may not even be two separate problems.

I'd e…

Well, is there anything that can be done to stop the x-risk? If there is, maybe tell the people who are best positioned to stop it. Re: the AGI thing, is it a scheme that could plausibly be made friendly? If yes, maybe tell people who are working on friendliness/work on making it friendly yourself. …

> deep learning is not unusually susceptible to adversarial examples

FWIW, this claim doesn't match my intuition, and googling around, I wasn't able to quickly find any papers or blog posts supporting it. [This](http://karpathy.github.io/2015/03/30/breaking-convnets/) 2015 blog post discusses how d…
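For concreteness, here is a minimal sketch of the standard fast-gradient-sign construction on a toy linear classifier, where a small per-pixel perturbation flips the prediction. The model, weights, and epsilon are my own illustrative assumptions, not taken from the linked post:

```python
def score(w, x, b):
    """Linear logit: positive => class 1, negative => class 0."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_perturb(w, x, eps):
    """Shift each input coordinate by eps in the direction that raises
    the logit. For a linear model the input gradient is just w, so the
    sign of each weight tells us which way to push each pixel."""
    sign = lambda v: 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    return [xi + eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.4, -0.3, 0.2, 0.5]   # toy weights
b = -0.05
x = [0.1, 0.2, 0.0, 0.05]   # toy "image", classified as class 0

print(score(w, x, b) < 0)                  # True: class 0
x_adv = fgsm_perturb(w, x, eps=0.1)        # each pixel moved by at most 0.1
print(score(w, x_adv, b) > 0)              # True: flipped to class 1
```

Linear models are vulnerable in exactly this way too, which is one reason some researchers argue adversarial examples are not a quirk peculiar to deep learning.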

Well, I haven't seen even a blog post's worth of effort put into doing something like what I suggested. So an extreme level of pessimism doesn't seem especially well-justified to me. It seems relatively common for a task to be hard in one framework while being easy in another.

Standard CFAR advice: …

Well, [here](https://www.datasciencecentral.com/profiles/blogs/what-comes-after-deep-learning) is a list of paradigms that might overtake deep learning. This list could probably be expanded, e.g. by researching various attempts to integrate deep learning with Bayesian reasoning, create more interpre…

I like the idea of optimizing for career growth & AI safety separately. However, I'm not sure the difference between "capabilities research" and "safety research" is as clear-cut as Critch makes it sound.

Consider the problem of making ML more data-efficient. Superficially, this is "capabilities re…

X-risks are one cause area where "raising awareness" is probably a bad idea.