User Profile

Karma: 154
Posts: 12
Comments: 1548

Recent Posts

Pittsburgh meetup Nov. 20 · 8y · 16
Bay Area Meetup Saturday 6/12 · 8y · 16
Pittsburgh Meetup: Saturday 9/12, 6:30PM, CMU · 9y · 2
Pittsburgh Meetup: Survey of Interest · 9y · 7

Recent Comments

Doesn't work in incognito mode either. There appears to be an issue with lesserwrong.com when accessed over HTTPS — over HTTP it sends back a reasonable-looking 301 redirect, but on port 443 the TCP connection just hangs.
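
A minimal sketch of how one might check both ports, assuming only Python's standard library (the 10-second timeouts are arbitrary placeholders, not part of the original report):

```python
import http.client
import socket

HOST = "lesserwrong.com"

# Port 80: the server answers with a redirect (a 301 per the observation above).
conn = http.client.HTTPConnection(HOST, 80, timeout=10)
conn.request("GET", "/")
resp = conn.getresponse()
print("HTTP:", resp.status, resp.getheader("Location"))
conn.close()

# Port 443: if the TCP connection really just hangs, this attempt should
# surface as a timeout rather than a completed connection.
try:
    socket.create_connection((HOST, 443), timeout=10).close()
    print("HTTPS: TCP connect succeeded")
except OSError as e:
    print("HTTPS: TCP connect failed or timed out:", e)
```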

Similar meta: none of the links to lesserwrong.com currently work due to, well, being to lesserwrong.com rather than lesswrong.com.

Further-semi-aside: "common knowledge that we will coordinate to resist abusers" is _actively bad and dangerous to victims_ if it isn't true. If we won't coordinate to resist abusers, making that fact (/ a model of when we will or won't) common knowledge is doing good in the short run by not creatin...(read more)

This post may not have been quite correct Bayesianism (... though I don't think I see any false statements in its body?), but regardless there are one or more steel versions of it that are important to say, including:

* persistent abuse can harm people in ways that make them more volatile, less c...(read more)

IMO, the "legitimate influence" part of this comment is important and good enough to be a top-level post.

This is simply instrumentally wrong, at least for most people in most environments. Maybe people and an environment could be shaped so that this was a good strategy, but the shaping would actually have to be done and it's not clear what the advantage would be.

My consistent experience of your comments is one of people giving [what I believe to be, believing that I understand what they're saying] the actual best explanations they can, and you not understanding things that I believe to be comprehensible and continuing to ask for [explanations and evidence...(read more)

I don't see how we know, or anything like know, that deep NNs with ‘sufficient training data’ would be sufficient for all problems. We've seen them be sufficient for many different problems and can expect them to be sufficient for many more, but all?

A tangential note on third-party technical contributions to LW (if that's a thing you care about): the uncertainty about whether changes will be accepted, uncertainty about and lack of visibility into how that decision is made or even who makes it, and lack of a known process for making pull request...(read more)

Other possible implications of this scenario have been discussed on LW before.