
Comments

Sqrt-1 · 10d

I do agree that a lot of Sequences pages would benefit from discussing prior work, or at least stating what these ideas are called in the mainstream literature, but I feel Yudkowsky's neologisms are just... better. Among the examples of similar concepts you mentioned, I definitely felt Yudkowsky was hinting at them with the whole dimensions thing, but I think "thingspace" is still a useful word and not even that complicated; if it came up in a conversation with someone familiar with ANNs, I feel they would get what it meant. (Unlike a lot of other Yudkowskisms usually parroted around here, however...)

Sqrt-1 · 2mo

So... how did it go? Do you use the medications often now or not?

Sqrt-1 · 2mo

Criticism of this article can be found on a RationalWiki talk page.

The Sequences do not contain unique ideas, and they present the ideas they do contain in misleading ways using parochial language. The "Law of Conservation of Expected Confidence" essay, for instance, covers ideas that are often covered in introductory philosophical methods or critical thinking courses. There is no novelty in the idea that your expected future credence must match your current credence (otherwise, why not update your credence now?), nor in the idea that if E is evidence for H, then ~E is evidence for ~H (though E and ~E may have very different evidential strength), and Yudkowsky's treatment is imprecise and, in combining multiple points, muddles things more than it illuminates them. Besides that, the former notion has sparked substantial controversy in epistemology, owing to cases in which people apparently can reasonably expect to have their minds changed without changing them right now. While popular, Bayesianism is not universally accepted by epistemologists, and not because they are irrational.
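For concreteness, both points the quote mentions follow from the law of total probability alone:

    P(H) = P(E) P(H|E) + P(~E) P(H|~E)

P(H) is a probability-weighted average of the two possible posteriors, so the expected posterior equals the prior; and if P(H|E) > P(H), the average can only balance if P(H|~E) < P(H), i.e. if E would confirm H, then ~E must disconfirm it, though the weights P(E) and P(~E) can make the two shifts very unequal in size.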

What do you guys think?

https://rationalwiki.org/wiki/Talk:LessWrong#EA_orgs_praising_AI_pseudoscience_charity._Is_it_useful.3F

Sqrt-1 · 2mo

I found some criticism of this post on a RationalWiki talk page.

For another example, "Clusters in Thingspace" has a number of issues. Most simply, it seriously undersells Aristotle's ability to handle a nine-fingered person. Certainly, if you make 'has ten fingers' part of the definition of human, then you will be able to infer that a person without ten fingers is not a human; nobody, though, has ever seriously put forward such a proposal. For Aristotle's part, he would simply say that having a certain number of fingers is not an essential property of being human (and so should not be factored into the definition).

Yudkowsky is also wrong to say that the coordinate point (0, 0, 5) contains the same information as the HTML color blue. To the contrary, the coordinate point by itself contains no information; it can carry color information only when paired with some interpretation function I (in the case of HTML, the software provides this function).

As for where else these ideas can be found, philosophers have been working on conceptual vagueness intensely since the mid-20th century, and cluster concepts were a relatively early innovation. The philosophical literature also has the benefit of being largely free of nebulous speculation about cognition and of needless formalism (and the discussion of configuration space here is needless formalism, since Yudkowsky draws only qualitative conclusions, and the practical constraints on constructing a configuration space even for robins alone are severe). The literature also uses terminology in the ordinary way familiar to everybody engaging these issues professionally (compare Yudkowsky's muddled understanding of intension) and avoids inventing needless terms like "thingspace", which mainly serve to isolate LessWrong from the external literature (whose relative richness and rigor would doubtless benefit readers far more than the Sequences, the work of a single, self-aggrandizing amateur). That's not to say there are no good ideas in the article, only that it is unoriginal, muddled, imprecise, and parochial.
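The interpretation-function point can be made concrete with a toy sketch (the function names and the second reading are invented for illustration; nothing here comes from the post or from HTML itself):

    # The bare coordinate carries no color information by itself.
    point = (0, 0, 5)

    def interpret_as_rgb(p):
        """Read the triple as (red, green, blue) intensities."""
        r, g, b = p
        return f"rgb({r}, {g}, {b})"  # a very dark, faint blue

    def interpret_as_hsv(p):
        """Read the same triple as (hue, saturation, value) instead."""
        h, s, v = p
        return f"hsv({h}, {s}, {v})"  # a dark gray under this reading

    print(interpret_as_rgb(point))  # rgb(0, 0, 5)
    print(interpret_as_hsv(point))  # hsv(0, 0, 5)

The same point names different colors under different interpretation functions, which is exactly why the point alone fixes no color.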

What do you guys think?

https://rationalwiki.org/wiki/Talk:LessWrong#EA_orgs_praising_AI_pseudoscience_charity._Is_it_useful.3F

[This comment is no longer endorsed by its author]
Sqrt-1 · 2mo

Thought I should put this here so it gets more attention: this specific article of Scott Alexander's was criticised in a blog post here.

https://www.eruditorumpress.com/blog/the-beigeness-or-how-to-kill-people-with-bad-writing-the-scott-alexander-method

Sqrt-1 · 4mo

Completed my first LessWrong survey! I liked the several questions that tried to sort out all the lizardmen.

Sqrt-1 · 1y

I wonder if protection from dementors is the only reason Voldemort brought Potter with him?

Sqrt-1 · 1y

why does professor quirrell have a gun; like, doesn't he have a wand—

Sqrt-1 · 1y

I believe the test is hard because the pattern involves increasing numbers, not decreasing ones. Our brains have been primed for increasing numbers since childhood; if the rule were decreasing numbers, most people would try something like "1, 2, 3", get a "no" right away, and solve it almost immediately.
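A minimal sketch of the task being discussed, assuming the hidden rule is "strictly increasing" as in the classic 2-4-6 version (the rule and the sample guesses are assumptions for illustration):

    def fits_rule(triple):
        """Hidden rule (assumed): the three numbers are strictly increasing."""
        a, b, c = triple
        return a < b < c

    # Triples a subject might test, and the experimenter's answers.
    # Note that only a disconfirming probe like (3, 2, 1) gets a "no".
    for guess in [(2, 4, 6), (8, 10, 12), (1, 2, 3), (3, 2, 1)]:
        print(guess, "->", "yes" if fits_rule(guess) else "no")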
