Onelier

Comments

How can we get more and better LW contrarians?

Stream of consciousness. Judge me that ye may be judged. If you judge it by first-level Less Wrong standards, it should be downvoted (vague, unjustified assertions; thoughtlessly rude), but maybe the information is useful. I look first for the heavily downvoted posts and enjoy the responses to them best.

I found the discussion on dietary supplementation interesting, in your link and elsewhere. As I recall, the tendency was for the responses (not the entrants, but people's comments around the site) to be both crazy and stupid (with many exceptions, e.g., Yvain, Xacharaiah). I recall another thread on the topic where the correct comment ("careful!") was downvoted, while its obvious explanation ("evolution works!"), offered afterward, was upvoted. Since I detected no secondary reasons for this, it was interesting in that it implied Less Wrongians did not see the obvious. Low certainty attached, since I know I know nothing about this place. I'm deliberately being vague.

In general, Less Wrongians strike me as a group of people of impaired instrumental rationality who are working to overcome it. Give or take, most of you seem to be smarter than average but also less trustworthy, less able to exhibit strong commitments, etc. Probably this has been written somewhere hereabouts, but a lot of irrationalities are hard-to-overcome local optima; have you really gone far enough onto the other side? Incidentally, that could serve as a definition for x-rationality (even if never actually achieved): being epistemically rational enough that it's instrumentally useful. Probably a brutally hard threshold to reach, and it seems untrue of this place, as I believe I've seen commented in threads here.

I was curious about the background of the people offering lessons at the rationality bootcamp, and saw a blog entry by one of them against, oh, being conservative in outlook (re: risk aversion). It was incredibly stupid; I mean, almost exclusively circular reasoning. You obviously deviate from the norm in your risk aversion. You're not obviously more successful than the norm (or are you? perhaps I'm mistaken). Maybe it's just a tough row to hoe, but that's the real task.

Personal comment: I realize Dmitry has been criticized a bit elsewhere and the voting trend doesn't support generalization to the community at large, but my conversation with him illustrates what I generally believe about this place. I knew more than he did. I said enough that he should have realized this. He didn't realize it, and shoehorned his response into a boring framework. I had specific advice to give, which I didn't get to, and which I realized I was reluctant to give (most Less Wrong stuff seems weak to me).

A whole lot of Less Wrong seems to be going for less detail, less knowledge, and more reliance on frameworks of universal applicability with little precision. The sequences seem similar to me: boring where I can judge meaning, meaningless where I can't. And always too long. I've read about four paragraphs of them in total. The quality of conversation here is high for a blog, of course, but low for a good academic setting. Some of the mild sneering at academics around here sounds ridiculous (an AI researcher believes in God). AI's a weak field. All around, papers don't quite capture any field and are often way, way behind what people roughly feel.

Real question: Do you want me here?

I like you guys. I agree with you philosophically. I have nothing much to offer unless I put some effort into it (e.g., actually read what people write, etc.). No confusion: you should be downvoting posts like this in general. You might want to make an exception because it's worth hearing a particular rambling mindset once. My effort is better spent elsewhere (I can't imagine you'd disagree). I can't see anything that can be offered to me. I feel like I was more rational at age 7 than you are now (I wrote a pro and con list for castrating myself, for the longevity and potential continuity-of-personality gains; e.g., maintaining the me of 7). A million other things. I'm working on real problems in other areas now.

Call for Papers on AI/robot safety

I would say this exchange basically exemplifies why I don't participate in Less Wrong.

Call for Papers on AI/robot safety

There are plenty of good open access journals; it's now a standard business model, depending on the field, and will have zero impact on how the article is perceived. The good PLoS or BMC journals, for example, will be as well regarded as any somewhat focussed journal. Likewise, if you pay the open access fee to a journal that doesn't automatically require it, no one will imagine you're bribing them or anything ridiculous like that. This journal in particular is probably not a great idea (Hindawi), and the thought process hinted at (re: editor) may not be great.

Cult impressions of Less Wrong/Singularity Institute

What paper or text should I read to convince me that y'all want to get to know reality? That's a sincere question, but I don't know how to say more without being rude (which I know you don't mind).

Put another way: What do you think Harry Potter (of HPMOR) would think of the publications from the Singularity Institute? (I mean, I have my answer).