If it’s worth saying, but not worth its own post, you can put it here.

Also, if you are new to LessWrong and want to introduce yourself, this is the place to do it. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome. If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, and seeing if there are any meetups in your area.

The Open Thread sequence is here.


Hi! My name is Maarten van Doorn. I'm a PhD candidate in philosophy and a writer at Medium. I've read some LessWrong things here and there, especially a couple of things on Slate Star Codex, but David Siegel convinced me to really go deeper on this. I've just finished Inadequate Equilibria and the book made me enthusiastic, so I decided to make an account here. I especially liked the emphasis on bottom-up explanations of why things are as they are, instead of pointing to - gesturing at - grand theoretical reasons. With just high school economics, some parts of the book were a challenge, but that's what I love. It caused me to check Veritasium's videos on Bayesian reasoning, dust that off, and ask my economist friends a shitload of questions about (for them) relatively basic concepts. I'm going to read the additional chapter on Hero Licensing now. Thanks for all this amazing content, guys!

Welcome and thank you! I hope you will find many more valuable things on the site!

Hello! I'm gostaks, and I'm new to LessWrong and the rationalism community in general. I'm an engineering major who took a philosophy class in January, and ever since I've been poking around the internet looking for folks with interesting ideas. I found LessWrong through a link on Slate Star Codex, and I figure that six hours of reading is enough to justify making an account (at the very least so I can track which posts I have or haven't read). Planning to lurk for a while, plug through the sequences, and then figure out how to get interesting LessWrong posts to show up on my dreamwidth reading page.

I would very much appreciate recommendations of blogs/LessWrong users/posts/books that would make a good starting point for a rationalism noob!


In terms of users/blogs/posts that I don't expect you to stumble across naturally as you hang out on the site, there is all of Nate Soares's writing, both on LW and on his own blog.

How many of you are there, and what is your dosh-distimming schedule like these days?


Hello, I am Kyo. I am new here, and I hope to become a better person. Thank you.


What sort of better are you hoping to become?


Just staying alive would be fine. I believe that living on is equal to being a better person. Working on rationality might help me with doing so.

Part of what the quest of rationality is about is getting an idea about where you want to go. Without knowing what you want, half of rationality can't be applied.


Where do I start on this conquest?

Can asking for advice be bad? From Eliezer's post Final Words:

You may take advice you should not take.

I understand this to mean just asking for advice, not necessarily following it. Why can this be a bad thing? For a true Bayesian, information would never have negative expected utility. But humans aren't perfect Bayes-wielders; if we're not careful, we can cut ourselves. How can we cut ourselves in this case? I suppose you could have made up your mind to follow a course of action that happens to be correct, then ask someone for advice, and that someone changes your mind.
Is there more to it? Please reply at the original post: Final Words.


(Not replying "at the original post" because others haven't and now this discussion is here.)

That fragment of "Final Words" is in a paragraph of consequences of underconfidence.

Suppose (to take a standard sort of toy problem) you have a coin which you know either comes up heads 60% of the time or comes up heads 40% of the time. (Note: in the real world there are probably no such coins, at least not if they're tossed in a manner not designed to enable bias. But never mind.) And suppose you have some quantity of evidence about which sort of coin it is -- perhaps derived from seeing the results of many tosses. If you've been tallying them up carefully then there's not much room for doubt about the strength of your evidence, so let's say you've just been watching and formed a general idea.

Underconfidence would mean e.g. that you've seen an excess of T over H over a long period, but your sense of how much information that gives you is wrong, so you think (let's say) there's a 55% chance that it's a T>H coin rather than an H>T coin. So then someone trustworthy comes along and tells you he tossed the coin once and it came up H. That has probability 60% on the H>T hypothesis and probability 40% on the T>H hypothesis, so it's 3:2 evidence for H>T, so if you immediately have to bet a large sum on either H or T you should bet it on H.

But maybe the _real_ state of your evidence before this person's new information justifies 90% confidence that it's a T>H coin, in which case that new information leaves you still thinking it's more likely T>H, and if you immediately have to bet a large sum you should bet it on T.

Thus: if you are underconfident you may take advice you shouldn't, because you underweight what you already know relative to what others can tell you.

Note that this is all true even if the other person is scrupulously honest, has your best interests at heart, and agrees with you about what those interests are.
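The arithmetic in the example above can be checked directly. Here is a minimal Python sketch of the update (the 0.55 and 0.90 priors and the 0.4/0.6 biases are taken from the example; the function name is my own):

```python
# Bayesian update for the biased-coin example: the coin is either
# H>T (P(heads) = 0.6) or T>H (P(heads) = 0.4), and a trustworthy
# person reports observing one head.

def posterior_t_gt_h(prior_t_gt_h: float) -> float:
    """Posterior probability of the T>H hypothesis after seeing one head."""
    p_head_given_tgh = 0.4
    p_head_given_hgt = 0.6
    prior_h_gt_t = 1.0 - prior_t_gt_h
    numerator = p_head_given_tgh * prior_t_gt_h
    denominator = numerator + p_head_given_hgt * prior_h_gt_t
    return numerator / denominator

# Underconfident prior of 55% on T>H: one head flips the balance to H>T.
print(round(posterior_t_gt_h(0.55), 3))  # 0.449, so bet on H

# Well-calibrated prior of 90% on T>H: one head is not enough to flip it.
print(round(posterior_t_gt_h(0.90), 3))  # 0.857, so still bet on T
```

The single head is 3:2 evidence for H>T in both cases; what changes the betting decision is only how strong the prior evidence was judged to be.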

I'd trust myself not to follow bad advice. I'd probably be willing to ask a person I didn't respect very much for advice, even if I knew I wasn't going to follow it, just as a chance to explain why I'm going to do what I'm going to do, so that they understand why we disagree, and don't feel like I'm just ignoring them. You can't create an atmosphere of fake agreement by just not confronting the disagreement. They'll see what you're doing.

For a true Bayesian, information would never have negative expected utility.

That's because they already have it (in a sense that we don't). They know every way any experiment could go (if not which one it will).

I understand that this means to just ask for advice, not necessarily follow it.

You have more at stake than they do. (Also watch out for if they have vested interests.)

EDIT: If you have an amazing knockdown counter-argument, please share it.

I'm looking for some clarification/feelings on the social norms here surrounding reporting typo/malapropism-like errors in posts. So far I've been sending a few by PM the way I'm used to doing on some other sites, as a way of limiting potential embarrassment and not cluttering the comments section with things that are easily fixed, but I notice some people giving that feedback in the comments instead. Is one or the other preferred?

I also have the impression that that sort of feedback is generally wanted here in the first place, due to precise, correct writing being considered virtuous, but I'm not confident of this. Is this basically right, or should I be holding back more?

Personally, I’d prefer typos and other mistakes in my own posts to be noted publicly rather than privately. (I don’t know how common this preference is, of course.)

Yeah, typo-threads are pretty common and in my experience well-received. We have lots of non-native speakers on this site, who I think also tend to be particularly open to feedback on phrasing and nuanced grammar things.


Do what you like. I'd say some people want to know, some don't. I wish we had tags like "typo" or "nitpick", because I might want to make a self-aware comment that was one of those, but we don't have that right now.

I suspect people like corrections, but it's hard to deliver "it's spelt wrong" with kindness at the forefront.

I have a theory about a common pathology in the visual system that I stumbled on through personal experience, and I've been trying to test it. If people would like to answer a five question survey that'd be a great help https://makoconstruct.typeform.com/to/ZHl2KL

Unfortunately I still don't have many responses from people who have the pathology. We might not be as common as I thought.

I found the questions hard to answer. I don't think the terms in #2 are well enough defined to give a good answer.

Not on the literal level, I guess. Of the answers that are true, choose the most specific one (the one that allows the fewest possibilities/provides the most information).

Question 2 seemed clear enough to me. That's the one about visual migraines, yes?

I have never heard the term "visual migraines" before. I have experienced a bunch of different visual effects but it's hard for me to decide which qualify and estimating quantity is also hard.

Yeah. I think Christian's issue is that multiple answers can be logically true at the same time. (An inclination to give the most specific answer of a given set is basic pragmatics, but maybe it isn't always obvious for a set of propositions whether there's a specificity hierarchy between them; various kinds of infinities, for instance, are tricky to prove to be subsets of each other.)



[This comment is no longer endorsed by its author]

Ah, that's around the time that LessWrong 2.0 came into existence, and I assume we never got around to properly merging the old join-date field with the new database.

Yeah, that's indeed just an import artifact. Fixing that should be pretty straightforward.

Hi. A genius introduced me to HPMOR. I've shared it several times over. The other content is also fantastic.

Education is too imperfect. I have an ed-tech utility that would personalize education and improve itself over time. I am seeking help with an MVP. I have been told the idea is viable by a software designer, and by a professor and a CEO, among others. There is a famous inventor who has pledged to do TED talks (as she has done them before).

The tech has applications in several fields and would even contribute (somewhat) to AGI. I expect to become rich. I need capable people. Feel free to contact me if interested.

Is there some way, when writing a post, to have a paragraph be collapsed by default? This would be useful both for "spoilers"/"answers", and for "footnotes"/"digressions"/"technical quibbles".

There is a spoiler tag, but no collapsible sections (yet).

<spoiler>Oh, good to know.</spoiler>

Sorry, the syntax is slightly counterintuitive. In the WYSIWYG editor it's >! on a new line, rendering like this:

This is a spoiler

In markdown it's :::spoiler to open and ::: to close
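Putting the markdown form together, a spoilered comment would look something like this (the hidden sentence is just placeholder text):

```
:::spoiler
This text is hidden until the reader reveals it.
:::
```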