All of saturn's Comments + Replies

I've now added password reset capability to GreaterWrong.

Thanks! Do you plan to add support for the new-to-LW2 "log in with LW1 credentials" flow? It seems to need some special-cased client-side support, according to this post - I suppose you can check out the related commits on LW2 code for the details of how to make it work! (Logging in and participating on LW2 itself is still unbearably slow for lower-powered devices-- and I'm not willing to go through the whole prospect of having to change (or worse, "reset") my credentials there in order to make them usable on GreaterWrong-- at least, not unless I hear back from multiple users who have done this with no issues!)

I do this because there's no way to request posts and comments sorted together chronologically with GraphQL. However, if you click the posts or comments tab, the pagination will work correctly for any number of pages.

Indeed, it's working properly with the show=posts and show=comments URL parameters, and no content seems to be lost. Great news, but that was definitely non-obvious - thanks! (I'd naïvely assumed that if the individual chronological listings were available, the combined listing would be built by searching for offset_posts and offset_comments such that offset_posts + offset_comments = offset, and the timestamps for the post at offset_posts and comment at offset_comments are as close as possible. Shouldn't require more than log(N) reqs in the worst case - far less than that typically. But perhaps there's some snag that makes this approach unworkable!) Perhaps a notice should be added to the "combined" user pages to the effect that the 'Posts' and 'Comments' options may be preferable for some uses. ETA: There seem to be some remaining edge cases around comments on deleted posts, or something like that. In LW1, you get a permalink to the comment from the user page, and can then browse the individual thread. Greater Wrong does not have a notion of viewing a single thread or anything similar, so it tries to get data about the post as a whole, fails, and clicking on the link to the post returns an error page. I have not investigated what LW2 does. This is a very minor issue overall of course. I mention it mostly because I'm wondering how it impacts preservation of e.g. the original discussions about the LW basilisk, which were on a now-deleted page. (Yes, some comments - including e.g. Eliezer's initial reaction - were totally deleted, but many others were not! And yes, to be quite explicit about it, there are many popular misconceptions about the basilisk, and having these comments preserved is perhaps the one really effective way of addressing them. There was a very clear perception - from people who had actually read the original post! - of how silly it was for Roko to even come up with such an unlikely and contrived scenario, and then bring it up as something th
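For what it's worth, the binary search I have in mind would look something like this (a minimal Python sketch under my own assumptions; the function name is made up, and a real implementation would fetch one timestamp per probe from the API rather than hold full lists, so each comparison costs one request):

```python
def merged_split(post_ts, comment_ts, k):
    """Given post and comment timestamps sorted newest-first, find how many
    of the k newest items overall are posts vs. comments, i.e. the pair
    (offset_posts, offset_comments) with offset_posts + offset_comments = k."""
    lo, hi = max(0, k - len(comment_ts)), min(k, len(post_ts))
    while lo < hi:  # binary search: O(log N) timestamp lookups
        i = (lo + hi) // 2
        j = k - i
        # If the newest excluded post is newer than the oldest included
        # comment, the split point must include more posts.
        if post_ts[i] > comment_ts[j - 1]:
            lo = i + 1
        else:
            hi = i
    return lo, k - lo

# The 4 newest items of the combined stream have timestamps 100, 90, 80, 70:
# two posts and two comments.
print(merged_split([100, 80, 60, 40], [90, 70, 50, 30], 4))  # (2, 2)
```

Whether the real GraphQL endpoint exposes cheap per-offset timestamp lookups is exactly the kind of snag that could make this unworkable in practice.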

The two sites are based on quite different philosophies of web development, so it would be far from straightforward to do some of the things I've done within the existing LW 2.0 code. I've had fun creating GreaterWrong, and I don't mind putting effort into it as long as LW 2.0 seems like a viable community. I don't think it's necessarily bad to have two sites that do the same thing, if some people prefer one and other people prefer the other. (I agree with Error's comment.)

No, I don't have any special access to the database. If you log in to GreaterWrong, ...

Do you have plans to implement a list of posts by user (without comments), a list of drafts, and an inbox? These are the only things I go to LW2.0 for, most of my time is now spent on GW.

Yes, definitely.

It would be nice to have more than just a single page of 'new' content, since as is, it can even be hard to check out all recent posts from the past few days [...] more of a user's posting and commenting history

Done :)

I was a bit bothered by the noisy look of the front page, so I took a shot at restyling it: screenshot, pastebin. Not insisting that you should use it (I'm using it as a browser stylesheet anyway) but just thought I'd share. Edit: whoops, this affects other pages in surprising ways. I'll try to fix it tomorrow.

Hi, I'm the one who created Greater Wrong. I'm intending to announce it more widely once it doesn't have so many conspicuously missing features, but it's something I'm working on in my spare time so progress is somewhat gradual. You can, however, already log in and post comments. You can use your existing LW 2.0 username/password or create a new one. Let me know if you have any problems.

Thanks very much! If the only thing that remained of Greater Wrong was the javascript-free access to the Less(er)Wrong homepage (I mostly disabled js in my browser in the aftermath of spectre, plus js somehow makes scrolling (sic!) on LesserWrong agonisingly slow), it would be a huge value-added for me! I also like the accesskey-based shortcuts for home, featured etc. However, it's also a much nicer and faster interface for reading the comments and even the content! (Testing with js enabled: no noticeable slowness; the comment navigation system is neat, though I doubt whether I'd actually use it.)
Excellent job. You get bonus points for writing it in Lisp. I assume you've read SICP?
Thank you very much. I read LW primarily for the discussions that are spurred by posts/articles and the comments are effectively impossible for me to read with the standard interface. On a small glance/browse I'm very encouraged about trying Greaterwrong as my regular reading mode.
Thank you for doing this!
Thanks for adding this, then! Personally, I'm just waiting to create an account/log in there until the 'final' LW-importation goes through. (Users who were late setting the e-mails to their accounts here did not have these imported to LW2 initially, which can lead to all sorts of problems. But a new importation from LW's updated user list can fix this - or maybe it can't, but then there's no loss in just creating a new user!) It would be nice to have more than just a single page of 'new' content, since as is, it can even be hard to check out all recent posts from the past few days, or whatever. It's great that the archive is available though. (Similarly, it would be great if we could access more of a user's posting and commenting history directly from their user page. On LW and LW2, you can see everything that a user has posted to the site simply by browsing from the userpage, and many LW users do rely on this feature as a de-facto 'index' of what they've contributed here.)
Thank you for doing this! It's very nice to have an old timey UI.

I always assumed it was by selling prediction securities for less than they will ultimately pay out.

Bs pbhefr, erirnyvat n unfu nsgre gur snpg cebirf abguvat, rira vs vg'f irevsvnoyl gvzrfgnzcrq. Nabgure cbffvoyr gevpx vf gb fraq n qvssrerag cerqvpgvba gb qvssrerag tebhcf bs crbcyr fb gung ng yrnfg bar tebhc jvyy frr lbhe cerqvpgvba pbzr gehr. V qba'g xabj bs na rnfl jnl nebhaq gung vs gur tebhcf qba'g pbzzhavpngr.

Guvf vf irel yvxr gur sbbgonyy cvpxf fpnz.

Unless you have a model that exactly describes how a given message was generated, its Shannon entropy is not known but estimated... and typically estimated based on the current state of the art in compression algorithms. So unless I misunderstood, this seems like a circular argument.

You need to read about universal coding; I highly recommend Cover and Thomas's book, a very readable intro to information theory. The point is that we don't need to know the distribution the bits came from to do very well in the limit. (There are gains to be had in the regime before "in the limit," but these gains will track the kinds of gains you get in statistics if you want to move beyond asymptotic theory.)
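A toy experiment illustrates the point (my own sketch, not from the book): a general-purpose compressor gives a distribution-free upper bound on the entropy rate of a source, and it does well on structured data without ever being told the model:

```python
import random
import zlib

def compressed_bits_per_byte(data: bytes) -> float:
    """Upper bound on entropy rate, in bits per byte, from a universal
    (model-free) compressor."""
    return 8 * len(zlib.compress(data, 9)) / len(data)

# Highly structured data compresses far below 8 bits/byte...
structured = b"abab" * 5000
# ...while pseudorandom bytes are essentially incompressible.
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(20000))

print(compressed_bits_per_byte(structured))  # well under 0.1
print(compressed_bits_per_byte(noise))       # close to (or above) 8
```

The estimate is an upper bound, not the true entropy, which is why it's not circular: better compressors tighten the bound, and universal codes provably approach the true rate in the limit for stationary ergodic sources.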

Any kind of tragedy of the commons type scenario would qualify.

It's not obvious to me how tragedy of the commons/prisoner's dilemma is isomorphic to Newcomb's problem, but I definitely believe you that it could be. If TDT does in fact present a coherent solution to these types of problems, then I can easily see how it would be useful. I might try to read the pdf again sometime. Thanks.

Since the Shapley value of all players also has to sum to the value of the end result, I think the value of each A voter has to be just RB/n. I'm way out of my depth with the combinatorics here, but here's a paper I found that gives a bit more information than the wikipedia page.

Good point, but I don't think the value of the end result is necessarily equal to RB, for much the same reason that (I suspect) Shapley value would correspond to (something like) "market value" rather than "market value plus consumer surplus". That is, no matter how badly you want your bathroom cleaned, the value of the labor to clean the bathroom is only equal to the market value of that labor, irrespective of how happy you are to have it done. While voting doesn't directly map onto a market like that, there is a similar sense in which being one of the voters for something that "had no chance of passing" (thus getting the high margin of victory) is worth less -- even per voter -- than voting for something whose fate was less certain.

I don't know how to apportion credit and blame to individual people for group actions (and almost all effective actions are group actions). I'm not sure it's even a meaningful question.

Shapley value is one way to answer this question.

Interesting! I hadn't heard of Shapley value before. Regarding voting, let me do a back-of-the-envelope calculation: every voter (that voted the same way) would, by symmetry arguments, have contributed equal value. And since Shapley averages over every possible voter subset, and voters would only get credited for those subsets where they are the determining vote (which is proportional to the factorial of the margin of victory, I think), the value each voter (for policy A given two alternatives) contributes is something like: RB / (n · MV!), where n is the number of voters for policy A, RB is the relative benefit of policy A compared to B, and MV is the margin of victory. But I think I made a mistake of some kind somewhere.
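As a sanity check on the combinatorics (a brute-force sketch under the simplest possible assumptions: n identical voters, a simple-majority quota, and a prize of RB for passing), the exact Shapley values can be computed by averaging marginal contributions over all orderings:

```python
from fractions import Fraction
from itertools import permutations

def shapley(n, v):
    """Exact Shapley values: average each player's marginal contribution
    to the coalition value v over all n! orderings of the players."""
    values = [Fraction(0)] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        coalition = set()
        for player in order:
            before = v(coalition)
            coalition.add(player)
            values[player] += v(coalition) - before
    return [val / len(orderings) for val in values]

# 5 identical voters; the measure passes with a simple majority (>= 3 votes)
# and yields relative benefit RB = 1.
majority_game = lambda coalition: 1 if len(coalition) >= 3 else 0
print(shapley(5, majority_game))  # each voter gets exactly 1/5, i.e. RB/n
```

In this toy model the factorial term drops out: symmetry plus the requirement that the values sum to RB forces RB/n per same-side voter.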

Does your rational advice differ from the common folk wisdom/cargo culting on this topic? And if so, what was your research process?

I suppose that depends completely on which specific "common folk wisdom/cargo culting" you're referring to. Without more details it's impossible to tell. There is certainly good information out there. There is also a lot of misinformation and outright deception designed to separate investors from their money. A meta-skill is telling the difference between them.
Is it a good idea to give a financial incentive to a big company to make your life shorter? If after the moment you buy your insurance you make some changes that increase your expected life span (e.g. give up smoking), can you be sued for insurance fraud?

Asking why we privilege "no" over "yes" is . . . let's just say problematic.

I can see that someone who has made it beyond childhood without learning this (perhaps by willfully ignoring the answer) has a problem. But does asking, in itself, create an additional problem?

Asking is not separately problematic from not internalizing the correct answer. But there is a social context, and we can't pretend we are writing on a blank slate. In the social context that exists, asking the question substantially raises an observer's probability that the questioner has not completely internalized the correct answer. Essentially, asking the question is somewhat like privileging the hypothesis. EDIT: Of course, there is mostly a problem because this particular topic (consent for sex) is so filled with conflict. With a topic that is less contentious, there is less reason to think that asking the question implies anything about what the questioner thinks. See also this comment, describing the issue in terms of implicit assertions.

You might be able to get the habit-forming effect without "wasting" $100 or $10 by deciding how much you would like to donate in terms of your income and debt, then creating a worksheet for yourself which you dutifully fill out every month, even when you know it will come out to $0.

I don't have any special insight on this subject, only what I've picked up from reading LW and occasionally talking about it on IRC. Many sources are linked from the comments in this thread (the comments are much more informative than the original post). To sum up, it seems that both CI and Alcor are lamentably bad, but CI is considerably worse.

If using your computer in bright light gives you eyestrain, it might be possible that you need a brighter monitor to go with your brighter lights.

Randomization still eliminates some confounding factors even without blinding. For example, you might be more likely to decide to turn on your bright lights when you're already feeling alert.

Yep yep. It's just not quite as strong as a blind study, but that's fine for these purposes.

Given what I've heard about CI's quality control, I don't blame her for trying to raise enough money for Alcor.

What have you heard about CI's quality control, and do you happen to have the sources conveniently available? (I'm making the decision between CI and Alcor.)

"Equivalent watts" is not a well-defined unit and the figures given by manufacturers are often exaggerated. Real incandescent bulbs vary in light output per watt. It's easier to use lumens, which are additive. However, human brightness perception is logarithmic, so 4 times the lumens will appear less than 4 times as bright.

Doesn't the idea of a persistent ranking system, and the concern with it, imply a belief in intelligence as a static factor? Less Wrong is a diverse community, but I was by and large under the impression that it was biased towards a growth mindset.

I'd just like to point out that a growth mindset is fully compatible with fixed intelligence. Fixed intelligence doesn't mean that growth is impossible, only that some people can grow faster than others.

I have heard nothing but good about it in the past

If you'd like a countervailing anecdote, I was amused by the parody but I can't stand the actual show.

Thank you for your opinion. One nice thing about Less Wrong is that people are willing to give their opinions whatever they are, and even if a majority like something, the rest feel free to give their opinion, and are not harried by groupthink. Well, I am planning to watch a couple of episodes, and we will see which side of the question I come out on.

Assuming my math is right, if your stone carving were accurate to 1 micron, in order to encode a 140 character 'tweet' using this method, you would need a stone tablet 10^163 times larger than the observable universe. (!)

ugh... I just did a rough estimate for the same problem with [...] it's not much better. So much for that idea! I wonder if there is a way to use math to squeeze more digits out of this situation...
Hmm. I really should have tried searching for that. Thanks!

And describing him as a "former researcher at SIAI" is quite disingenuous of you, by the way; he never received any salary from us and is a long-time opponent of these ideas. At one point Tyler Emerson thought it would be a good idea to fund a project of his, but that's it.

If that's the case, it seems like giving him the title Director of Research could cause a lot of confusion. I certainly find it confusing. Maybe that was a different Ben Goertzel?

Reportedly, Ben Goertzel and OpenCog were intended to add credibility through association with an academic:
Eliezer Yudkowsky (11y):
Honestly, at this point I'm willing to just call that a mistake on Tyler Emerson's part.

In the alternative where it's a bad idea, talking about it has net negative expected utility.

What about the possibility that someone who thought it was a good idea would change their mind after talking about it?

Eliezer Yudkowsky (11y):
This seems an order of magnitude less likely than somebody who wouldn't naturally think of the dumb idea seeing the dumb idea.

Are you saying that abuse victims have an obligation to coach their abusers in how not to be abusive?

I would say... yes, actually, insofar as they want that abuse to end while changing nothing else about the dynamic.

I wish I could trust others' information.

You might think about the reasons people have for saying the things they say. Why do people make false statements? The most common reasons probably fall under intentional deception ("lying"), indifference toward telling the truth ("bullshitting"), having been deceived by another, motivated cognition, confabulation, or mistake. As you've noticed, scientists and educators can face situations where complete integrity and honesty comes into conflict with their own career objectives, but there's no...

I know. But it's possible for her to be unaware of the existence of CFMR, had there been two orgs. If you read the entire disagreement, you'll notice that what it came down to is that it did not occur to me that CFMR might have changed its name. Therefore, denial that it existed appeared to be in direct conflict with the evidence. The evidence being two articles where people were creating CFMR. I was surprised she didn't seem to know about it, but then again, if she doesn't read every single post on here, it's possible she didn't know. I don't know how much she knows, or who she specifically talks to, or how often she talks to them, or whether she might have been out sick for a month or what might have happened. For something that small, I am not going to go to great lengths to analyze her every potential motive for being correct or incorrect. My assessment was simple for that reason. As for wanting to trust people more, I've been thinking about ways to go about that, but I doubt I will do it by trying to rule out every possible reason for them to have been wrong. That's a long list, and it's dependent upon my imperfect ability to think of all the reasons that a person might be wrong. I'm more likely to go about it from a totally different angle: How many scientists are there? What things do most of them agree on? How many of those have been proven false? Okay, that's an estimated X percent chance that what most scientists believe is actually true based on a sample set of (whatever) size. This is a good suggestion, and I normally do. I did confirm my fact with two articles. That is why it became a "no actually" instead of a question.

I'd guess the GP was asking about conscientiousness as in the Big Five model, which is more about work ethic and motivation and not so much about morality. Anyone highly motivated and organized would be considered "conscientious" under this model, even if they were a criminal.

I'd never thought about why people are conscientious, but I can think of four sorts of reasons: in order to accomplish a task-related goal, in order to lower their own stress by making accidents less likely and less damaging, in order to not impose costs on other people, and possibly because it just feels right. I'm guessing though-- I'm only sporadically conscientious. Would conscientious people care to talk about how conscientiousness feels from the inside?

You forgot that space itself is expanding. In theory, it's possible for Alice and Bob to travel far enough apart that the space between them expands faster than light, meaning the distance between them continues to increase even if they travel toward each other at the speed of light.

Isn't that violating the lightspeed limit? As you describe it, there's a frame of reference in which Alice and Bob are moving away from each other faster than the photons they are traveling near.

driving slowly

When there's snow or ice on the roads, there's really no speed slow enough that you can count on never losing traction. After the first heavy snow, you might want to practice in a low-traffic area until you get the hang of recovering from a slide. Also practice driving as if there's a full glass of water on your dashboard that you don't want to spill.

Do I need snow shoes? Spikes?

Nobody uses those for day-to-day walking, but you might want a pair of insulated boots depending on how much time you plan to spend outside. These are pretty c...

One final point about this response is worth nothing.

Is this a typo?

Oops. A typo but an unintentionally coherent one. (I can't edit the original post but I'll make sure to change it in the FAQ itself).

EMP destroys equipment by inducing high voltage and current in unshielded conductors, which act as antennas. The amount of energy picked up is related to the length of the conductor, with shorter conductors picking up less energy. Anything small enough to be described as "nanotechnology" would probably be unaffected, as long as it's not connected to unshielded external wiring. (An unmodified human touching a conductor would also experience an electric shock during an EMP.)

Thank you! That makes me very happy.

What motivates you to link personal identity to your specific particles? Any two atoms of the same type are perfectly indistinguishable.

I haven't touched on personal identity - for clarity I'm not equating that with continuous experience nor am I even equating continuous instance distinctions with continuous experience at this point. (I guess I'm interpreting personal identity either like "self" or identity the way it's used in "identity theft" - like a group of accounts and things like SSNs that places use to distinguish one person from another. I'm not using that term here and I'm not sure what you mean by it.). I'm not trying to figure out whether my "self" maps to certain particles. I feel sure that "self" is copy-able (though I haven't formally defined self yet). However, I am separating self from continuous experience (like you can see in my Elements of Death comment). What I am trying to do is to figure out whether the continuous experience of my current instance is linked to specific particles. The reason I am asking that question is made apparent in my transporter failure scenario.

Questions to consider: Would you feel the same way about using a Star Trek transporter? What if you replaced neurons with computer chips one at a time over a long period instead of the entire brain at once? Is everyone in a constant state of "death" as the proteins that make up their brain degrade and get replaced?

The million dollar question: Do I stop experiencing? If I were to be disassembled by a Star Trek transporter, I'd stop experiencing. That's death. If some other particles elsewhere are reassembled in my pattern, that's not me. That's a copy of me. Yes, I think a Star Trek transporter would kill me. Consider this: If it can assemble a new copy of me, it is essentially a copier. Why is it deleting the original version? That's a murderous copier. I remember researching whether the brain is replaced with new cells over the course of one's life and I believe the answer to that is no. I forgot where I read that, so I can't cite it, but due to that, I'm not going to operate from the assumption that all of the cells in my brain are replaced over time. However, if one brain cell were replaced in such a way that the new cell became part of me, and I did not notice the switch, my experiencing would continue, so that wouldn't be death. Even if that happened 100,000,000,000 times (or however many times would equate to a complete replacement of my brain cells) that wouldn't stop me from experiencing. Therefore, it's not a death - it's a transformation. If my brain cells were transformed over time into upgraded versions, so long as my experience did not end, it would not be death. Though, it could be said to be a transformation - the old me no longer exists. Epiphany 2012 is not the same as Epiphany 1985 because I was a child then, but my neural connections are completely different now and I didn't experience that as death. Epiphany 2040 will be completely different from Epiphany 2012 in any case, just because I aged. If I decide to become a transhuman and the reason I am different at that time is because I've had my brain cells replaced one at a time in order to experience the transformation and result of it, then I have merely changed, not died. It could be argued that if the previous you no longer exists, you're dead, but the me that I was when I was two years old or ten y

I'm trying to say that I think you might already be a pretty extreme outlier in your opinion of cryonics, based on a few clues I noticed in your comment, so your reactions may not generalize much. The median reaction to cryonics seems to be disgust and anger, rather than just not being convinced. I'm sort of on the fence about it myself, although I will try to refute bad cryonics-related arguments when I see them, so on object-level grounds I can't really say whether convincing you or learning how to convince people in general is a good idea or not.

Disgust and anger, that's interesting. I wonder if that might be due to them feeling it's unfair that some people might survive when everyone else has died, or seeing it as some kind of insult to their religion like trying to evade hell (with the implication that you won't be motivated enough to avoid sinning, for instance). If that's the case, you're probably right that my current reaction is different from the ones that others would have. My initial reaction was pretty similar, though. My introduction to cryo was in a cartoon as a child - the bad guys were freezing themselves and using the blood of children to live forever. I felt it was terrifying and horribly unfair that the bad guys could live forever and creepy that there were so many frozen dead bodies. I didn't think about getting it myself until I met someone who had signed up. My reaction was "Oh, you can actually do that? I had no idea." - and it felt weird because it seemed strange to believe that freezing yourself is going to save your life (I didn't think technology was that far along yet), but I'm OK with entertaining weird ideas, so I was pretty neutral. I thought about whether I should do it, but I wasn't in a financial position to take on new bills at the time, so I stored that knowledge for later. Then, when I joined LessWrong, I began seeing mentions of cryo all over. I had the strong sense that it would be wrong to spend so much on a small chance of saving my own life when others are currently dying, but that was countered pretty decently by one of the posts linked to above. Now I'm discovering cached religious thoughts (I thought I removed them all. These are so insidious!) and am wondering if I will wake up as some sort of miserable medical Frankenstein. I can't tell you whether it's worth it to convince me or learn to convince people, either. I'm not even sure it's worth signing up, after all. (:

Yeah, that's true. But still, if "they don't think my soul is worth saving" is more salient to you than, for instance, "I'm glad I won't have to deal with their proselytizing," it suggests that you take the idea of souls and hell at least a little bit seriously.

To give a more straightforward example, imagine a police officer asking someone whether they have any contraband. The person replies, "no, officer, I don't have any weed in my pocket." How would that affect your belief about what's in their pocket?

To practice on me before something happens to your female family members and you've got to convince them...

Are you such a Platonically ideal female that we can generalize from you to other females, who may have expressed no interest in cryonics?

Friendly hint: you just implied my life isn't worth saving. I am not easily offended and I'm not hurt, so that's just FYI.

If you see it that way, it sounds like you're already very nearly convinced.

Of course not, that's an assumed "no". I guess what you're really asking is "What is the point of seeing whether we can convince you to sign up for cryo?" Sometimes case studies are helpful for figuring out what's going on. Study results are more practically useful but let's not forget how we develop the questions for a study - by observing life. If you've ever felt uncomfortable about the idea of persuading someone of something or probing into their motivations, you can see why being invited to do so would be an opportunity to try things you normally wouldn't and explore my objections in ways that you may normally keep off-limits. Even if most of my objections are different from the ones other people have, discovering even a few new objections and coming up with even a few new arguments that work on others would be worthwhile if you intend to convince other people in the future, no? Alicorn is right. It's not that I am convinced or not convinced, it's that I'm capable of interpreting it the way that you might have meant it. For the record, where I'm at right now is that I'm not convinced it's a good way to save my life, (being the only way does not make it a good way) and I'm not 100% convinced that it's better than donating to a life-saving charity.
She could know that you see it that way without seeing it that way herself. If I knew someone who believed that I would definitely go to hell unless I converted to their religion, and they didn't seem to care if I did that or not, I might characterize that as them not thinking my soul was worth saving.

Write-in: I believed it was among the more reliable forms of forensic evidence, but didn't believe the bombastic claims of absolute certainty.

In this case, it's easy to predict how LessWrong is going to react. Your initial posts were well-received because you pointed out a potential problem, LW's high bounce rate, and even created some nice graphs. But when a consensus started to emerge that reducing the bounce rate would actually be a net negative, instead of accepting this or refuting it, you made a long series of posts mostly reiterating the same unconvincing points. Doing that will result in a poor reception.

Weird that you interpreted it that way. I thought I was working on solving the problem. This post would be an exception. I had a mind kill reaction surrounding "elitism" and, like 20% of the people who took my poll, was trying to decide whether or not I should quit LessWrong. How did you end up with the perspective that I was wasting time reiterating unconvincing points?
Recognition memory is actually even cooler than implicit memory, I thought, and can contain quite a bit of information (as far as I could tell, working through Shannon's theorem). Dunno how it would work in this setting, though, unless the personalities share visual recognition.
If I do something in this approximate neighborhood, I think I'll go with the hypnotism idea, since it's easier both to understand and to handwave about.

In my experience, being obnoxious doesn't deter others from being obnoxious. Quite the opposite, in fact.

LW could be considered a select group by discussion board standards. For example, posters who haven't studied the rather large amount of presumed background knowledge are, to a decreasing but still significant extent, only reluctantly tolerated. Some people accustomed to more typical discussion boards do seem somewhat miffed about the idea that LW has such prerequisites at all, and I assume this is because they perceive it as elitist.

Bringing this back to the main point, LW already does a reasonably good job at covering what you call the 'hard' material. I...

I sort of took your suggestion. (See OP under Center for Modern Rationality).
On educating new rationalists: Do they even have a forum? I don't see how this is going to work. Explain this plan.
Is LessWrong Elitist: By that definition, restaurants are elitist because people with no knowledge of silverware and table manners are only reluctantly tolerated. Roads are elitist because drivers with no knowledge of traffic rules are only reluctantly tolerated. Grocery stores are elitist because people with no understanding of trade and shoplifting laws aren't tolerated. Is there any place you can go in the civilized world and be accepted regardless of whether you have knowledge relevant to that place? Even in jail, inmates are expected to know better than to drink out of the toilet and that food goes in their mouth. The mental ward might be the only place - but that isn't a place of acceptance.
Let's look at the dictionary definition of the word elitist, now, as it's more detailed:
1. (of a person or class of persons) considered superior by others or by themselves, as in intellect, talent, power, wealth, or position in society: elitist country clubbers who have theirs and don't care about anybody else.
2. catering to or associated with an elitist class, its ideologies, or its institutions: Even at such a small, private college, Latin and Greek are under attack as too elitist.
3. a person having, thought to have, or professing superior intellect or talent, power, wealth, or membership in the upper echelons of society: He lost a congressional race in Texas by being smeared as an Eastern elitist.
4. a person who believes in the superiority of an elitist class.
Reasons LessWrong isn't automatically elitist, as relates to the above:
1. Regardless of whether LessWrong members have more or less talent, intellect, power, wealth or position, if they do not have a superior attitude about it, that doesn't qualify them as elitist by definition 1.
2. Depending on whether LessWrong wants to be a place where everybody can learn or a place where only people thought to have "superior intellect or talent, power, wealth, or membership in the upper echelons of s
Seconded. I suggest adding to the LW Wiki under Getting Started, near the top.
The trade-off between being elitist and welcoming newcomers is something to think about. Apparently some LWers aspire to bring in more people, as the rationalist teachings suggest. The question is what costs to pay for growing while keeping the same quality.

Regarding elitism: LW is elitist, and would not be what it is without its elitism. What else differentiates LW from /r/skeptic or agi-list? The LW community recognizes that some writings are high quality and deserve to be promoted, and others are not. If anything, I wish LW would become more elitist.


That's the Kelly criterion, equivalent to having logarithmic utility for money.
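A quick illustration of that equivalence (my own sketch): for a binary bet at net odds b with win probability p, the Kelly fraction p - (1-p)/b is exactly the stake that maximizes the expected logarithm of wealth:

```python
import math

def kelly_fraction(p: float, b: float) -> float:
    """Kelly stake for a binary bet: win probability p, net odds b
    (gain b per unit staked on a win, lose the stake on a loss)."""
    return p - (1 - p) / b

def expected_log_growth(f: float, p: float, b: float) -> float:
    """Expected log-wealth growth per bet when staking fraction f."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

p, b = 0.6, 1.0                # even-money bet, 60% win probability
f_star = kelly_fraction(p, b)  # 0.2: stake 20% of the bankroll
# No other candidate fraction does better in expected log growth:
best = max((0.05, 0.1, f_star, 0.3, 0.4),
           key=lambda f: expected_log_growth(f, p, b))
print(f_star, best)  # both 0.2 (up to floating point)
```

Maximizing expected log wealth is what makes Kelly betting equivalent to having logarithmic utility for money: any other fixed fraction grows the bankroll more slowly in the long run.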

Even an ethical egoist would cooperate with a copy of herself in the prisoner's dilemma, if she's using the ‘right’ decision theory.

Christians generally respect people who are genuinely seeking truth, in part because the Bible promises that "those who seek will find". The good news is that you ARE legitimately seeking truth, so you should be able to convince him of this.

On the other hand, I've seen Christians conclude that the fact that you haven't found Christianity is knock-down evidence that you're not legitimately seeking truth. One man's modus ponens is another man's modus tollens.
