All of namespace's Comments + Replies

I’ve been reading the hardcover SSC collection in the mornings, as a way of avoiding getting caught up in internet distractions first thing when I get up. I’d read many of Scott Alexander’s posts before, but nowhere near everything posted; and I hadn’t before made any attempt to dive into the archives to “catch up” to the seeming majority of rationalists who have read everything Scott Alexander has ever written.

Just a note that these are based on the SlateStarCodexAbridged edition of SSC:

8 Yoav Ravid 2y
And just to clarify what that means, from their website:

I still think that this problem is intractable so long as people refuse to define 'rationality' beyond 'winning'.

I, in general, try to avoid using the frame of 'rationality' as much as possible precisely because of this intractability. If you talk about things like existential risk, it's clearer what you should know to work on that.

See: The Rationality Quotient: Toward a Test of Rational Thinking, by Keith E. Stanovich et al.

This talk is required reading for designing a tag system:

I can also recommend the book The Intellectual Foundation Of Information Organization by Elaine Svenonius.

Nice, will definitely look at these.
This is the same 6500-word essay linked in the OP. It might be helpful to note that (I think) the relevant part is the very last two paragraphs. And you say there that you are not sure what Becker meant by practicing dying. The concrete method you describe is: Ok, I imagined it. I shrug.

I don’t see how the law of “people are obligated to respond to all requests for clarification”, or even “people always have to define their terms in a way that is understood by everyone participating”, is somehow an iron law of communication. If anything, it is not an attribute that any existing successful engine of historical intellectual progress has had. Science has no such norms, and if anything strongly pushes in the opposite direction, with inquiries being completely non-public, and requests for clarification being practically impossible in public venues

... (read more)

This is a great comment, thanks!

I think you would be hard pressed to argue to me in seriousness that academics do not claim to have norms that people's beliefs are open to challenge from anyone who has standing and warrant.

So, I am actually honestly confused about this dimension. My sense is that there is very little academic apparatus, or even social norm enforcement, for scientists responding to critiques or requests for clarification of their work. See for example the answers to Ben's question a while ago on "How did academia ensure papers were correct i

... (read more)

I have this intuitive notion that:

I do think the relevant question is whether your comments are being perceived as demanding in a similar way. From what I can tell, the answer is yes, to a somewhat lesser magnitude but still at quite a high level, enough for many people to independently complain to me about your comments, express explicit frustration towards me, and tell me that your comments are one of the major reasons they are not contributing to LessWrong.

I agree that you are not as bizarrely demanding as curi was, but you do usually demand quite a

... (read more)
I don't think any of my concerns run up against any of the things that Shannon has talked about. This feels similar to me to the common misuse of Aumann's Agreement Theorem for the case of conversation between humans. Obviously Shannon can give us lower bounds on how much information we have to transmit between each other in order to get basic ideas across, but we are so far away from any of those lower bounds that I don't think I know how to apply those insights to the question at hand.

I don't see how the law of "people are obligated to respond to all requests for clarification", or even "people always have to define their terms in a way that is understood by everyone participating", is somehow an iron law of communication. If anything, it is not an attribute that any existing successful engine of historical intellectual progress has had. Science has no such norms, and if anything strongly pushes in the opposite direction, with inquiries being completely non-public, and requests for clarification being practically impossible in public venues like journals and textbooks. Really very few venues have a norm of that type (and I would argue neither has historical LessWrong), even many that strike me as having produced large volumes of valuable writing and conceptual clarification.

As I said, Science itself actually operates almost solely on positive selection: critiques, when they do appear, are extensive and long, but most of the time they are completely absent from public discourse, and uncompelling ideas simply get dropped without getting much exposure (and with no measurable back-and-forth in public about the definitions of various terms).

This doesn't mean I can't imagine a case for an iron law of communication of this type, but I don't find myself currently compelled to believe in such a law, or at least don't know the shape of the law that you are pointing at (if you are pointing at something that specific).

Before and After

At the start of the decade I was 13, I'm now 23.


Before: I was a recovering conspiracy theorist. I'd figured out on my own that my beliefs should be able to predict the future, and started insisting they do. I wrote down things I expected to happen by a certain time in a giant list, and went back to record the outcome. I wanted to be a video game developer, but didn't know how to start.

A 13 year old boy sits on a swingset in his backyard, listening to Owl City[0] and Lemon Demon[1] as frosty dew melts off green grass in the morn

... (read more)
Comment removed for posterity.
I've already given this an upvote, but I'm also leaving a comment because I think LessWrong has a shortage of this kind of content. I think broad personal overviews are particularly important because a lot of useful information you can get from "comparing notes" is hard to turn into standalone essays.

The CFAR branch of rationality is heavily inspired by General Semantics, with its focus on training your intuitive reactions, evaluation, the ways in which we're biased by language, etc. Eliezer Yudkowsky mentions that he was influenced by The World of Null-A, a science fiction novel about a world where General Semantics has taken over as the dominant philosophy of society.

Question: Considering the similarity of what Alfred Korzybski was trying to do with General Semantics to the workshop and consulting model of CFAR, are you aware of a good analysis of ho

... (read more)

I buy that General Semantics was in some sense a memetic precursor to some of the ideas described in the sequences/at CFAR, but I think this effect was mostly indirect, so it seems misleading to me to describe CFAR as being heavily influenced by it. Davis Kingsley, former CFAR employee and current occasional guest instructor, has read a bunch about GS, I think, and mentions it frequently, but I'm not aware of direct influences aside from this.

I think Nuno's time-capped analysis [] is good.

Does CFAR have a research agenda? If so, is it published anywhere?

By looking in-depth at individual case studies, advances in cogsci research, and the data and insights from our thousand-plus workshop alumni, we’re slowly building a robust set of tools for truth-seeking, introspection, self-improvement, and navigating intellectual disagreement—and we’re turning that toolkit on itself with each iteration, to try to catch our own flawed assumptions and uncover our own blindspots and mistakes.

This is taken from the about page on your website (emphasis mine). I also took a look at this list of resources and notice I'm sti

... (read more)

The name for TAPs in the psychology literature is "implementation intentions". CFAR renamed it.

Altruistic silence is probably my default position, but from a strictly rational standpoint, is there some way to get paid for my continued silence (other than with the joy of living in a world ignorant of this idea)?

This betrays a misunderstanding of what 'rational' means. Rational does not mean homo economicus; it means doing what a person would actually want to do on reflection if they had a good understanding of their options.

I doubt your idea is actually that dangerous, so I'm treating thi

... (read more)
4 Gordon Seidoh Worley 3y
I think this is a really unfair reading of this post. Maybe it has been edited since it was originally posted in ways that change its tone (my understanding is that some editing has happened), but my impression is that the author is asking about economic incentives that would keep them or someone like them quiet, rather than blackmail. If the author wanted to blackmail us, they could have made a very different kind of post.
I am holding a lot of dangerous knowledge and am encumbered by a variety of laws and non-disclosure agreements. This is not actually uncommon. So arguably, I am already being paid to keep my mouth shut about a variety of things, but these are mostly not original thoughts. This specific idea is, in my best judgement, both dangerous, and unencumbered by those laws and NDAs.

The assertion that my default position is 'altruistic silence' means that this is not 'posting a threat on a public forum'. It would be a real shame if a large variety of things that are currently not generally known were to become public. While I would indeed like to be paid not to make them public (and, as previously stated, in some cases already am), this should not be taken as an assertion to the reader that, if they fail to provide me with some tangible benefit, I will do something harmful.

This is, in a broader sense, a question: 'If there exists an idea which is simply harmful, for example, a phrase which when spoken aloud turns a human into a raging cannibal, such that there is no value whatsoever to increasing the number of people aware of the idea, how can people who generate such ideas be incentivized not to spread them?'

Maybe the best thing to do is to look for originators of new ideas perceived as dangerous, and encourage them to drink hemlock tea before they can hurt anyone else. []

I just don't comment in these sorts of threads because I figure the site is a lost cause and the mods will ban all the interesting people regardless of what words I type into the box.

Like, feel free to call the site a lost cause, but I am highly surprised that you expect us to ban all the interesting people. We have basically never banned anyone from LW2 except weird crackpots and some people who violated norms really hard, but no one who I expect you would ever classify as being part of the "interesting people".

I'd have to read the LW 2 source to confirm, but from my experience with the API and relevant data models I'd imagine it's just a matter of changing the "post" field on a comment and all its children. Then making that a button which lets you write a new post and append the comment tree to it.

So it's a useful feature, but probably not a particularly difficult one.
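To illustrate the data-model change being described, here is a minimal sketch of the "reparent a comment tree onto a new post" idea. The dict keys (`id`, `parent`, `post`) are assumptions for illustration, not the actual LW 2 schema; a real implementation would do this against the database through the API.

```python
# Hypothetical sketch: move a comment and all its descendants to a new post
# by rewriting their "post" field. Field names are illustrative assumptions.

def collect_descendants(comments, root_id):
    """Return the ids of root_id and every comment nested under it."""
    ids = [root_id]
    frontier = [root_id]
    while frontier:
        parent = frontier.pop()
        children = [c["id"] for c in comments if c.get("parent") == parent]
        ids.extend(children)
        frontier.extend(children)
    return ids

def move_thread_to_post(comments, root_id, new_post_id):
    """Point the root comment and all its children at a new post."""
    for cid in collect_descendants(comments, root_id):
        comment = next(c for c in comments if c["id"] == cid)
        comment["post"] = new_post_id
    return comments

comments = [
    {"id": 1, "parent": None, "post": "old"},
    {"id": 2, "parent": 1, "post": "old"},
    {"id": 3, "parent": 2, "post": "old"},
    {"id": 4, "parent": None, "post": "old"},  # unrelated thread, untouched
]
move_thread_to_post(comments, 1, "new")
```

The "button" part of the feature would then just be: create the new post, and call something like this with its id.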

Right, I agree that it doesn't sound difficult from a web-development perspective, but I also think that only praising difficult-to-implement features would create the wrong incentives.

Yet, somehow, it is you saying that there were people who left the rationality movement because of the Solstice ritual, which is the kind of hysterical reaction I tried to point at. (I can’t imagine myself leaving a movement just because a few of its members decided to meet and sing a song together.)

I don't think it's really "a few people singing songs together". It's more an overall shift in demographics, tone, and norms. If I had to put it succinctly, the old school LessWrong was for serious STEM nerds and hard science fiction dorks. It was sup

... (read more)

I think it's a sort of double entendre? It's also possible the author didn't actually read Zvi's post in the first place. This is implied by the following:

Slack is a nerd culture concept for people who subscribe to a particular attitude about things; it prioritizes clever laziness over straightforward exertion and optionality over firm commitment.

In the broader nerd culture, slack is a thing from the Church of the SubGenius, where it means something more like a kind of adversarial zero-sum fight over who has to do all the work. In that context, the pos

... (read more)

Huh, that might make sense. Still seems a weird thing to name the post.

I was about to write up some insight porn about it, and then was like “you know, Raemon, you should probably actually think about this for real, since it seems like Pet Psychology Theories are one of the easier ways to get stuck in dumb cognitive traps.”

Thank you. I'm really really sick of seeing this kind of content on LW, and this moment of self reflection on your part is admirable. Have a strong upvote.

Thanks for inspiring GreaterWrong's new ignore feature.

Man we were considering whether to implement that but then we’re like ‘hmm we probably should not do that on a whim without thinking about it’

For what it's worth, I don't feel like 'escalation spiral' is particularly optimal. The concept you're going for is hard to compress into a few words because there are so many similar things. It was just the best I could come up with without spending a few hours thinking about it.

"Uphill battle" is a standard English idiom; such idioms are often fairly nonsensical if you think about them hard enough (e.g., "have your cake and eat it too"), but they get a free pass because everyone knows what they mean.

and one feature of the demon thread is ‘everyone is being subtly warped into more aggressive, hostile versions of themselves’

See, that's obvious in your mind, but I don't think it's obvious to others from the phrase 'demon thread'. In fact, hearing it put like that, the name suddenly makes much more sense! However, it would never be

... (read more)
Flame war. Don't invent new words ;-)
Seconded; this interpretation didn't ever occur to me before reading Raemon's comment just now.

That post is a fairly interesting counterargument, thanks for linking it. This passage would be fun to try out:

This prompted me to think that it might be valuable to buy a bunch of toys from a thrift store, and to keep them at hand when hanging out with a particular person or small group. When you have a concept to explore, you’d grab an unused toy that seemed to suit it decently well, and then you’d gesture with it while explaining the concept. Then later you could refer to “the sparkly pink ball thing” or simply “this thing” while gesturing at the ball

... (read more)
2 Gordon Seidoh Worley 4y
On the s1/s2 thing, there are alternatives and I try to promote them when possible, especially since around these parts people tend to use s1/s2 for a slightly different but related purpose than their original formulation anyway. The alternative names for the clusters (not all the source names line up exactly, though):

s1: near, concrete, id, fast, yin, hot, elephant, unconscious, machine, outside
s2: far, abstract, superego, slow, yang, cold, rider, conscious, monkey/homunculus, inside

I think near/far is the best, but I think we're stuck with s1/s2 at this point due to momentum.
Data point: I remember that System 1 is the fast, unconscious process by associating it with firstness - it's more primal than slow thinking. This is probably somewhat true, but it defeats the purpose (?).

Most people don't learn jargon by reading the original source for a term or phrase, they learn it from other people. Therefore one of the best ways to stop your jargon from being misused is to coin it in such a way that the jargon is a compressed representation of the concept it refers to. Authors in this milieu tend to be really bad at this. You yourself wrote about the concept of a 'demon thread', which I would like to (playfully) nominate for worst jargon ever coined on LessWrong. Its communicated meaning without the original thread boils down to 'bad t

... (read more)
You mean to say that deliberate anti-epistemology, which combines dehumanization with anthropomorphism, turns out to be bad?

Eliezer also mentioned this in his old article on writing advice:

9. Don't invent new words.
Yes, I frequently violate this myself, but at least I've been trying to keep it down.
If you do violate the rule, then make the new word as self-explanatory as possible. "Seed AI", "Friendly AI", and "neurohacking" are good. "External reference semantics" is bad.
I do quite agree on the "the best jargon is self explanatory" thing, just noting that it's often fairly hard. (I'm interested if you have alternate suggestions for demon thread, although fwiw I find "unholy thread" a bit more intuitive than 'uphill battle in snow', since there are a lot of reasons something might be like an uphill battle in snow, and one feature of the demon thread is 'everyone is being subtly warped into more aggressive, hostile versions of themselves'. I agree that connotation is still pretty culture dependent though)

Another option might be to use a word without any baggage. For example, Moloch seems to have held onto its original meaning pretty well, but then maybe that's because the source document is so well known.

EDIT: I see The sparkly pink ball thing makes a similar point.

I like the spirit of this post, but think I object to considering this 'too smart for your own good'. That framing feels more like an identity-protecting maneuver than trying to get at reality. The reality is that you think you're smarter than you are, and it causes you to trip over your untied shoelaces. You acknowledge this of course, but describing it accurately seems beyond your comfort zone. The closest you get is when you put 'smart' in scare quotes near the end of the essay.

Just be honest with yourself, it hurts at first but the improvement in perspective is massive.

Agreed, except that the behaviour described could also just be procrastination.
Thanks, fixed.

It's been a classic guideline of the site for a long time that you should avoid the word 'rational' or 'rationalist' in titles as an adjective to describe stuff. In the interest of avoiding a repeat of the LW 1 apocalypse, I (and probably others) would really appreciate it if you changed it.

Gotcha. Thanks for the heads up!

Suggested feature: adding a “link option” to answers. I’m not sure what this is actually called, but it’s a feature that comments have. For example, here is a link to this comment.

This is generally called a permalink.

I think my broader response to that is "Well, if I could change one thing about LW 2 it would be the moderation policy."

That seems strictly off topic though, so I'll let it be what it is.

General moderation seems off topic for this particular post. I think the guidelines for either what questions should go on the frontpage, or various ways you might want to filter questions, are fair game. (Regardless, it will continue to be the case that you can post whatever question you want to your personal blog)

My Complaint: High Variance

Well, to put it delicately, the questions have seemed high variance when it comes to quality.

That is, the questions posed have been either quite good or stunningly mediocre, with little in between.

3 examples of good questions

3 examples of not as good questions

I'd prefer to be... (read more)

Different types of questions seems useful. What categories sound like a good idea?
I think my broader response is "rather than try to resolve this by discouraging certain questions, solve it through filtering."

Right now, we have a minimum-viable system where all questions show up on frontpage so long as they meet the frontpage criteria. This means questions appear to be weighted about as strongly as a post in terms of importance, and that there isn't much in the way of filtering of what sort of questions get displayed. I think both of these could be resolved with a more dedicated question management system.

I think it's fairly important for people to be able to post questions freely – a lot of progress depends on people being able to pursue curiosity wherever it goes. So I think letting people do that, and then having some requirements like "frontpage questions need to be particularly well formed" and possibly some tighter requirements on topic, and/or have something like subreddits that focus on particular topics, is probably a better overall solution.

(It also so happens I think I roughly disagree with some of the "bad question" examples. The sunscreen example isn't deeply entwined with things-LW-tends-to-focus-on, but it *is* a question where the answer actually requires some rationality to think about, and I think it's in fact a good use of LW to be a place you can go to ask questions where you can expect people to have thought clearly/usefully about how to weigh evidence when answering them)

I found the "In what ways are holidays good" question actually quite useful. Not sure what you mean by the "Bizarre, alien perspective.", since I don't think I really understand what holidays do either (which doesn't mean they don't do anything, I just don't have a great model of what they do).

The official LessWrong 2 server is pretty heavy, so running it locally might be a problem for some people.

Whistling Lobsters 2.0 uses a clone of the LW 2 API called Accordius as its backend. Accordius is, with some minor differences, nearly an exact copy of the pre-October LW 2 API. It was developed with the goal that you could put the GreaterWrong software in front of it and it would function without changes. Unfortunately, due to some implementation disagreements between Graphene and the reference GraphQL library in JavaScript, it's only about 95% compati

... (read more)
A great deal of my affection for hackers comes from the unique way they bridge the world of seeking secrets about people and secrets about the natural world. This might seem strange, since the stereotype is that hackers are lonely people who are alienated from others, but this is only a half-truth. In both the open source MIT tradition and the computer intrusion phone phreaking tradition, the search for secrets and excellence is paramount but fellow travelers are absolutely welcome on the journey. Further, much of even the ‘benign’ hacking tradition relies
... (read more)

I think users that are used to Markdown will often use single bold words as headings, and I feel hesitant to deviate too much from the standard Markdown conventions of how you should parse Markdown into HTML.

Don't know where you got this notion from, but absolutely not. Markdown has syntax that's used for headings, and I've never used bolded text as a replacement for a proper heading.

(As a wider point, Said Achmiz is as usual correct in his approach and it would be much appreciated if you didn't inflict any more appalling HTML practices on API consumers)

We just serve the historical HTML for practically all posts, and all new HTML is really as straightforward HTML as you can imagine (with some exception for blockquotes, which we currently split into block-level elements, though that will be fixed soon). Happy to hear about any other problems you have with the HTML, but I am not aware of any.

Just because Markdown has a heading syntax doesn't mean that everyone follows it, and depending on context you might not want to follow it. I literally googled "Markdown bold" and among the first few results this [] tutorial uses bolded headers as an example.

(My guess is you wanted to write “Can’t I post any Open Questions I have right now...“, so I will respond to that, but let me know in case I misunderstood)

Nope. My question was literally just whether I can post some open questions I have right now to LessWrong, this sounds like an excellent direction for the website to take.

Heh. I interpreted your question the other way, and my off the cuff answer is "Yes, you can, although it wouldn't automatically get converted into the new format. It would probably be pretty easy to convert into the new format though. But, there's enough pieces still up in the air that I can't make promises about it."

We’re interested in people’s thoughts on the idea so far. Any questions about Open Questions?

Can I post any Open Questions I have right now with a title like:

"[Open Question] Bla bla bla bla?"

(My guess is you wanted to write "Can't I post any Open Questions I have right now...", so I will respond to that, but let me know in case I misunderstood)

Yep, you can, but I think there are a few reasons why people don't, and why doing so wouldn't get you all the benefit of having a more dedicated Q&A system:

  • In terms of UI, I think you want to give people affordances for asking questions, which we right now don't do (simple things like having a smaller text-field that doesn't scream for 3 pages of content, plus a to
... (read more)

Will second not enjoying Neuromancer very much.

I missed that line and I apologize. A strong upvote for your troubles.

I have not invented a "new style," composite, modified or otherwise that is set within distinct form as apart from "this" method or "that" method. On the contrary, I hope to free my followers from clinging to styles, patterns, or molds. Remember that Jeet Kune Do is merely a name used, a mirror in which to see "ourselves". . . Jeet Kune Do is not an organized institution that one can be a member of. Either you understand or you don't, and that is that. There is no mystery about my style. My movements are simple, direct and non-classical. The extraordinary

... (read more)

While writing the about page for the upcoming Whistling Lobsters 2.0 forum, I took a shot at giving a brief history of and definition of rationality. The following is the section providing a definition. I think I did an okay job:

The Rationalist Perspective

Rationality is related to but distinct from economics. While they share many ideas and goals, rationality is its own discipline with a different emphasis. It has two major components, instrumental and epistemic rationality. Instrumental means "in the service of"; it's about greater insight in the service

... (read more)

Yup. Empirically, people who lose lots of weight and keep it off have a CONSTANT VIGILANCE mindset going.

This isn't to say that OP's post is untrue, but rather that they're underestimating just how badly the odds are stacked against those who are obese.

HBO's The Weight Of The Nation documentary goes into the Weight Control Registry study on long term weight loss, and the common factors between people who manage to keep it off:

From the post:

Something like 95 per cent of people who lose weight put it all back on. Almost every attempt is doomed to fail.

Fat people who are trying to lose weight are heroes, engaged in a struggle worthy of Sisyphus. Every conceivable force is levelled against them.

Not sure what gave you the impression I'm underestimating the odds, or the difficulty of the endeavour? That was literally the whole point of the post. If it wasn't communicated clearly enough, my apologies; I'd be interested in any feedback on which bits were confusing.

Even if we take that interpretation, I think 3 and 4 are useful operational expansions of 1 and 2. They're concrete things you can do to implement them.

"How hard it is to obtain the truth is a key factor to consider when thinking about secrets. Easy truths are simply accepted conventions. Pretty much everybody knows them. On the other side of the spectrum are things that are impossible to figure out. These are mysteries, not secrets. Take superstring theory in physics, for instance. You can’t really design experiments to test it. The big criticism is that no one could ever actually figure it out. But is it just really hard? Or is it a fool’s errand? This distinction is important. Intermediate, difficult t

... (read more)

One of the reasons why academia has all those strict norms around plagiarism and citing sources is that it makes the "conceptual family tree" legible. Otherwise it just kind of becomes soupy and difficult to discern.

So how many "confirmed kills" of ideas found in the sequences actually are there? I know the priming studies got eviscerated, but the last time I looked into this I couldn't exactly find an easy list of "famous psychology studies that didn't replicate" to compare against.

I know the priming studies got eviscerated, but the last time I looked into this I couldn't exactly find an easy list of "famous psychology studies that didn't replicate" to compare against.

My understanding is that even this story is more complicated; Lauren Lee summarizes it on Facebook as follows:

OK, the Wikipedia article on priming mostly refers to effects of the first kind (faster processing on lexical decision tasks and such) and not the second kind (different decision-making or improved performance in general).
So uh. To me it j
... (read more)

Well, if someone were interested in this, it seems possible (though time-consuming, of course) to go through every mentioned study or result in the Sequences, research it, and figure out whether it’s been replication crisis’d, etc. This seems like valuable information to gather, and (as noted in the linked comment thread) the tools to aggregate, store, and collaborate on that gathered info already exist.

I do not know of any extant list, though.

To be really frank, and really succinct:

Abuse of the word 'rational' was one of the original social stressors that killed LessWrong.

It is not more fitting, and you should actually go back and edit your post to change it.

What do you think "rational" means when you think "skillfully" would be more fitting?

The most common pattern I run into, where I’m not sure what to do, is patterns of comments from a given user that are either just barely over the line, or where each given comment is under the line, but so close to a line that repetition of it adds up to serious damage – making LW either not fun, or not safe feeling.

What I used to do on the #lesswrong IRC was note every comment like this in a journal as I saw it, and then once I found myself really annoyed with someone I would open the journal to help establish the pattern. I'd also look at peoples... (read more)

On the one hand, I too resent that LW is basically an insight porn factory near completely devoid of scholarship.

On the other hand, this is not a useful comment. I can think of at least two things you could have done to make this a useful comment:

  1. Specified even a general direction of where you feel the body of economic literature could have been engaged. I know you might resent doing someone else's research for them if you're not already familiar with said body, but frankly the norm right now is to post webs spun from the fibrous extrusions of people's mu

... (read more)
5 Charlie Steiner 5y
Fair enough. I'll try to be more usefully specific in the future.

Excellent question. The short answer is when I'm not swamped and running on razor-thin margins of slack, hopefully soon.

3 Paperclip Minimizer 5y
cough cough

This is actually a fairly powerful intuition that I hadn't considered before. In case it might help others:

Keep in mind that a Dunbar-sized tribe of 300 people or so is going to have more than one 'leader' (and 300 is the upper limit on tribe size). Generally you're looking at a small suite of leaders. Let's say there are a dozen of them. In that case we should naively expect the level of personal fitness required to 'lead a tribe' to be somewhere in the 1-in-30 range; you meet people who would have been leaders in the ancestral environment quite literally every day, multiple times a day even.

Reconcile this with what you actually observe in your life.

Relevant to my earlier comment is a fascinating essay by sociologist Theda Skocpol, called “The Narrowing of Civic Life” (h/t The Scholar’s Stage):

To understand the changes wrought by this sweeping civic reorganization, it is useful to consider the significant role these membership groups played in American life dating back at least a century. From the 1800s through the mid-1900s, countless churches and voluntary groups of all sizes needed volunteer leaders. Indeed, the country’s largest nation-spanning voluntary federations could have as many as 15,000

... (read more)
2 · [comment deleted] · 3y
2 · [comment deleted] · 3y
My day job is, essentially, "grunt". I work with about 30 other people. I can immediately think of two leader-types among the grunts -- three if I count someone who recently quit. I used to work a different shift, and there were no leader-types among the grunts there. There are a few more people who I'm pretty sure could be leader-types if they wanted to, but don't want to. Small sample size, I know, but one ought to test these things against daily life, and by that test 1/30 seems to be in the right ballpark.

That said, things like grunt jobs and (I assume; I've never played any) MMORPGs probably lend themselves more easily to leadership opportunities than things like rationality -- there are different sorts of leadership called for. In the one case, there are concrete and well-defined goals to be met, and there's domain-specific knowledge accumulated mostly through experience that needs to be applied in order to meet those goals, and leadership entails being generally recognized as 1) having a sufficient accumulation of domain-specific knowledge to know what has to be done to meet those goals, know what to do in most situations that will arise, and probably be able to figure something out in most of the rest of the situations, 2) not a prick.

In the other case... I'm not really sure what leadership in the ratsphere calls for, but it's probably not that. For one thing, we don't have concrete and well-defined operational goals; for another thing, we don't even have much general agreement on _strategic_ goals, although there are subsets of the ratsphere that do.
5 · Said Achmiz · 5y
This is consistent with my experience. To elaborate… as with so many things, World of Warcraft makes[1] a good case study here. In WoW, opportunities for leadership are legion. One may lead groups of 5 people, or raids of 10, 15, or 25 (and once upon a time, even 40). Of course, not every 5-man group will end up with a good leader, nor every 10-man group. But decent, or even good, group/raid leaders are common enough to lead groups through a variety of challenges of varying difficulty.

Conversing with some of one's fellow players inevitably reveals that most of the folks who make good group or raid leaders are, in their "real" lives, programmers, actuaries, artists, salespeople, students, secretaries, unemployed loafers, lawyers… in short, they come from a variety of backgrounds and professions, with no particular pattern to be discerned among them. But give them a small group of people, a mutual goal, and challenges, and they will lead, and people will follow, and the goal will be achieved. A game like World of Warcraft simply couldn't work if "leadership qualities" were as rare as CEOs and entrepreneurs.

[1] Well, made. Things are different now, and less interesting, though perhaps we may observe many of these dynamics again once WoW Classic is released. Consider my comments to apply to the WoW of then, not the WoW of now.

As a note, when you make an abstract discussion about yourself, that exposes your identity to more of the fallout from it. It also forces other people to consider you personally in their responses, which sets you up as a proxy target for the idea itself. Unless you're a particularly charismatic, high-status person, this mostly ends up being a way to consistently clobber yourself, strategically speaking.
