All of katydee's Comments + Replies

With respect to power dynamics points one and two, there is another person known to the community who is perhaps more qualified and is already running something similar in several respects - Geoff Anders of Leverage Research. So I don't think this is the only group attempting something of this sort, though I still find it novel and interesting.

(disclaimer: I was at the test weekend for this house and am likely to participate)

Duncan Sabien:
Yeah, Geoff and Leverage have a lot I would love to look at and emulate, but I haven't been running under the assumption that I'd just ... be allowed to. I'm beginning some conversations that are exciting and promising.

That being said, I do think that the overall goals are somewhat different. Leverage (as far as I can tell) is building a permanent superteam to actually do stuff. I think Dragon Army is building a temporary superteam that will do stuff in the short and medium term, but is more focused on individual leveling up and sending superhero graduates out into the world to do lots and lots of exploring and tackle a wide number of strategies. My model of Leverage is looking for the right thing to exploit on, whereas I'm looking for how to create competent people, and while there's a lot of overlap those are not the same Polaris.

I similarly think Geoff is highly competent and certainly outstrips me in some ways (and possibly is net more qualified), but I'd posit I outstrip him in a roughly similar number of ways, and that he's better matched for what Leverage is doing and I'm better matched for what DA is doing (sort of tautologically, since we're each carving out mountains the way we think makes the most sense). I think the best of all would be if Geoff and I end up in positions of mutual respect and are able to swap models and resources, but I acknowledge he's a good five years my senior and has no reason to treat me as an equal yet.

EDIT: Also note that Geoff is disqualified by virtue of already being busy, and as for "just join Leverage," well ... they've never really expressed interest in me up to this point, so I figured I wouldn't bother them unless I was no longer employed day-to-day.

Something like this also happened with Event Horizon, though the metamorphosis is not yet complete...

It looks like it's finishing soon, though.

Broadly agreed - this is one of the main reasons I consider internal transparency to be so important in building effective organizations. In some cases, secrets must exist - but when they do, their existence should itself be common knowledge unless even that must be secret.

In other words, it is usually best to tell your teammates the true reason for something, and failing that you should ideally be able to tell them that you can't tell them. Giving fake reasons is poisonous.

In some cases it can be - and I will discuss this further in a later post. However, there are many situations where the problems you're encountering are cleanly solved by existing paradigms, and looking at things from first principles leads only to reinventing the wheel. For instance, the appropriate paradigm for running a McDonald's franchise is extremely well understood, and there is little need (or room) for innovation in such a context.

My quite simplistic understanding is that (a) yes, there are already existing solutions, but (b) the people providing those solutions will charge you a lot of money, especially if you later become dependent on them; you still need to check their work, and they may disagree with you on some details because at the end of the day they are optimizing for themselves, not for you.

Doing things yourself requires extra time and energy, but the money which would otherwise become someone else's profit now stays in your pockets. Essentially, as soon as you feel reasonably sure that the project will be successful, getting rid of each subcontractor means increasing your profit. You don't need to become an expert on everything; you can still hire the experts, but now they are your employees working for a salary, instead of a separate company optimizing for their own profit. Not sure how realistic this is, but if you imagine that even a typical successful company somewhat resembles the Dilbert comic, then if you can build your own company better, you can just take over their people who do the actual work, and stop feeding the remaining ones.

EDIT: I don't have experience running a company, but I am thinking about a friend who recently reconstructed his house. His original thought was "I am a software developer, this is my competitive advantage, so I will just pay the people who are experts on house construction", but it turned out that the real world doesn't work this way. Most of the so-called experts were quite incompetent, and he had to do a lot of research in their field of expertise just to be able to tell the difference. When the reconstruction was over, he already felt like he could start a new profession and do better than most of these experts. In this case, however, those experts were typically sole proprietors. If instead they had been companies renting out the experts, and if my friend were in some kind of business of repeatedly reconstructing hous…
I agree with this response; using first principles is a heuristic, and heuristics always have pros and cons. Just in terms of performance, the benefit is that you can re-assess assumptions, but the cost is that you ignore a great amount of information gathered by those before you. Depending on the value of this information, you should frequently seek it out, at least as a supplement to your derivation.

This is one of the worst comments I've seen on LessWrong and I think the fact that this is being upvoted is disgraceful. (Note: this reply refers to a comment that has since been deleted.)

To clarify, there are 4 embarrassing/disgraceful/noteworthy things happening here, which are embarrassing to different people in different ways.

First, the fact that The_Lion thinks this way is a disgrace for The_Lion.

Second, the fact that his comment is heavily upvoted is due to the fact that he has sockpuppet accounts which he uses to upvote his posts. It is slightly embarrassing for The_Lion that he chooses to interact with the internet in this way.

Third, the fact that The_Lion has not been banned despite making comments like this one and generating upvot…

This note is for readers who are unfamiliar with The_Lion:

This user is a troll who has been banned multiple times from Less Wrong. He is unwanted as a participant in this community, but we are apparently unable to prevent him from repeatedly creating new accounts. Administrators have extensive evidence for sockpuppetry and for abuse of the voting system. The fact that The_Lion's comment above is heavily upvoted is almost certainly entirely due to sockpuppetry. It does not reflect community consensus.

Moved to Discussion.

So, maybe this is just my view of things, but I think a big part of this conversation is whether you're outside looking in or inside looking out.

I'm on the inside and I think we should get rid of these things for the sake of both insiders and outsiders.

Is that true? I mostly don't notice people scoring cheap points by criticizing religion; I mostly notice them ignoring religion.

See for instance Raising the Sanity Waterline, a post which raises very important points but is so unnecessarily mean-spirited towards religion that I can't particularly show…

In terms of weird fixations, there are quite a few strange things that the LW community seems to have as part of its identity - polyamory and cryonics are perhaps the best examples of things that seem to have little to do with rationality but are widely accepted as norms here.

If you think rationality leads you to poly or to cryo, I'm fine with that, but I'm not fine with it becoming such a point of fixation or an element of group identity.

For that matter, I think atheism falls into the same category. Religion is basically politics, and politics is the mind…

For me, one of the most appealing things about EA (as opposed to rationalist) identity is that it's not wrapped up in all this unnecessary weird stuff.

I'd consider EA itself to be one of those strange things that LW has as part of its identity. It's true that EA involves rationality, but the premises that EA is based on are profoundly weird. I have no desire to maximize utility for the entire human race in such a way that each person's utility counts equally, and neither does just about everyone else outside of the LW-sphere. I prefer to increase uti…

Seems to me we have to differentiate between two things:

a) x-rationality (rationality without compartmentalization)

b) LessWrong x-rationalist culture

Rationality means thinking and acting correctly, not doing stupid stuff. Culture means creating an environment where people feel comfortable, and are encouraged to do (what the culture considers to be) the right thing. There is only one rationality, but there can be multiple rationalist cultures. Different cultures may work better for different people. But different people cannot have different definitions of rationality.

Seems to me that polyamory is a clearly cultural thing, atheism is a part of rationality itself (not believing in magic, not accepting "mysterious answers", reductionism), and cryonics is... somewhere in between, these days probably more on the cultural side. Secular solstice is obviously a cultural thing, and in my opinion not even a central component of the traditional LW culture, although it's obviously related.

I love the "good old hardcore LessWrong rationalist culture", and I would be sad to see it disappear. I want it to survive somewhere, and LW seems like the logical place. (I mean, where else?) But I don't want to push it on other people, if they object. I enjoy it, but I can understand if other people don't. I support experimenting with other rationalist cultures.

Not sure what the solution is here. Maybe making the cultures more explicit? Giving them names? Yes, this encourages tribal thinking, but on the other hand, names are Schelling points. (And if we don't have an explicit name for the culture, people will simply use "the rationalist community" as a name, and then there will be confusion when different people try to define it differently, when what they really mean is they prefer different cultures.)

Actually, this could be an interesting topic for a separate discussion: Do we need a rationalist culture? What kinds of cultures (that we could consider rationalist) alread…
I don't notice Less Wrong users bashing religion all the time. At some point in the past, there may have been more overlap with New Atheism, but because there are no new points being made in that domain these days, among other reasons, I don't observe this as much. Mind you, I could be biased based on how I spend less time on Less Wrong the website these days, and spend more time discussing with friends on social media and at meetups, where bashing religion seems like it would take place less often anyway.

Mentally, I've switched out "politics is the mind-killer" for "politics is hard mode". That article was originally written by Robby Bensinger, and I think it works better than the original sentiment, for what it's worth.

I perceive the secular solstice as part of the rationalist community being a step away from the public atheism and skeptic communities, at large. While in many skeptic circles, or among casual atheists, people I know seem grossed out by the elements of piety and community devotion, it seems to me the rationalist community embraces them because they understand, psychologically, replicating such activity from organized religion can engender happiness and be empowering. The rationalist community may be able to do so without receiving all the fake and false beliefs which usually come with the territory of organized religion. In embracing the secular solstice, perhaps the rationalist community isn't afraid of looking like a bunch of clowns to achieve their goals as a social group.

On the other hand, the secular solstice could be too heavy-handed with symbolism and themes of anti-deathism and transhumanism. I haven't attended one. I know there were big ones in Seattle, New York, and Berkeley in 2014, and I think only the latter was so overtly steeped in transhumanist memes. I could also have more sentimentality for the idea of a "secular solstice" than most non-religious folk, as I seem to perceive more value in "spirituality" than others.
So, maybe this is just my view of things, but I think a big part of this conversation is whether you're outside looking in or inside looking out. For example, I'm neither poly nor signed up for cryo, but I'm open to both of those things, and I've thought them through and have a balanced sense of what facts about the world would have to change for my identification / recommendations to have to change. In a place where most people have seriously considered the issue, that gets me no weird looks. But saying "I'm open to cryo" to an audience of stereotypical skeptics comes across as an admission of kookery, and so that's the relevant piece about LW they notice: not "they don't scoff at ideas" but "they believe in cryonics more than normal."

Is that true? I mostly don't notice people scoring cheap points by criticizing religion; I mostly notice them ignoring religion.

Mmm. I would say that "religion is basically community"--they're the people you spend a lot of time with, they're the people you have a shared history / myth base with, they're people you can trust more than normal. And any community, as it becomes more sophisticated, basically becomes a 'religion.' The Secular Solstice is part of making a genuine sophisticated rationalist community--i.e., a rationalist religion, of the "brownies and babysitting" variety rather than the "guru sex cult" variety.

I think LessWrong has a lot of annoying cultural problems and weird fixations, but despite those problems I think there really is something to be gained from having a central place for discussion.

The current "shadow of LessWrong + SSC comments + personal blogs + EA forum + Facebook + IRC (+ Tumblr?)" equilibrium seems to have in practice led to much less mutual knowledge of cool articles/content being written, and perhaps to less cool articles/content as well.

I'd really like to see a revitalization of LessWrong (ideally with a less nitpicky cultu…

I've read some of the comments below, and I'm thinking both for your own use and further discussion it will help to distinguish between different sorts on Less Wrong by reading this post by Ozy Frantz.
Not that they aren't here, but which ones are you talking about? What's a weird fixation to some might be an attractor for others, and vice versa.

My impression significantly differs, though I'm far from confident. I'd be interested in seeing an expanded version of this point because it seems potentially very valuable to me.

Great, thanks for letting me know.

I agree, that comment was written during a somewhat silly period of my life. :)

This post seems more suited for the Discussion section (insofar as it is suitable for LW at all).

It went very well - too well, in fact! Writing a LessWrong post did not feel alive to me, so I didn't do it.

Great post! I'd love to see this in the Main section.

Would you avoid making yourself better at thinking because you might start winning arguments by bamboozling your opponent?

I do avoid making myself better at arguing for this reason. Thinking is another story.

Affordances; men with hammers and all that.

There's more than one affordance. For example, the one of being able to go out without having to think all the time about safe routes and sticking to brightly lit public spaces. Would you avoid making yourself better at thinking because you might start winning arguments by bamboozling your opponent?

I'm considered pretty good in this respect. I think the #1 thing that helps is just paying attention to things a lot and having a high degree of situational awareness, which causes you to observe more interesting things and thus have more good stories to share. Reading quickly also helps.

When it comes to actually telling the stories, the most important thing is probably to pay attention to people's faces and see what sorts of reactions they're having. If people seem bored, pick up the pace (or simply withdraw). If they seem overexcited, calm it down.

One go…

While it's important to avoid encouraging political debates on LessWrong, exercising virtues such as moderation and tolerance when such issues do come up is even more important.

I agree. That's why I looked at advancedatheist's comment history before replying. If this were the only such comment, I would not have called it out-- but this user has a history of posting similar comments.

Now, advancedatheist has also posted comments that advocate neoreactionary positions in ways that I consider totally appropriate for LessWrong-- this one, for example. But IM…

Politics is an important domain to which we should individually apply our rationality—but it's a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.

The purpose of LessWrong is to discuss and learn rationality, so I think politics are almost never appropriate here. But even if we think that civilized discussion of political matters is appropriate, the post I was critiquing was not, IMO, up to our standards of civility and polite discussion.

I don't think it was out of line. May I suggest instead that you reject it because of its ideological content, which you find unacceptable? In fighting against political mindkill, you fall prey precisely to what you object to.

For discussion of political matters? A bit late for that, I think. This train has left the station.

Has it? Insofar as it has, that's been thanks to our own failure to tend to basic principles. I think that in order to better reach as many people as possible, it's critical that LW avoid politics and the potential biases that can result.

I do agree that having civilized discussions even while disagreeing about politics is important. But there are other venues for that, like Slate Star Codex, and if we indeed need more of this I think it's better to move it off-site.

Re-read that post carefully :-) It doesn't say not to discuss politics, it says don't be an ass about it.

I am unaware that this is a goal of LW. If, by any chance, it is, LW is spectacularly unsuccessful at it :-D

Well, we disagree about that. In a fairly civilized fashion, so far :-)

P.S. And most discussion here is actually about political philosophy, not politics themselves. Notice how today's US elections which flipped the Senate got zero posts on LW.

Yes, ha ha. This is a serious matter, though. I believe that it really truly doesn't matter whether someone's political points are good or not. LessWrong should not allow itself to be a venue for this sort of behavior, especially when it's accompanied by this sort of tone.

In order for the LessWrong community to flourish, I think it is critical that it be divorced from bickering over political matters. So when it comes to posts like this one, I really truly don't care whether their arguments are valid or not-- either way, they shouldn't be on LessWrong.

For discussion of political matters? A bit late for that, I think. This train has left the station.

I disagree. "Bickering", of course, is a word with negative connotations, but I see no reason to taboo political discussions here. Politics of all sorts are important in real life and having a giant blind spot doesn't look too useful for that winning thing that rationality is supposed to be about :-/ So far on LW people have shown their ability to have civilized discussions even while disagreeing about politics. That's a good thing.

I don't particularly care about whether the points are valid. This kind of discussion isn't what LessWrong is for, especially when it's being posted with this sort of tone.


I disagree with the general concept that LW is an appropriate place to post bizarre, mindkilled political rants.

I agree that the tone sucks. However, some of the points are valid. For example, the large chunk of opposition to (online) feminism is now from the men's rights crowd, not from the traditional-gender-roles crowd. And this pattern should be expected to continue in the future. For example, the main opposition to assisted suicide in the US is currently religion-motivated. However, in Canada and elsewhere where religion is only a minor player, the main opposition is from the secular disability rights movements. The advocates of the right to die with dignity will find themselves opposing similarly "progressive", kind and compassionate people, once the issue is no longer about faith. You can probably name another issue or two where overcoming one obstacle only leaves you bashing against a different, unexpected one, without having made much progress.

Please stop making comments like this.

shminux is right here, this is not a helpful attitude on your part. While it's important to avoid encouraging political debates on LessWrong, exercising virtues such as moderation and tolerance when such issues do come up is even more important.
This request is likely to be ineffectual without something more concrete. The OP makes several rambling points; it's not clear which you disagree with.

The example that springs to mind most readily is that a few days ago, someone asked me if we had a video cable I hadn't heard of in the office. I didn't recognize the name but knew I'd recognize it by sight, so I searched for the name of the cable online, found a picture of it, and directed the person to the right location.

It doesn't take significantly longer for me (I just did a side-by-side comparison and couldn't tell the difference), though I have Google Fiber at home and pretty fast Internet at work. I also didn't notice the ads until you pointed that out and don't consider them particularly annoying.

That said, if these are persistent stumbling blocks then by all means don't use this service. If Goodsearch took 2-3 seconds more than Bing/Google for me I would certainly not use it.

I have some degree of discipline and a pretty good degree of self-awareness, but in the past-- even the recent past-- I've definitely found myself doing shiny but unfulfilling activities for extended periods. It's possible that I've gained a bunch of skill or willpower without noticing it and that this event caused me to shift into a mode that I didn't know how to access before, but this didn't feel like using discipline to me.

I've experimented with different alarms. For some reason the one that seems to work best is very loud and harsh-- not because it wakes me up, but because my subconscious hates it and consistently wakes me up a few minutes before it goes off. I'm not sure what exactly causes this effect but I've found it extremely useful.

Hmm, depends on what you mean by useful. I think lucid dreaming is:

a) very fun

b) useful for becoming more rational, but only in a somewhat limited way-- it can be very good for training noticing confusion but doesn't seem to have a huge amount of potential beyond this.

By useful, I mean, can I use it to:

* Practice a speech?
* Deliberate about a decision with imagined famous people?
* Get more comfortable around people of high status?

This is a line of development that-- while clearly useful-- seems somewhat hacky and unpromising to me. While I agree that this is likely to yield useful benefits in the short run, it strikes me that fixing one's internal structure in order to produce reliably correct external actions without these sorts of hacks seems more promising in terms of long-term growth and skills.

About a year ago, I thought that lucid dreaming was a great path to rationality. While lucid dreaming is a great way to train the skill of noticing confusion, I no longer recommend it to…

I've been thinking about trying out lucid dreaming. Do you think it's not useful in general, or just in terms of becoming more rational?

Extremely good post. I'd love to see more content like this on LessWrong.

"Cake or Death" was part of an Eddie Izzard joke from 1998-- I think it has achieved some kind of widespread memetic success, though, since I've seen it in quite a few places since.

This approach 80/20s the point I want to convey. Writing up a bunch of examples is more work than the entire rest of the post combined and IMO adds substantially less utility, so I'm not doing it here. I'll probably do so when/if I write this up for Main.

If you can reliably emulate a wiser person, why not just be the wiser person?

I think you're missing my point. I'm saying it's easier to be wise about someone else's problems than your own. My "Big Brother" need not be wiser than me, just wiser than me about my problems.
Being wise is the goal, emulation is a method of approaching it.

Yes, I consider this an empirical claim. I have a fair amount of anecdata from people I've shared this with in person about this being a useful approach.

That said, I agree that some may not find this effective or will find it harmful; this is why I wrote "in many cases" rather than "in almost all cases" or "you will find" or similar.

If you do not find this technique effective, I suggest that you don't practice it. I and a few friends found it useful and interesting enough to be worth disseminating.

Wouldn't it be easier to just write down and then assess/ameliorate the biggest weaknesses in your plans?

In theory, yes; in practice, this seems to work less effectively than we'd like to think.

Are there studies on this? Or is it your personal observations? If it's the latter, what works best for you?

You even mention an example and then still fail to actually give it. That annoyed me because it would have been nice to see this abstract idea grounded.

In general I think LessWrong cares far too much about this sort of detail. I posted this in Discussion rather than Main precisely because I didn't want to write up a bunch of examples to express a straightforward principle.

Thing is, it's not straightforward to me what exactly I have to imagine, and I don't seem to be alone with this. (See e.g. Nectanebo's comment below.) In general, the established practice on LessWrong is to give examples to illustrate what you mean, and I disagree that this is "caring far too much about that sort of detail".

Is this intended to be an empirical claim?

I'm confused as to what thought process generated this comment. Can you explain?

Sure. The piece of advice being offered is a potent one: "Any time you come up with a significant plan, assume the worst about your own planning and your own performance. Specifically, reformulate the plan and your expectations of its execution in terms you would find insulting." If someone were to take this seriously, it would dramatically change the way they live. Much in the same way that the corresponding epistemic pessimism is supposed to, and I assume that's your intention. So, again, potent medicine.

At the heart of your argument for doing this is what looks to be an empirical claim: "In general, following this advice will give you an accurate picture of your probable performance." This may be so, but it seems likely to me that the relationship between expected and actual performance will vary significantly from person to person. And if in a given case your claim is off the mark, the effect could easily be harmful.

So, I wondered if you considered this an empirical claim. And if so, whether or not you had some relevant data. If you don't have any relevant data, and you do consider this an empirical claim, then it may be a bit rash to offer this kind of advice.

I agree that "it's arrogant for you to write a book" is probably not helpful, though "you can't finish a project this long" may or may not be helpful depending on whether you generate that thanks to reference class forecasting (even insulting, biased reference class forecasting) or thanks to negative self-image issues.

In general, I do not advocate this (or any other) technique if it causes damage to your self-concept, intrusive thoughts, etc.

The problem with "You can't finish a project this long" is that it doesn't come with a reason like "You haven't set aside enough time" or "Planning fallacy!" or "You'll have to trade off against more worthwhile use of your time", which are all useful to address. I'm describing a kind of thought that doesn't feel like troubleshooting but more like anti-self-efficacy, where the problem isn't the plan, it's that the plan has you in it. I like pre-mortems, outside view, etc., so I'm not denigrating the technique, just flagging an error mode.

I actually find it more effective than the pre-mortem (and the closely related pre-hindsight technique I learned at CFAR). While those techniques are certainly effective, I think it's easy to be too charitable to oneself, even in failure. This version has explicit safeguards against that possibility.

As for the name, certainly it is not fully accurate. That said, being memorable and salient is quite important. The primary failure mode of this sort of technique is not remembering it in the heat of the moment, so I selected a name optimized for being shocking and memorable rather than fully accurate.

Unfortunately, I don't know of an easy way to do this, aside from writing the post in the LW editor-- that said, I think it looks good now. Thank you for editing!

I would appreciate it if you reformatted this post to match standard LW font/text size conventions.

Done, sorta - but I had to manually edit the HTML since there don't seem to be built-in font controls. Am I missing something? Was there an easy way to do this? Also, please let me know if there are still loose ends I need to tie up.

Very funny-- my own choice for a fun-but-useless martial art was the épée!

And which was the unfortunately-useful martial art you took before that?

I used the term antiskill for parallelism with the concept of an antipattern. While I agree it is somewhat imprecise, I think the parallelism, ease of use in speech, and general aesthetic virtue of the term is enough for it to be better than "disadvantageous skill"-- though I had considered that earlier and certainly think it's a potentially valid choice.

The martial arts example I provided in this comment may prove to your liking.

Indeed it does. I have to point out that even that skill strongly depends on context (albeit a specific but very common context).

Could you give an example where it helped you make a decision against learning a skill?

For a while I was interested in learning martial arts for self-defense. Then I realized that a version of me that had advanced martial arts knowledge would be more inclined to fight people, while a version of me that did not have advanced martial arts knowledge would be more inclined to avoid conflict.

Given that fighting someone-- even with advanced/superior skill-- is likely much more dangerous than avoiding conflict, and that there is a risk of injury in martial arts tr…

Why? Especially:
Not learning combat skills as a commitment to avoid conflict is a nice mirror image of Schelling's Xenophon example, where cutting off your ability to retreat is a way to commit yourself to winning a fight.
Eh, I have a black belt and I don't think it's increased my likelihood of getting into fights at all. Now bodybuilding, on the other hand, is definitely causing an issue via increased testosterone. I still don't get into fights, though.

One useful thing to remember, if one is the sort of person who reads LessWrong, is interested in either of these activities, but doesn't want to get into fights, is the fact (I can't remember where I read this, but for our purposes it's instrumentally useful to believe even if false) that the average IQ of individuals ending up in the emergency room because of fight-related injuries is 87 -- not because dumb people are more likely to lose fights, but because smart people are more likely to avoid them. If you think of yourself as "not the kind of person who gets into fights" (because they are mostly idiots), you're less likely to get into fights.
I don't think that learning a martial art increases the chance that you will fight. People usually fight because they are afraid and their fight-or-flight response triggers. Being confident in critical situations because you know how to fight reduces the chances of actually fighting.
It is worth pointing out that most martial arts, at least in the older traditions, put quite a bit of stress on the skill of fight avoidance. I have no clue how true that is of American martial arts training. It also may be genuinely easier to avoid fights if you are not afraid of them. A story along those lines.
I agree that self-defense martial arts are a better example of an anti-skill, and that aesthetic martial arts (e.g. stage combat) have most of the advantages (e.g. health- and signalling-wise) but few of the disadvantages. I chose foil fencing for this reason, after getting the idea from Heinlein's books (he was a fencer).
If you are genuinely interested in self defense, get a weapon that is legal in your jurisdiction. Even pepper spray will probably serve you better than martial arts. If you want to learn martial arts for fitness AND self-defense, a weapon will complement the second purpose.

Could you give actionable feedback?

I would run this post (and future posts) past an editor/proofreader who is strongly familiar with the English language prior to posting.

Thank you. I didn't expect that. The feedback on this I got earlier wouldn't have suggested it.

While I like the concept, I think this post needs substantial editing and revision prior to posting on LessWrong.

I agree that it is no technical report or well-researched article but an opinion/idea post containing more open than closed ends, but this doesn't appear to be a requirement for Main posts by itself. I thought I made it into a consistent, well-linked whole (more so than is usual in a Discussion post suggesting an idea), so I'm really unclear what kind of editing might be needed. Could you give actionable feedback? EDIT: Nonetheless, I assume you know what you're saying, so I moved it to Discussion.

I don't think so. It's interesting that the LW article makes the top 10 in Google for "win-more card". There's one article about the concept.

Hmm, I don't see the LW article on the first page at all. Perhaps this is different search customization?

In any case I see several other articles on this topic, as well as many forum discussions about it, people asking whether specific cards are or aren't win-more, etc.
