All of ntroPi's Comments + Replies

I actually like this post and agree with most of the points you make. I'm not talking about the meta points about steelmanning and rhetorical tricks.

The obvious and clearly stated bias led me to better insights than most articles that claim true understanding of anything.

I'm not sure whether this is due to increased attention to weak arguments, or to a greater freedom to ignore weak arguments, as they are probably not serious anyway.

Can it be both? Was that effect intentional?

I would read a "Steelmanning counterintuitive claim X" series.

Interesting. Glad it seems to have given some new understanding! But please believe me: though a lot of the individual points are valid, I could shred my central thesis entirely.
Nothing like a good idea to get lots of names.

I like your solution to Pascal's mugging, but as some people mentioned it breaks down with superexponential numbers. This is caused by the extreme difficulty of doing meaningful calculations once such a number is present (similar to infinity or a division by zero).

I propose the following modification:

  • Given a problem that contains huge payoffs or penalties, try common_law's solution.
  • Should any number above a googol appear in the calculation, refuse to calculate!
  • Try to reformulate the problem in a way that doesn't contain such a big number.
  • Should this fail, do nothing.
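The refusal rule above can be sketched in code. This is a minimal, illustrative sketch under my own assumptions: all names are hypothetical, `GOOGOL` stands in for the refusal threshold, and the "try to reformulate" step is passed in as an optional function rather than specified here.

```python
GOOGOL = 10 ** 100

def decide(payoffs, probabilities, reformulate=None):
    """Expected-utility sketch with a refusal threshold.

    If any payoff exceeds a googol in magnitude, refuse to calculate;
    optionally try a caller-supplied reformulation of the problem,
    otherwise do nothing (return None).
    """
    if any(abs(u) > GOOGOL for u in payoffs):
        if reformulate is not None:
            new_payoffs, new_probs = reformulate(payoffs, probabilities)
            if all(abs(u) <= GOOGOL for u in new_payoffs):
                return decide(new_payoffs, new_probs)
        return None  # refuse: do nothing rather than trust the numbers
    # Ordinary expected-value calculation for reasonably sized numbers.
    return sum(p * u for p, u in zip(probabilities, payoffs))
```

So `decide([10, -5], [0.5, 0.5])` returns the usual expected value, while a mugger offering a payoff of `10 ** 200` gets `None` unless a reformulation without the huge number can be found.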
  • Select one and only one cause to join that you really care about.
  • Activism is useful for networking as already mentioned. Treat it as a tool, not as an achievement.
  • Read to find out what really needs to change. What are the root causes? What keeps the movement from being effective?
  • Again select just one of these according to your abilities.
  • Edit: Oh, and please just do it. Don't get lost in "I will be more effective by earning money and paying someone to do it" mind games. You can't pay them to actually care; they will do a lousy job. Find something you can do and grow with the challenge!

I've been in that spot for a long time and my excuse always was that vegetarianism would be too inconvenient.

Around the end of last year it finally clicked: the inconvenience excuse is plainly wrong in many cases, AND being a vegetarian in just those cases is still a good thing!

I resolved to eat vegetarian whenever it is not inconvenient. This turned out to be almost always. Restaurants and ordered food are especially easy. In the supermarket I never buy meat, which automatically sets me up for lots of vegetarian meals.

I'm currently eating vegetarian for ~95% of my meals. As a bonus, I don't have a bad conscience in the few cases where I do eat meat.

Actually, I just did something similar. I already keep daily logs about what I did during the day (e.g. whether I exercised), so I added not eating meat to the list of recorded variables. This step was trivial. But now, whenever I am choosing a meal, I remember that I will have to put it on the record. And somehow this trivial inconvenience makes me choose the vegetarian meal more reliably than before. I started recording this two weeks ago. The first week there was no difference, but the second week I ate vegetarian food for the whole week. I only broke the chain today, because we already had some pre-made soup containing meat at home.

Here are two projects that try to remove subvocalization. It's fun to try at least.

I find the qualitative reflections most enlightening and especially that you said: "But never in the course of this experiment did I count something that turned out to be unimportant."

Your under-confidence on that point may be very common, leading to thoughts like: "Yeah, noticing confusion is all nice, but I usually do that already. I'm fairly certain I'm only missing some irrelevant confusion." Your experience suggests that there is no such thing as irrelevant confusion. The art is to notice as many confusions as humanly possible instead of just some.

I have never read a better motivation to go and actively try to notice confusion than this sentence. Thanks.

Lying is saying something false while you know better. Not lying doesn't imply only saying true things or knowing all implications.

The added burden should be minimal, as between friends most people already assume they are not being lied to, without making it an explicit rule.

Wait, wait, has the game already started?

The start of the game may be undefined, and whether a lie is counted as inside the game depends a lot on the players.

That one didn't really work for me, but a bunch of the Google image results for "reading in my voice" do.

I actually read the article because of your post, and it was interesting. I agree with your point; I just didn't like the style, and I could have been more diplomatic about it.

Keep posting. :-)

I don't think prevention is very likely, as EY's comment suggests that moderator intervention will be very hard or even impossible, so disincentivizing is probably the way. I hope my suggestions would remove a motivation for mass downvoting by making it impossible to attack someone's karma.

This decreases your work in commenting by increasing the work for some readers. It would be globally more useful to spend one minute on a better comment, like the one Viliam_Bur has posted, than to have an unknown number of people read the linked article to understand your point.

Your utility function and opinion may differ, though; perhaps your intention was not primarily to get a point across but to make people read the article?

I'm sorry that you didn't like my comment. My intention was to get a point across. I thought that anyone who read my comment, didn't find its meaning clear, and was interested enough that they'd have bothered to read a longer and more explicit one would probably also be willing to read the thing I linked to, and that they might find it interesting if they did. (Being terse plainly hasn't, in fact, decreased the amount of effort I've had to expend.)

A less extreme modification of the karma system would be to keep the downvotes but change how karma is calculated for the users.

Karma could be defined as the sum of all votes on posts with a positive total score. An alternative change would be to count only the upvotes and ignore downvotes completely in the karma calculation.

In both cases, the general correlation between users who post great content and high karma would stay intact, but mass downvoting would no longer feel as threatening. All the signaling benefits you mentioned would still work in this modified system.
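The two variants can be made concrete with a small sketch. The post/vote representation below is hypothetical (a list of dicts with `up` and `down` counts), not LessWrong's actual karma code:

```python
def karma_positive_posts(posts):
    """Variant 1: sum the net votes of posts whose total score is positive.

    A mass-downvoted post drops to a non-positive score and then simply
    stops counting, so it cannot drag total karma below what the
    remaining posts earn.
    """
    return sum(p["up"] - p["down"] for p in posts
               if p["up"] - p["down"] > 0)

def karma_upvotes_only(posts):
    """Variant 2: count only upvotes. Downvotes still affect a post's
    displayed score, but never the user's total karma."""
    return sum(p["up"] for p in posts)
```

For example, with two posts scored (+5/−2) and (+1/−4), variant 1 gives 3 (only the first post counts) and variant 2 gives 6.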

Do you think these are acceptable changes to the karma system?

Doesn't seem crazy. I'd have to give it more thought before deciding whether it's likely to be an improvement. (Not that it particularly matters whether I think it's likely to be an improvement!) But mass-downvoting would still be an abuse of the system and make karma less informative. Better to make it go away, if possible, either by preventing it or by disincentivizing it. [EDITED to add: I mean "and make the scores of posts and comments less informative".]

"Society" doesn't make decisions, groups of people make decisions.

The way society forms mass opinions and decides (e.g. by voting) on important issues is not easily split into groups of people making decisions.

Still, I accept your mechanism, because group decisions are a large part of society and improving them will improve society.

About the group project: If we can get everyone to be "genuinely rational" instead of just a bit more rational we will certainly live in a very different world. I don't expect that anytime soon though.

You're right. "Has read a majority of the sequences so that there is a high probability that this specific sequence is among them" would have been more precise.

While it was an exaggeration, "extreme distortion" seems like a harsh judgement.

Edit: Oh sorry - I didn't mean to imply all the sequences are necessary for understanding. I'll fix the sentence.

[This comment is no longer endorsed by its author]
Having to read the "majority of the sequences" is still an extreme distortion. It's enough to have a look at the (single) linked post.

A group project is far away from society as a whole, where discussion and explanation between all members is impossible due to scale.

Your project could benefit from increased obedience, as you could just lead rationally and the others would follow. Disagreements between rational people can take longer to resolve, etc.

I still agree with all your examples. More anecdotes will not be helpful, as I already agree that increased rationality will improve society (and group projects and institutions, for that matter).

What I'm missing is a clear mechanism that actually produces a more rational society just from increasing the rationality of people. Please explain the mechanism.

This is a good point even for society. To get a rational society, it is not necessary that literally everyone becomes rational, just that the rational people make the most important decisions and the others follow them. Although there are dangers with this solution in the long term; specifically, some day the irrational people may decide to stop following the rational ones. In a democracy this means someone else uses some simple tricks to get their attention and wins the election. On the other hand, non-democratic societies have another long-term risk, which is the leading group becoming irrational from the inside: either they lose their sanity gradually, or just a small subset goes insane and succeeds in removing the others from the inner circle.
"Society" doesn't make decisions; groups of people make decisions. If every individual in the group understands how to avoid natural pitfalls, how to coordinate decision-making processes, how to take on board information from viewpoints which conflict with their own and incorporate what's useful rather than throwing it out whole cloth, etc., then the collective decision-making ability of the group is improved.

The projects I participated in could have benefited from increased group obedience, if everyone simply followed my lead; but if the members lacked the reasoning ability to distinguish between competent leaders, how would they know whom to trust to lead them?

In my experience, disagreements between genuinely rational people overwhelmingly do not take longer to resolve. One of the basic components of rationality is knowing how to take new information on board and actually change your mind. Disagreements between irrational people tend to be far more intractable.

I disagree with having this problem solved by moderators. Changing the karma system would be preferable, e.g. by removing downvotes, or by having downvotes affect only the individual post but not the total karma of the user.

What do you mean by "only obvious in extreme cases?"

Just that there is no obvious mechanism that produces a more rational society from more rational people.

Again, I agree on the positive effects of rationality and do believe that more rationality will improve society. But there are many people who say the same about religion, obedience, or other things that I don't view as positive.

I don't think it's true at all that there's no obvious mechanism that produces a more rational society from more rational people. If I'm working on a project with a group of irrational people, the other members will tend to make mistakes of judgment which I'm simply too many steps of inference removed from them to realistically explain. So I give up, and the project suffers. If I'm working on a project with a group of highly rational people, those problems can be avoided without even needing to be discussed, saving energy for higher level problems. Groups are made up of individuals. If every individual in a group recognizes the problems which will attend a course of action, that group is much more likely to avoid those problems than a group where nobody recognizes them.

So a society is rational if the institutions are rational ... and an institution is rational if its outputs seem rationally designed ... which is judged by a rational individual ... which is still hard to define.

I see your point and agree that there is room for improvement. Instead of "more rational" I would propose "less insane", which seems to fit the evidence as well as the other description.

Will one of these more insane societies become less insane by making sure everybody on the streets is less insane? The connection doesn't seem obvious.

What do you mean by "only obvious in extreme cases?" I would definitely not agree that the connection between rational individuals and rational society is merely implied by the use of the same word; I would absolutely say that they're inextricably linked. Having attempted cooperative projects with other people over a wide range of rationality levels, I've found that working with groups of more rational individuals really does eliminate a huge cohort of problems which attend the work of less rational individuals.

One member on this site, years ago, discussed how his boss had once remarked on how well a project he (the commenter) had handled had gone, and spoken of it as if it were simply a fortuitous chance. And the commenter explained to the boss that the project had gone well because he'd designed it to go well by addressing the possible points of failure. This was a possibility that had simply never occurred to his boss before.

Operating within rational versus irrational groups can spell the difference between everyone understanding concepts like this versus nobody understanding them.

I think you have a good point, but it would be easier to see if you had posted a short sentence explaining what your point is. Please don't assume that every reader has read all the sequences or has the time to do so (edit: to read this one) just to understand your comment.

The idea is that you shouldn't start your reasoning process from the conclusion if you want to be rational. For a rational person, the conclusion is what they get at the end, after weighing all available evidence, not a starting point. Specifically, you don't know whether "rationality would be beneficial for the society". So you shouldn't start at this point (the conclusion). What if you are wrong (but there is selective evidence you could use to support your conclusion anyway)?
I certainly don't assume that any particular reader has read all the sequences (nor that they should). I don't think it's so unreasonable to suggest reading one particular not-so-long post -- whose title might give the game away to a sufficiently quick-witted reader without even needing to follow the link.
A particular post was linked. The implied requirement of having to "read all the sequences" is an extreme distortion of the issue that makes your remark seem more relevant.

Assuming that rationality can be taught at school to everyone, is there even a connection between more rational individuals and a more rational society?

The problem I see here is that rationality is already very weakly defined for individuals, and I know of no definitions in the context of society. A society can't even think (or can it?); how can it be rational?

Many decision processes of society are not based on rationality at all, and I see no reason why the tried ways of winning (i.e. corruption) should be replaced by others, assuming as the only change slig…

The first answer that comes to mind is: "A society can be considered rational if the institutions that society creates collectively would be considered rationally and intelligently designed had they been designed by an individual." This is clearly not usually the case, but some societies have it much worse than others. Yvain's writings on Haiti reveal a society which is, by this metric, much less rational than America. I see no reason to suppose that existing first world countries have hit some theoretical ceiling on societal rationality.

The positive effects would trickle down into many aspects of our society.

I think the opposite direction is more probable. We first need a better culture of debate in society. Only if debate is more accepted and expected by the general population may this change trickle up to the politicians and the mass media. It could be pushed back down by the powerful if they feel threatened.

Mass debate is very difficult though.

This gave me an idea to make things even more complicated: let's assume a scientist manages to create a simulated civilization of the same size as his own. It turns out that to keep the civilization running, he will have to sacrifice a lot. All members of the simulated civilization prefer to continue existing, while the "mother civilization" prefers to sacrifice as little as possible.

How much should be sacrificed to keep the simulation running as long as possible? Should the simulated civilization create simulations itself to increase the preferen…