Wiki Contributions


An RSS feed for new posts is highly desirable. I don't generally "poll" websites for new information that may or may not be there, unless, e.g., I'm returning to a discussion I had yesterday, so a "push" mechanism such as RSS is essential to me.

I'm going to go out on a limb and state that the chosen example of "middle school students should wear uniforms" fails the prerequisite of "confidence in the existence of objective truth", as do many (most?) "should" statements.

I strongly believe there is no objectively true answer to the question "should middle school students wear uniforms?", because the truth of that statement depends not so much on one's understanding of the world, or one's opinion about student uniforms, as on the interpretation of what "should" means.

For example, "A policy requiring middle school students to wear uniforms is beneficial to the students" is a valid topic of discussion that can uncover some truth, and "A policy requiring middle school students to wear uniforms is mostly beneficial to [my definition of] society" is a completely different topic of discussion that likely can result in a different or even opposite answer.

Unqualified "should" statements are a common trap that prevents reaching a common understanding and exploring the truth. At the very least, you should clearly distinguish "should" as good, informed advice from "should" as a categorical moral imperative. If you want to discuss whether "X should do Y" in the sense of weighing the advantages of doing Y (or not), then you should (see what I'm doing here?) convert it to a statement of the form "X should do Y because that's a dominant/better/optimal choice that benefits them". Otherwise you won't get what you want, just an argument between one camp discussing that question and another camp arguing about why we should or shouldn't force X to do Y because everyone else wants it.

The most important decisions are those made before starting a war, and there the two kinds of mistakes have very different costs. Overestimating your enemy results in peace (or cold war), which basically means you lose out on some opportunistic conquests; underestimating your enemy results in a bloody, unexpectedly long war that can disrupt you for a decade or more. Twentieth-century history offers many examples of the latter.

"Are we political allies, or enemies?" is rather orthogonal to that - your political allies are those whose actions support your goals and your political enemies are those whose actions hurt your goals.

For example, consider a powerful and popular extreme radical in the "opposite" camp, who reaches conclusions you disagree with, uses methods you disagree with, and is generally toxic and spewing hate. That's often a prime example of a political ally: their actions incite the moderate members of society to start supporting you and to focus on your important issues instead of something else. The existence of such a pundit matters to you; you want them to keep doing what they do and to have their propaganda succeed, up to a point. I won't go into examples of particular politicians or parties in various countries, that gets dirty quickly, but many strictly opposed radical groups are actually allies in this sense against the majority of moderates, and sometimes they actively coordinate and cooperate despite their ideological differences.

On the other hand, consider a public speaker who targets the same audience as you, shares the same goals and conclusions, and intends the same methods of achieving them, but simply performs consistently poorly: sloppy arguments that alienate part of the target audience, or disgusting personal behavior that hurts the image of your organization. That's a good example of a political enemy, one you must work to silence and have ignored, despite them being "aligned" with your conclusions.

And of course, a political competitor who does everything you want to do but holds a chair or position that you want for yourself is also a political enemy. Infighting inside powerful political groups is normal, and when (and if) it goes public, very interesting political arguments appear to distinguish one side from their political enemy despite their sharing most of the platform.

The difference is that many actions help other people but don't give an appropriate altruistic high (because your brain doesn't see or relate to those people much), while other actions produce a net-zero or net-negative effect but do produce an altruistic high.

The built-in care-o-meter of your body has known faults and biases, and it measures something that is often correlated with (at least in the classic hunter-gatherer model of society), but generally different from, actually caring about other people.

An interesting followup to your example of an oiled bird deserving 3 minutes of care came to mind:

Let's assume there are 150 million suffering people right now; the exact number is surely wrong, but it's a reasonable order-of-magnitude assumption. A quick calculation shows that if I dedicate every single waking moment of my remaining life to caring about them and fixing the situation, I have a total of about 15 million care-minutes.

According to even the best possible care-o-meter I could have, all the problems in the world cannot in total be worth more than 15 million care-minutes, simply because there aren't any more of them to allocate. And in a fair allocation, the average suffering person 'deserves' 0.1 care-minutes of my time, assuming I leave nothing at all for the oiled birds. This is a very different meaning of 'deserve' than the one used in the post, but I'm afraid it's the more meaningful one.
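The back-of-the-envelope arithmetic above can be checked directly. The lifespan and waking-hours numbers below are my own assumptions (roughly 43 remaining years at 16 waking hours per day), chosen only to show how a ~15 million care-minute budget arises:

```python
# Sketch of the care-minute budget calculation; the specific inputs
# (43 remaining years, 16 waking hours/day) are assumptions, not from the post.
WAKING_HOURS_PER_DAY = 16
REMAINING_YEARS = 43
SUFFERING_PEOPLE = 150_000_000  # order-of-magnitude guess from the comment

# Total waking minutes left in a lifetime.
care_minutes = WAKING_HOURS_PER_DAY * 60 * 365 * REMAINING_YEARS

# Fair share per suffering person if the whole budget is split evenly.
per_person = care_minutes / SUFFERING_PEOPLE

print(f"total care-minutes: {care_minutes:,}")      # about 15 million
print(f"care-minutes per person: {per_person:.2f}") # about 0.1
```

Any plausible choice of lifespan and waking hours lands in the same ballpark: tens of millions of minutes at most, so a tenth of a minute per person.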

I'd read it as an acknowledgement that any intelligence has a cost, and if your food is passive rather than antagonistic, it's inefficient (and thus very unlikely) to invest such resources in outsmarting it.

If an animal-complexity CNS is your criterion, then humans plus octopuses are a counterexample: the urbilaterian ancestor wouldn't be expected to have such a system, so octopus intelligence formed separately.

A gold-ingot-manufacturing-maximizer can easily manufacture more gold than exists in its star system by spending arbitrary amounts of energy to create gold, starting with simple nuclear reactions that transmute bismuth or lead into gold, and ending with converting energy directly into matter and then into gold ingots.

Furthermore, if you plan to send copies-of-you to n other systems to manufacture gold ingots there, then as long as there is free energy, you can send n+1 copies instead. A gold-ingot manufacturing rate that grows proportionally to time^(n+1) is much faster than one that grows as time^n, so sending only n copies wouldn't be maximizing.

And a third point: if it's possible that somewhere in the universe there are some ugly bags of mostly water that prefer to use their atoms and energy for their own survival rather than for manufacturing gold ingots, then it's very important to ensure they don't grow strong enough to prevent you from maximizing gold-ingot manufacturing. Speed is of the essence; you must reach them before it's too late, or gold-ingot manufacture won't get maximized.

Dolphins are able to herd schools of fish, cooperating to keep a 'ball' of fish together for a long time while feeding from it.

However, taming and sustained breeding are a long way from herding behavior: they require long-term planning over multi-year time horizons, and I'm not sure that has been observed in dolphins.
