An intuition is that red-black trees encode 2-3-4 trees (B-trees of order 4) as binary trees.

For a simpler case, 2-3 trees (i.e. B-trees of order 3) are either empty, a (2-)node with 1 value and 2 subtrees, or a (3-)node with 2 values and 3 subtrees. The idea is to insert new values in their sorted position, expanding a 2-node into a 3-node if necessary; when an insertion would overflow a 3-node, split it and bubble the middle value up to the parent. This keeps the tree balanced.
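The insertion scheme above can be sketched in code (a minimal sketch; all names — `Two`, `Three`, `insert`, `_ins` — are hypothetical, not from any library). The recursive insert either returns a rebuilt subtree, or a `(value, left, right)` "kick-up" that the parent must absorb:

```python
# A minimal sketch of 2-3 tree insertion, assuming integer keys.
from dataclasses import dataclass
from typing import Union

Tree = Union[None, "Two", "Three"]   # None is the empty tree

@dataclass
class Two:                 # 2-node: 1 value, 2 subtrees
    left: Tree
    value: int
    right: Tree

@dataclass
class Three:               # 3-node: 2 values, 3 subtrees
    left: Tree
    v1: int
    middle: Tree
    v2: int
    right: Tree

def _ins(t: Tree, x: int):
    # Returns either a rebuilt Tree, or a (value, left, right) kick-up
    # for the parent to absorb (2-node -> 3-node, or split-and-bubble).
    if t is None:
        return (x, None, None)                       # bubble the new value up
    if isinstance(t, Two):
        if x < t.value:
            r = _ins(t.left, x)
            if isinstance(r, tuple):                 # absorb: 2-node -> 3-node
                v, l, rr = r
                return Three(l, v, rr, t.value, t.right)
            return Two(r, t.value, t.right)
        r = _ins(t.right, x)
        if isinstance(r, tuple):
            v, l, rr = r
            return Three(t.left, t.value, l, v, rr)
        return Two(t.left, t.value, r)
    # 3-node: absorbing a kick-up would overflow, so split and bubble up
    if x < t.v1:
        r = _ins(t.left, x)
        if isinstance(r, tuple):
            v, l, rr = r
            return (t.v1, Two(l, v, rr), Two(t.middle, t.v2, t.right))
        return Three(r, t.v1, t.middle, t.v2, t.right)
    if x < t.v2:
        r = _ins(t.middle, x)
        if isinstance(r, tuple):
            v, l, rr = r
            return (v, Two(t.left, t.v1, l), Two(rr, t.v2, t.right))
        return Three(t.left, t.v1, r, t.v2, t.right)
    r = _ins(t.right, x)
    if isinstance(r, tuple):
        v, l, rr = r
        return (t.v2, Two(t.left, t.v1, t.middle), Two(l, v, rr))
    return Three(t.left, t.v1, t.middle, t.v2, r)

def insert(t: Tree, x: int) -> Tree:
    r = _ins(t, x)
    if isinstance(r, tuple):                         # a split reached the root:
        v, l, rr = r                                 # the tree grows in height
        return Two(l, v, rr)
    return r
```

Because splits only ever propagate upward (the tree grows at the root, never at a leaf), every leaf stays at the same depth — which is exactly the balance property.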

A 2-3-4 tree just generalises the above.

Now the intuition is that red means "I am part of a bigger node." That is, red nodes represent the values contained in some higher black node. If the black node represents a 2-node, it has no red children. If it represents a 3-node, it has one red child, and if it represents a 4-node, it has 2 red children.
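This reading can be made concrete with a small sketch (hypothetical names, not from any library): collapse a black node together with any red children into the 2-, 3-, or 4-node they jointly represent.

```python
# A sketch of the correspondence: a black node plus its red children
# collapse into one conceptual B-tree node. All names are hypothetical.
RED, BLACK = "red", "black"

class Node:
    def __init__(self, color, value, left=None, right=None):
        self.color = color
        self.value = value
        self.left = left
        self.right = right

def as_btree_node(black):
    """Return (values, subtrees) of the B-tree node this black node heads."""
    assert black.color == BLACK
    values, subtrees = [], []

    def absorb(child):
        # A red child's value belongs to the *same* B-tree node;
        # a black (or empty) child is a genuine subtree boundary.
        if child is not None and child.color == RED:
            subtrees.append(child.left)
            values.append(child.value)
            subtrees.append(child.right)
        else:
            subtrees.append(child)

    absorb(black.left)
    values.append(black.value)
    absorb(black.right)
    return values, subtrees
```

A black node with no red children yields 1 value and 2 subtrees (a 2-node); with one red child, 2 values and 3 subtrees (a 3-node); with two red children, 3 values and 4 subtrees (a 4-node).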

In this context, the "rules" of red-black trees make complete sense. For instance, we only count black nodes when comparing branch heights, because those represent the actual B-tree nodes. I'm sure that with a bit of work, it's possible to make complete sense of the insertion/deletion rules through the B-tree lens, but I haven't done it.

edit: I went through the insertion rules and they do make complete sense if you think about a B-tree while you read them.


Although I appreciate the parallel, and am skeptical of both, the mental paths that lead to those somewhat related ideas are seriously dissimilar.


I have a question, but I try to be careful about the virtue of silence. So I'll try to ask my question as a link:

Also, these ideas are still weird enough to win against his level of status, as I think the comments here show:


Could you expand on this?

...there are reasons why a capitalist economy works and a command economy doesn't. These reasons are relevant to evaluating whether a basic income is a good idea.


Sorry, "fine" was way stronger than what I actually think. It just makes it better than the (possibly straw) alternative I mentioned.


No. Thanks for making me notice how relevant that could be.

I see that I haven't even thought through the basics of the problem. "Power over" is felt whenever scarcity leads the wealthier to take precedence. Okay, so to generalise a little: I've never really been hit by that scarcity, because my desires are (for one reason or another) adjusted to my means.

I could be a lot wealthier yet have cravings I can't afford, or be poorer and still content. But if what I wanted kept hitting a wealth ceiling (a specific type, one due to scarcity, such that increasing my wealth and everyone else's in proportion wouldn't help), I'd start caring about relative wealth really fast.


I see it as a question of preference so I know by never having felt envy, etc. at someone richer than me just for being richer. I only feel interested in my wealth relative to what I need or want to purchase.

As noted in the comment thread I linked, I could start caring if someone's relative wealth gave them power over me, but I haven't been in this situation so far (stuff like boarding priority for first-class tickets is a minor example I did experience, but that's never bothered me).


Responding to a point about the rise of absolute wealth since 1916, this article makes a point (not very well) about the importance of relative wealth.

Comparing folks of different economic strata across the ages ignores a simple fact: Wealth is relative to your peers, both in time and geography.

I've had a short discussion about this earlier, and find it very interesting.

In particular, I sincerely do not care about my relative wealth. I used to think that was universal, then found out I was wrong. But is it typical? To me it has profound implications about what kind of economic world we should strive for -- if most folks are like me, the current system is fine. If they are like some people I have met, a flatter real wealth distribution, even at the price of a much, much lower mean, could be preferable.

I'm interested in any thoughts you all might have on the topic :)


...people have already set up their fallback arguments once the soldier of '...' has been knocked down.

Is this really good phrasing or did you manage to naturally think that way? If you do it automatically: I would like to do it too.

It often takes me a long time to recognize an argument war. Until that moment, I'm confused as to how anyone could be unfazed by new information X w.r.t. some topic. How do you detect you're not having a discussion but are walking on a battlefield?


I think practitioners of ML should be more wary of their tools. I'm not saying ML is a fast track to strong AI, just that we don't know whether it is. Several ML people voiced reassurances recently, but I would have expected them to do that even if it were possible to detect danger at this point. So I think someone should find a way to make the field more careful.

I don't think that someone should be MIRI, though; the status differences are too great, they are not insiders, etc. My best bet would be a prominent ML researcher starting to speak up and giving detailed, plausible hypotheticals in public (I mean near-future hypotheticals where some error creates a lot of trouble for everyone).
