Wiki Contributions


stands for "support"

I am guessing this refers to this notion of support I found on Wikipedia:


A good example of a more flexible tool is the “Focus mode” on Android since version 11 or 12. You can disable certain distracting apps during certain times, but if you really need one, you can tap its icon and ask to use it for 5 minutes. After 5 minutes, it exits the app and makes my whole phone hang for a few seconds. I consider this a feature rather than a bug: it adds friction to apps I don't want to use often (like messaging apps during the morning).

From virtues perspective, the group organizer should be worried if they are not on the path to turn themselves from a thinker to an idea salesperson.

From the structure of your argument, I infer you meant to leave out the "not" from that sentence, right?

From virtues perspective, the group organizer should be worried if they are on the path to turn themselves from a thinker to an idea salesperson.

I think I don't really get the psychology of most people here. When I first heard about effective altruism, I'd have loved to meet someone who could get me up to speed quickly. The only problem with this, just like with real salespeople, is that you have to second-guess how much they are in it for their own benefit instead of yours. So I want to figure out how much someone is genuinely excited about the whole topic.

For example, I recently sent out some invitations to our local EA meetup to some of my friends, some of whom had never heard about EA before. Am I coming across as spammy if I invite someone for the third time, because they gave a plausible explanation of why they couldn't come the last two times? If someone does not reply, should I remind them? In practice, I mostly think about whether the person would enjoy the meetup. The person's potential impact does go into my calculus of whether it is worth the effort for me, given that I already have lots of friends with whom I can talk about EA topics.

  • Time intervals are a general factor. How many pulses a caesium atom makes per second is determined by how much time passes.

  • Reality is the label we attach to THE general factor of all our observations. Reductionism is the assumption that one general factor is enough (though computational complexity hinders us from using the same models for all areas of reality).

  • Awakeness/sleepiness is a general factor for humans. Sleeping humans behave very differently from awake humans.

Your butterfly formalism strikes me as a good description of what an "objective" probability is (and what 'frequentists' actually mean). The problem with the 'frequentist view' is best illustrated by your own example:

A coin has a 50% chance of landing heads because if you flip it 100 times, close to 50 of the flips will be heads. In contrast with Bayesianism, the frequentist view is perfectly objective: the limit of a ratio will be the same no matter who observes it.

Saying something is 50% likely because it happens 50% of the time is valid, but it does not actually refer to any real phenomenon. Real coins thrown by real people are not perfectly fair, because the angular momentum is crucial if you let the coin land on a flat surface.

In some sense, nothing is objective; there is only more and less objective. But throwing a die under carefully set-up conditions (like in the casino game craps) gets you pretty close to an "objective" probability that multiple humans can agree on.
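To illustrate what the frequentist limit looks like in practice, here is a minimal simulation sketch (function name, seed, and sample sizes are my own illustration, not from the post): the heads ratio of a simulated fair coin drifts toward 0.5 as the sample grows, but any finite run only gets "pretty close".

```python
import random

def heads_ratio(n_flips, seed=0):
    """Fraction of heads in n_flips simulated fair-coin tosses."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# The ratio settles near 0.5 as n grows, which is the limit
# the frequentist view appeals to.
for n in (100, 10_000, 1_000_000):
    print(n, heads_ratio(n))
```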

Yeah it's so relevant that I assumed I must have overlooked something.

Oh, now I get what you meant. I first thought you cared about the yes/no ratio, but now I understand that you return one answer and then terminate.

This procedure takes unbounded time, but with probability 1 it terminates eventually.

Why should it terminate?
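For reference, the standard kind of example behind "terminates with probability 1" is a retry loop: its worst-case running time is unbounded, yet the probability of looping forever is zero. This is a generic sketch of that idea, not the specific procedure under discussion:

```python
import random

def flips_until_heads(rng):
    """Flip a fair coin until heads comes up. The loop is unbounded,
    but running forever requires infinitely many tails in a row,
    which has probability lim (1/2)^n = 0."""
    flips = 1
    while rng.random() >= 0.5:  # tails, try again
        flips += 1
    return flips

rng = random.Random(42)
samples = [flips_until_heads(rng) for _ in range(10_000)]
# The expected number of flips for this geometric distribution is 2.
print(sum(samples) / len(samples))
```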



You are using the wrong formula, or using the formula wrong. Shannon actually uses Markov models for the process you are describing and defines entropy as the limit over ever-larger n-grams. I'd agree that n-grams aren't necessarily the best model of what compression looks like.
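The n-gram construction the comment refers to can be sketched like this (function name is mine; the estimate is the per-symbol entropy of the n-gram distribution, whose limit as n grows is Shannon's entropy rate). For a repetitive source, larger n drives the estimate down toward the true low rate:

```python
from collections import Counter
from math import log2

def ngram_entropy_rate(text, n):
    """Per-symbol entropy estimate from n-gram frequencies:
    H_n = -(1/n) * sum over n-grams g of p(g) * log2 p(g).
    Shannon's entropy rate is the limit of H_n as n grows."""
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    counts = Counter(grams)
    total = len(grams)
    return -sum((c / total) * log2(c / total) for c in counts.values()) / n

# A periodic source looks like 1 bit/symbol to a unigram model,
# but higher-order n-grams reveal it is nearly deterministic.
text = "abab" * 500
for n in (1, 2, 4):
    print(n, ngram_entropy_rate(text, n))
```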

[This comment is no longer endorsed by its author]

Looking forward to the rest of the sequence!


Note that pure prediction is not understanding. As a simple example take the case of predicting the outcomes of 100 fair coin tosses. Predicting tails every flip will give you maximum expected predictive accuracy (50%), but it is not the correct generative model for the data. Over the course of this sequence, we will come to formally understand why this is the case.

You are just not using a proper scoring rule. If you used the log or Brier score, then predicting 50% heads, 50% tails would indeed give the highest score.
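A quick check of the scoring-rule point (my own illustration, not code from the post): under the Brier score, the calibrated 50% forecast has lower expected loss on a fair coin than always predicting tails with certainty.

```python
def brier_loss(p_heads, outcome_heads):
    """Squared-error (Brier) loss for a binary forecast; lower is better."""
    return (p_heads - outcome_heads) ** 2

def expected_brier(p_heads, true_p=0.5):
    """Expected Brier loss when heads really occurs with probability true_p."""
    return true_p * brier_loss(p_heads, 1) + (1 - true_p) * brier_loss(p_heads, 0)

print(expected_brier(0.5))  # 0.25 -- calibrated 50/50 forecast
print(expected_brier(0.0))  # 0.5  -- always predicting tails scores worse
```

Proper scoring rules are constructed exactly so that reporting the true probability minimizes expected loss, which is why "predict tails every flip" only looks optimal under plain accuracy.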

for i in range(data_length // 3):

Here you are cheating by using an external library. Just calling random_bits[i] is also very compact. You might at least include your pseudo-random number generator.
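Concretely, "include your pseudo-random number generator" would mean counting the generator itself as part of the program, e.g. a few lines of linear congruential generator instead of a library call. This is my sketch, not the post's code; the constants are the classic Numerical Recipes LCG parameters:

```python
def lcg_bits(seed, n):
    """Minimal linear congruential generator producing n pseudo-random bits.
    These few lines are all the 'program' needs to carry, so the bit
    string it generates still has a short description."""
    state = seed
    bits = []
    for _ in range(n):
        # Numerical Recipes LCG: state = (a * state + c) mod 2^32
        state = (1664525 * state + 1013904223) % 2**32
        bits.append(state >> 31)  # take the top bit
    return bits

random_bits = lcg_bits(seed=12345, n=30)
print(random_bits)
```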

conceptual downside if we want Kolmogorov complexity to measure structure in natural systems: it assigns maximal 'complexity' to random strings.  

Maybe I've read too much Jaynes recently, but I think the scare quotes around 'complexity' in that sentence are misplaced. Random and pseudo-random processes really are complex. Jaynes actually uses the coin-tossing example and shows how to easily cheat at coin tossing in an undetectable way (page 317). 'Random' sounds simple, but that's just our brain being 'lazy' and giving up (beware the mind projection fallacy). If you don't have infinite compute (as seems to be the case in reality), you must make compromises, which is why this looks pretty promising.
