
# Wiki Contributions

Let's say the entire biography of the universe has already been written: an unending, near-infinitely granular chain of cause and effect, extending forward in a single direction, time. To model this, you could use boxes representing choices (we'll get to that later) and potentially multiple (again, later) arrows between boxes, such that an arrow from A to B means A is the cause of B, plus an axiom that there is no way to go back: if there is a path from A to B, there is no path from B to A. (This is pretty much the definition of a branch in a directed tree.)

If, from any given branch, you trace human life, your grandma's years 20 to 25, Saturn from 1720 to 1963, etc., you should get roughly the same thing: A to B to C to D........ to wherever you choose to stop counting. A single chain, one arrow wide, in which every cause has a single effect and vice versa, and everything that happened was meant to happen all along.
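The tree-with-branches picture above can be sketched in a few lines of code. This is only an illustration of the model as I understand it; the event names and the `trace` helper are made up for the example.

```python
# A minimal sketch of the "biography as a directed tree" model: each event
# points forward to the events it causes, and following any single chain of
# arrows yields one A -> B -> C -> ... history. No back-edges exist, so no
# path ever returns to an earlier event.

causes = {
    "A": ["B", "C"],   # A opens two alternatives (a branch point)
    "B": ["D"],
    "C": ["E"],
    "D": [],
    "E": [],
}

def trace(start, choose=lambda options: options[0]):
    """Follow one branch from `start`, picking one effect at each step."""
    history = [start]
    while causes[history[-1]]:
        history.append(choose(causes[history[-1]]))
    return history

print(trace("A"))                   # one possible history through B
print(trace("A", lambda o: o[-1]))  # the other branch, through C
```

Whichever `choose` rule you plug in, the traced history is a single one-arrow-wide chain; the branching only exists in the structure as a whole.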

You can definitely see it that way, but it's not very interesting.

A good idea might be to consider that, with the intuition and evidence we have, there is no real way to tell whether the branch we're living in, in which every event has probability either 0 or 1 (a huge abuse of the term, but I'm speaking to intuition, since it will either happen or not; again, not strictly true, but bear with me), is unique, or determined, even if you know the whole structure of the tree.

With that in mind, things improve a bit. Sure, you are always going to live a single A to B to C succession of events, but you don't know which. Seems trivial, and it is. And yet, it solves a lot of problems. See games of asymmetric information and incomplete information if you are interested in that idea.

My point is that, while choice is problematic when everything is said and done, it might very well be useful when you are still in the thick of it. Say that you are in a branch which opens alternatives B and C. The B-you, when lying on their deathbed surrounded by their B-grandchildren, might say "everything was predetermined, B was written all along", etc. But there is an equally compelling argument from the C-you, with their C-grandchildren, on their C-deathbed. Both are right in this model, in the sense that everything was written beforehand, but you could loosely define choice between B and C as deciding which one you want to live in. The fact that you are inhabiting C, for instance, doesn't negate the existence of B.

The main takeaway here is that, if the deterministic model is huge enough, the concept of choice lies in the uncertainty of the position you occupy within it. This is not a perfect explanation, but it has served to soothe my existential dread.

lubinas · 3mo · 10

> The preceding sentence gives an intensional definition of "extensional definition", which makes it an extensional example of "intensional definition".

This is really elegant. Worth taking a beat to digest.

lubinas · 3mo · 10

> The fallacy of Proving Too Much is when you challenge an argument because, in addition to proving its intended conclusion, it also proves obviously false conclusions. For example, if someone says “You can’t be an atheist, because it’s impossible to disprove the existence of God”, you can answer “That argument proves too much. If we accept it, we must also accept that you can’t disbelieve in Bigfoot, since it’s impossible to disprove his existence as well.”

Wow, I've been looking for a name for this thing for sooo long. Thanks so much. The phrasing here is a bit ambiguous, though, and I think it can lead to confusion.

From the whole of the text, it seems that Scott's view on this is that of the Wiki page: the fallacy is committed when someone claims a conclusion that is a special case of some category containing obviously false instances, instances which would be true if the reasoning were valid. Something like:

A) You can (validly) argue that someone else is committing the Proving Too Much fallacy when their argument, were it valid, would also prove obviously false conclusions in addition to its intended conclusion.

But

> The fallacy of Proving Too Much is when you challenge an argument because, in addition to proving its intended conclusion, it also proves obviously false conclusions

can also be read (this was my first understanding of it) as:

B) You commit the Proving Too Much fallacy when you (invalidly) challenge an argument because, in addition to proving its intended conclusion, it also proves obviously false conclusions.

I'm leaning towards A, but would appreciate more info on this. Again, I found this extremely useful.

lubinas · 3mo · 10

There is a big leap between *there are no X, so Y* and *there are no useful X (useful meaning local homeomorphisms), so Y*, though. Also, local homeomorphism seems too strong a standard to set. But sure, I kind of agree on this, so let's forget about injections. Orthogonal projections seem to be very useful under many standards, albeit lossy. I'm not confident that there are no similarly useful equivalence classes in A (joint probability distributions) that can be nicely mapped to B (causal diagrams). Either way, the conclusion

> This means the first causal structure is falsifiable; there's survey data we can get which would lead us to reject it as a hypothesis

can't be entailed from the above alone.

Note: my model of this is just balls in ℝⁿ, so the elements might not hold the same accidental properties as the ones in A and B (if so, please explain :) ), but my underlying issue is with the actual structure of the argument.

lubinas · 3mo · 100

> By the pigeonhole principle (you can't fit 3 pigeons into 2 pigeonholes) there must be some joint probability distributions which cannot be represented in the first causal structure

Although this is a valid interpretation of the Pigeonhole Principle (PP) for some particular one-to-one cases, I think it misses the main point as it relates to this particular example. You absolutely can fit 3 pigeons into 2 pigeonholes, and the standard (to my knowledge) intended takeaway from the PP is that you are gonna have to, if you want your pigeons holed. There might just not be a cute way to do it.

The idea being that for finite sets A and B with |A| > |B| there is no injective function from A to B (you could see this as losing information); but you absolutely can send things from A to B, you just have to be aware that at least two originally different elements (in A) are going to be mapped onto the same element in B. This is an issue of losing uniqueness in the representation, not impossibility of the representation itself. It is even useful sometimes.

A priori, it looks possible for a function to exist by which two different joint distributions would be mapped onto the same causal structure in some nice, natural or meaningful way, in the sense that only related-in-some-cute-way joint distributions would share the same representation. If there are no such natural functions, there are definitely ugly ones. You can always cram all your pigeons into the first pigeonhole. They are all represented!
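The pigeon-cramming point can be checked by brute force on the toy numbers involved (the pigeon and hole names here are mine, purely for illustration): there is no injective assignment of 3 pigeons to 2 holes, but every pigeon can still be holed.

```python
# Enumerate every total map from 3 pigeons to 2 holes and count how many
# are injective. The pigeonhole principle forbids injections, not maps.
from itertools import product

pigeons = ["p1", "p2", "p3"]
holes = ["h1", "h2"]

# Each assignment gives pigeon i the hole at position i.
assignments = list(product(holes, repeat=len(pigeons)))
injective = [a for a in assignments if len(set(a)) == len(a)]

print(len(assignments))  # 8 total maps: every pigeon gets represented
print(len(injective))    # 0 injective maps: some hole holds >= 2 pigeons
```

So the "cannot be represented" reading fails even in the finite case; what fails is only the uniqueness of the representation.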

> On the other hand, the full joint probability distribution would have 3 degrees of freedom - a free choice of p(earthquake&recession), a choice of p(earthquake&¬recession), a choice of p(¬earthquake&recession), and then a constrained p(¬earthquake&¬recession) which must be equal to 1 minus the sum of the other three, so that all four probabilities sum to 1.0.

If you get to infinite stuff, it gets worse. You actually can inject ℝ³ into ℝ² (in this case ℝ³, three degrees of freedom, into ℝ², two degrees), meaning that not only can you represent every 3D vector in 2D (which we do all the time), but there are particular representations that won't be lossy, with every 3D object being uniquely represented. You won't lose *that* information! (The operative term being *that*: you are most definitely losing something.)
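The classic construction behind this is digit interleaving. Here is a finite-precision sketch of the idea (the real-number version needs care with expansions like 0.0999..., which this toy string version sidesteps): two coordinates written to the same number of decimal digits interleave into a single number from which both can be recovered exactly.

```python
# Interleave the digits of two fixed-precision decimals into one, and
# split them back out. Nothing is lost: the combined digit string
# determines both originals uniquely.

def interleave(x_digits: str, y_digits: str) -> str:
    assert len(x_digits) == len(y_digits)
    return "".join(a + b for a, b in zip(x_digits, y_digits))

def deinterleave(z_digits: str) -> tuple:
    return z_digits[0::2], z_digits[1::2]

z = interleave("1415", "2718")  # think (0.1415, 0.2718) -> 0.12471158
print(z)                        # '12471158'
print(deinterleave(z))          # ('1415', '2718'), fully recovered
```

What you do lose is structure: the map is wildly discontinuous, so nearby points in the plane need not map to nearby points on the line, which is the "you are most definitely losing something" part.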

If you substitute [0,1]³ and [0,1]² for ℝ³ and ℝ², to account for the fact that you are working with probabilities, this is still true.

So, assuming there are infinitely many things to account for in the Universe, you would be able to represent the joint distributions as causal diagrams uniquely. If there aren't, you may be able to group them following "nice" relationships. If you can't do that, you can always cram them in willy-nilly. There is no need for a joint distribution to go unrepresented. I'm not sure how this affects the following

> This means the first causal structure is falsifiable; there's survey data we can get which would lead us to reject it as a hypothesis

It seemed weird to me that this hasn't been pointed out (after a rather superficial look at the comments), so I'm pretty sure that either I'm missing something and there are already 2-3 similarly wrong, refuted comments on this, or it has actually been talked about already and I just haven't seen it.

Edit: Just realized, in

> If you substitute [0,1]³ and [0,1]² for ℝ³ and ℝ², to account for the fact that you are working with probabilities, this is still true.

the sets should be the non-negative parts of the 1-norm unit spheres in ℝ⁴ and ℝ³ (the probability simplices, so all the components sum up to 1) instead of the cubes [0,1]³ and [0,1]². Pretty sure the injection still holds.

lubinas · 3mo · 10

> For many people, “jelly beans” live in a bucket that is Very Good!  The bucket has labels like “delicious” and “yes, please!” and “mmmmmmmmmmm”.
>
> “Bug secretions,” on the other hand, live in a bucket that, for many people, is Very Bad.  Its labels are things like “blecchh” and “gross” and “definitely do not put this inside your body in any way, shape, or form.”
>
> When buckets collide—when things that people thought were in one bucket turn out to be in another, or when two buckets that people thought were different turn out to be the same bucket—people do not usually react with slow, thoughtful deliberation.  They usually do not think to themselves “huh!  Turns out that I was wrong about bug secretions being gross or unhygienic!  I’ve been eating jelly beans for years and years and it’s always been delicious and rewarding—bug secretions must be good, actually, or at least they can be good!”

While the main point stands, this particular image seems somewhat misleading. The bucket-label analogy suggests that the concepts we formulate to navigate the world live in neat, contained and well-defined spaces (even if those spaces themselves are not accessible, or even known, to us at a conscious level, which is what I'm reading from this). But, more importantly, one of the traits of buckets is that by finding out something is in this bucket I know it can't be anywhere else; they are exclusive in that way.

Furthermore, labels by themselves seem sufficient to model this analogy. Why aren't the elements (bugs) labeled themselves, rather than belonging to higher-level categories (the Bad Bucket) which carry the labels?