notfnofn


I'd be surprised if it could be salvaged using infinitesimals (imo the problem is deeper than the argument from countable additivity), but maybe it would help your intuition to think about how some Bayesian methods coincide with frequentist methods when working with a (degenerate) uniform prior over all the real numbers. I have a draft of such a post that I'll publish at some point, but you can think about univariate linear regression, the confidence regions that arise, and what prior would make those confidence regions into credible regions.
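
Here's a minimal sketch of the kind of coincidence I mean (my own illustration, assuming known noise variance and a flat improper prior on the slope):

```python
# Minimal sketch (my own illustration): with a flat improper prior on the slope
# and known noise sd, the Bayesian posterior for the slope in univariate linear
# regression is N(beta_hat, sigma^2 / sum((x - xbar)^2)) -- numerically the same
# as the frequentist sampling distribution of beta_hat, so the 95% credible
# interval and the 95% confidence interval are the same two numbers.
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0                      # known noise standard deviation
x = rng.normal(size=100)
y = 1.5 * x + rng.normal(scale=sigma, size=100)

xc = x - x.mean()
beta_hat = (xc @ (y - y.mean())) / (xc @ xc)   # OLS estimate of the slope
se = sigma / np.sqrt(xc @ xc)                  # its (known-sigma) standard error

z = 1.959963984540054                          # 97.5% standard normal quantile
ci = (beta_hat - z * se, beta_hat + z * se)    # frequentist 95% CI
# Flat-prior posterior: beta | y ~ N(beta_hat, se^2), so the central 95%
# credible interval is identical.
print(ci)
```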


Imo if you could really choose a point uniformly at random in [0,1], then things like Vitali sets philosophically shouldn't exist (but I've gotten attacked on reddit for this reasoning, and I'd rather not get into it). This is why probability theory is phrased in terms of sigma-algebras and the like: to model what might happen if we really could choose uniformly at random from [0,1], without directly referring to such a platonic process. One can get away with being informal in probability theory by referring to such a process (and imo one should, for the sake of grasping theorems), but then you run into issues with the axiom of choice, as you mentioned. (I don't think any results in probability theory invoke a version of the axiom of choice strong enough to construct non-measurable sets anyway, but I could be wrong.)

Also, as additional theorems about a given category arise and various equivalences are proven, one often ends up with definitions that are much "neater" than the originals. But there is sometimes value in learning the historical definitions.

No, but it's exactly what I was looking for, and surprisingly concise. I'll see if I believe the inferences from the math involved when I take the time to go through it!

We could also view computation through the lens of Turing machines, but then that raises the objection: "what about all these quantum shenanigans? Those are not computable by a Turing machine."

I enjoyed reading your comment, but I just wanted to point out that a quantum algorithm can be implemented by a classical computer, just with a possibly exponential slowdown. The thing that breaks down is the size of the overhead: any O(f(n)) algorithm on a classical computer is at worst O(f(n)^2) on a Turing machine, whereas for a quantum algorithm with f(n) runtime on a quantum computer, the same decision problem can (I think) be decided in 2^{O(f(n))} time on a Turing machine.
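
To make the slowdown concrete, here's a minimal sketch of my own (not from the parent comment) of brute-force statevector simulation; the point is just that an n-qubit state takes 2^n amplitudes to store classically:

```python
# Minimal sketch (my own illustration): brute-force statevector simulation of a
# quantum circuit. An n-qubit state is a vector of 2^n complex amplitudes, so
# memory and per-gate time grow exponentially in n -- the "possibly exponential
# slowdown" of simulating a quantum algorithm classically.
import numpy as np

def apply_1q_gate(state, gate, target, n):
    """Apply a 2x2 unitary `gate` to qubit `target` of an n-qubit statevector."""
    psi = state.reshape([2] * n)                          # expose each qubit's axis
    psi = np.tensordot(gate, psi, axes=([1], [target]))   # contract on the target axis
    psi = np.moveaxis(psi, 0, target)                     # restore axis order
    return psi.reshape(-1)

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
for q in range(n):
    state = apply_1q_gate(state, H, q, n)     # uniform superposition over 2^n outcomes

print(np.abs(state) ** 2)                     # each outcome has probability 1/2^n
```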

This pacifies my apprehension in (3) somewhat, although I fear that politicians are (probably intentionally) bad at interpreting data when it helps them push their preferred policies.

To add: this seems like the kind of interesting game theory problem I would expect to see some serious work on from members of this community. If there is such a paper, I'd like to see it!

Currently trying to understand why the LW community is largely pro-prediction markets.

  1. Institutions and smart people with a lot of cash will invest money in what they think is undervalued, not necessarily in what they think is the best outcome. But now suddenly they have a huge interest in the "bad" outcome coming to pass.

  2. To avoid (1), you would need to prevent people and institutions from investing large amounts of cash into prediction markets. But then the EMH really can't be assumed to hold.

  3. I've seen discussion of conditional prediction markets ("if we do X, then Y will happen"). If a bad foreign actor can influence policy by making a large "bad investment" in such a market, such that they reap more rewards from the policy than they lose on the bet, they will likely do so (see the toy calculation after this list). A necessary (but I'm not convinced sufficient) condition for preventing this is to have a lot of money in these markets. But then see (1).
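
Here's a toy calculation of the incentive in (3), with entirely made-up numbers of my own:

```python
# Toy sketch (my own, made-up numbers): incentive to manipulate a conditional
# prediction market. The manipulator buys "Y happens if we do X" shares to push
# the price up, expecting to lose money on the bet itself, but profits overall
# because the distorted price swings the policy decision in their favor.
true_prob = 0.30        # manipulator's honest belief that Y happens given X
pushed_price = 0.70     # price they push the market to with a large buy
shares = 1_000_000      # number of $1-payout shares bought

cost = shares * pushed_price                 # what the buy costs
expected_payout = shares * true_prob         # expected value of the shares
expected_trading_loss = cost - expected_payout

external_payoff = 1_000_000                  # value to them of policy X being adopted
prob_swing = 0.60                            # chance the distorted price swings the decision

net = prob_swing * external_payoff - expected_trading_loss
print(f"expected trading loss: ${expected_trading_loss:,.0f}")
print(f"expected net gain:     ${net:,.0f}")  # positive => manipulation is worth it
```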

The pivotal time in my life when I finally broke out of my executive dysfunction and brain fog involved going to an area on campus that was completely abandoned over the summer, with no technology: just paper, a pencil, and a math book I was trying to get through, while my wife was working on her experiments a building away (with my phone).

There wasn't even a clock there.

The first few days, I did a little work and then slept (despite not being sleep-deprived). Then I started adding some periodic exercise. Then I started bringing some self-help books and spent some time reading those as well. Eventually, I stopped napping and spent the whole time working, reading, or exercising.

It's not like I never went back to being unproductive for stretches of time after that summer, but I was never as bad as I was before that.

Not trying to split hairs here, but here's what was throwing me off (and still is):

Let's say I have an isomorphism: sequential states of a brain ↔ molecules of a rock.

I now create an encoding procedure E: physical things → txt file.

Now via your procedure, I consider all programs p which map txt files to txt files such that p(E(state at time t)) = E(state at time t+1), and obtain some discounted entropy. But isn't E doing a lot of work here? Is there a way to avoid infinite regress?
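
Here's a hypothetical toy sketch of my own of what I mean by E doing the work: if the encoding can be arbitrary, it can bake the whole state sequence into the txt files, leaving only a trivial program.

```python
# Hypothetical sketch (my own): an adversarial encoding that does all the work.
# brain_states is here only to show what the sequence is supposed to "mean";
# neither E nor p ever consults it. The encoding maps the rock's state at time t
# to the string "t", so the "program" that simulates the brain is nothing but
# "add one to the counter" -- all the brain's structure lives inside E, not p.
brain_states = ["awake", "daydreaming", "asleep"]       # stand-in sequence
rock_states = ["config_a", "config_b", "config_c"]      # isomorphic sequence

E = {rock_states[t]: str(t) for t in range(len(rock_states))}  # encoding: rock -> txt

def p(txt: str) -> str:
    """The 'simulating' program: trivially increments a time index."""
    return str(int(txt) + 1)

# p(E(rock_t)) == E(rock_{t+1}) holds, yet p knows nothing about brains.
for t in range(len(rock_states) - 1):
    assert p(E[rock_states[t]]) == E[rock_states[t + 1]]
print("trivial p reproduces the encoded dynamics")
```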
