notfnofn

Comments

No, but it's exactly what I was looking for, and surprisingly concise. I'll see if I believe the inferences from the math involved when I take the time to go through it!

We could also view computation through the lens of Turing machines, but that raises the objection: "what about all these quantum shenanigans? Those are not computable by a Turing machine."

I enjoyed reading your comment, but just wanted to point out that a quantum algorithm can be simulated by a classical computer, just with a possibly exponential slowdown. What breaks down is the overhead: any O(f(n)) algorithm on a classical computer runs in at worst O(f(n)^2) time on a Turing machine, whereas for a quantum algorithm running in f(n) time on a quantum computer, the same decision problem can be decided in (I think) O(2^{f(n)}) time on a Turing machine.
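To make the exponential cost concrete, here is a minimal sketch (my own illustration, not from any particular source) of the standard way a classical computer simulates a quantum circuit: by storing all 2^n amplitudes explicitly. Memory and time both blow up exponentially in the number of qubits.

```python
import numpy as np

def apply_gate(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector.

    The state vector holds 2**n complex amplitudes, so both memory
    and time scale exponentially in n -- the classical slowdown
    mentioned above.
    """
    state = state.reshape([2] * n)                       # one axis per qubit
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)                # restore axis order
    return state.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0  # start in |000>

for q in range(n):
    state = apply_gate(state, H, q, n)

# All 2**n basis states now have probability 1 / 2**n.
print(np.round(np.abs(state) ** 2, 3))
```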

This allays my apprehension in (3) somewhat, although I fear that politicians are (probably intentionally) stupid when it comes to interpreting data for the sake of pushing policies.

To add: this seems like the kind of interesting game-theory problem I would expect to see serious work on from members of this community. If there is such a paper, I'd like to see it!

Currently trying to understand why the LW community is largely in favor of prediction markets.

  1. Institutions and smart people with a lot of cash will invest in what they think is undervalued, not necessarily in what they think is the best outcome. But once they hold that position, they suddenly have a huge interest in the "bad" outcome coming to pass.

  2. To avoid (1), you would need to prevent people and institutions from investing large amounts of cash in prediction markets. But then the efficient-market hypothesis really can't be assumed to hold.

  3. I've seen discussion of conditional prediction markets ("if we do X, then Y will happen"). If a bad foreign actor can influence policy by making a large "bad" investment in such a market, such that they reap more from the policy than they lose on the bet, they will likely do so (see the toy calculation after this list). A necessary (but I'm not convinced sufficient) condition for this is a lot of money in these markets. But then see (1).
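Here is a toy expected-value calculation of the worry in (3). All numbers are invented; the point is only that a manipulator can eat a guaranteed expected loss on the market and still come out ahead if the distorted price moves policy in their favor.

```python
# Toy numbers for the manipulation worry in (3); all values invented.

true_p = 0.40        # manipulator's honest belief that Y happens given policy X
pushed_p = 0.70      # market price of "Y | X" after the manipulator buys heavily
stake = 1_000_000    # dollars spent buying "Y | X" contracts at the pushed price

# Contracts bought at 0.70 that are really worth 0.40 in expectation
# lose 30 cents per 70 cents spent.
expected_bet_loss = stake * (pushed_p - true_p) / pushed_p

# What the manipulator gains if the distorted price persuades
# policymakers to adopt X.
policy_payoff = 5_000_000

print(f"expected loss on the market: ${expected_bet_loss:,.0f}")
print(f"payoff from the policy:      ${policy_payoff:,.0f}")
print(f"manipulation profitable:     {policy_payoff > expected_bet_loss}")
```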

The pivotal time in my life when I finally broke out of my executive dysfunction and brain fog involved going to an area on campus that was completely abandoned over the summer, with no technology: just paper and pencil and a math book I was trying to get through, while my wife worked on her experiments a building away (she had my phone).

There wasn't even a clock there.

The first few days, I did a little work and then slept (despite not being sleep-deprived). Then I started adding some periodic exercise. Then I started bringing some self-help books and spent time reading those as well. Eventually, I stopped napping and spent the whole time working, reading, or exercising.

It's not like I never went back to being unproductive for stretches of time after that summer, but I was never as bad as I was before that.

Not trying to split hairs here, but here's what was throwing me off (and still is):

Let's say I have an isomorphism φ : (sequential states of a brain) → (molecules of a rock).

I now create an encoding procedure E : (physical things) → (txt files).

Now, via your procedure, I consider all programs p which map txt files to txt files such that p(E(s_t)) = E(s_{t+1}) for consecutive states s_t, and obtain some discounted entropy. But isn't E doing a lot of work here? Is there a way to avoid infinite regress?
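Here is a toy construction of my own (not from the thread, with invented names step, E, and p) showing how an unconstrained encoding can smuggle in all the computation: E secretly precomputes the entire trajectory, leaving a trivial program p to "reproduce" arbitrarily complicated dynamics.

```python
# Toy construction (my own, not from the thread) of why E matters:
# a cheating encoding can make the required program trivial.

def step(s: int) -> int:
    # Stand-in for arbitrarily complicated "brain" dynamics (an LCG here).
    return (s * 6364136223846793005 + 1442695040888963407) % 2**64

# Precompute a short trajectory of the dynamics.
trajectory = [1]
for _ in range(10):
    trajectory.append(step(trajectory[-1]))

def E(state: int) -> str:
    # Encoding "physical thing -> txt file" that secretly knows the whole
    # trajectory: it labels each state by the time at which it occurs.
    return str(trajectory.index(state))

def p(txt: str) -> str:
    # Under this E, the program reproducing the dynamics is just "+1".
    return str(int(txt) + 1)

for t in range(10):
    assert p(E(trajectory[t])) == E(trajectory[t + 1])
# All the computational work lives in E, none in p -- which is why,
# without constraining E itself, the definition threatens an infinite regress.
```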

It feels like this is a semantic issue. For instance, if you asked me whether Euclid's algorithm produces the gcd, I wouldn't think the answer is "no, until it runs". Mathematically, we often view functions as the set of all pairs (input, output), even when the set of inputs is infinite. Can you clarify?
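For concreteness, here is the Euclid example in code; the function it computes is pinned down by the definition, independent of any particular execution.

```python
def euclid_gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b != 0:
        a, b = b, a % b
    return a

# Viewed as a set of (input, output) pairs, the function
# {((a, b), gcd(a, b))} is fully determined by the definition above,
# even for the infinitely many inputs it is never actually run on.
assert euclid_gcd(48, 18) == 6
```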

While I sort of get what you're going for (easy interpretability of the isomorphism?), I don't really see a way to make this precise.

I'm having a little trouble understanding how to extend this toy example. You meant for these questions to all be answered "yes", correct?
