BeanSprugget

Comments

Message Length

Really interesting post. To me, approaching information with mathematics seems like a black box - and in this post, it feels like magic.

I'm a little confused by the concept of cost: I understand that it takes more data to represent more complex systems, and that this grows exponentially faster than the number of bits. But doesn't the more complex model still strictly fit the data better? Is it just going for a different goal than accuracy? I feel like I'm missing the entire point of the end.
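Trying to make my confusion concrete: here's a toy two-part-code sketch (my own illustrative encodings, not anything from the post) where the total cost is the bits to state the model plus the bits to encode the data given the model. The "perfect fit" model can still lose overall:

```python
import math

# Toy two-part message-length comparison for a 64-bit sequence with k ones.
# Total cost = bits to describe the model + bits to encode the data
# given the model. (The particular encodings are illustrative choices.)

n, k = 64, 8  # sequence length, number of 1s

def bernoulli_data_bits(n, k, p):
    """Ideal code length of the sequence under a Bernoulli(p) model."""
    return -(k * math.log2(p) + (n - k) * math.log2(1 - p))

# Model A: "fair coin" -- nothing to transmit about the model.
cost_fair = 0 + bernoulli_data_bits(n, k, 0.5)          # 64.0 bits

# Model B: "biased coin" -- transmit p to 8 bits of precision.
cost_biased = 8 + bernoulli_data_bits(n, k, k / n)      # ~42.8 bits

# Model C: memorize the exact positions of the ones. Fits the data
# *perfectly* (zero residual bits), but the model itself is costly.
cost_memorize = k * math.ceil(math.log2(n)) + 0         # 48 bits

print(round(cost_fair, 1), round(cost_biased, 1), cost_memorize)
```

The memorization model has zero residual error yet a longer total message than the biased-coin model, which I think is the trade-off the post is pointing at.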

Probability is in the Mind

Even with quantum uncertainty, you could predict the result of a coin flip or die roll with high accuracy if you had precise enough measurements of the initial conditions.

I'm curious about how exactly quantum uncertainty works. You can make a prediction with models and measurements, but when you observe the final result, only one thing happens. And even if an agent is cut off from information (i.e. observation is physically impossible), it's still a matter of predicting/mapping out reality.

I don't know much about the specifics of quantum uncertainty, though.

Why haven't we celebrated any major achievements lately?

Since national pride is decreasing, pride in scientific accomplishments seems to be mainly relegated to, well, the scientists themselves - geeks and nerds.

That reminds me of this Scott Aaronson post (https://www.scottaaronson.com/blog/?p=87). Unless the science "culture" changes or everyone else's does, it seems like there will be a limit on the number of people willing to celebrate technical achievements.

Why *I* fail to act rationally

It doesn't optimize for "you"; it optimizes for the gene that increases the chance of cheating. The "future" contains very little "you".

Your Price for Joining

This seems mainly to be about the importance of compromise: that something is better than nothing. Refusing only makes sense when there are "multiple games", as in the iterated Prisoner's Dilemma; if you can't find an institution that is similar enough, then don't do it.

But I think there is some risk to joining a cause that "seems" worth it. (I can't find it, but) I remember an article on LessWrong about the dangers of signing petitions, which can influence your beliefs significantly despite the smallness of the action.

Feeling Rational

Reminded me of this blog post by Nicky Case, where they said "Trust, but verify". Emotions are often a good heuristic for truth: if we couldn't feel pain, that would be bad for us.

Rationality Lessons in the Game of Go

I don't know anything about Go. But the fact that following "good shape" helps you reminds me of In praise of fake frameworks: while "good shape" isn't a fully accurate way of calculating the best move, it's more "computationally useful" in most situations (similar to doing physics with Newton's laws instead of general relativity and quantum mechanics). (The author also mentions using "ki", which makes no sense from a physics perspective, to get better at aikido.)

I think it's just important to remember that the "model" is only a map for the "reality" (the rules of the game).

Magical Categories

I don't really doubt that increasing intelligence while preserving values is nontrivial, but I wonder just how nontrivial it is: are the regions of the brain for intelligence and values separate? Actually, writing that out, I realize that (at least for me) values are a "subset" of intelligence: the "facts" we believe about science/math/logic/religion are generated in basically the same way as our moral values. The difference seems obvious to us humans, but it really is, well, nontrivial. The paperclip-maximizing AI is a good example: even if it weren't about "moral values"--even if you wanted to maximize something like paperclips--you'd still run into trouble.

You could make a habit of checking LW and EA at a certain time each day/week/etc. I don't know how easy that would be to maintain, or if it's really practical depending on your situation.