Comments

The Utility Function of a Prepper

I'd be careful about treating prepping as a binary "do/don't prep" decision. If you live somewhere that has a civil war every 2-3 years, the expected value of something that's only useful in a civil war scenario is much higher than if one happens every 150 years or so. That doesn't mean you should "prep" in one case and not the other, just that some actions worth taking when civil wars are frequent aren't worth taking when they're rare. Stored water may be useful in both cases; training your friends in wilderness survival or whatever, maybe less so.
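To make that concrete, here's a minimal sketch of the expected-value comparison; the value and cost figures are entirely made up for illustration:

```python
# Rough expected-value sketch with made-up numbers: an item is worth
# buying when (probability of the scenario) * (value if needed) > cost.

def prep_ev(annual_prob: float, value_if_needed: float, cost: float) -> float:
    """Expected net value of owning the item for one year."""
    return annual_prob * value_if_needed - cost

# Civil war every ~2.5 years vs. every ~150 years, same item either way:
for label, p in [("frequent wars", 1 / 2.5), ("rare wars", 1 / 150)]:
    print(f"{label}: {prep_ev(p, value_if_needed=10_000, cost=500):+.0f}")
# frequent wars: +3500  -> worth it
# rare wars:     -433   -> not worth it
```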

The Utility Function of a Prepper

I don't think I understand the question. If something is expensive but will almost certainly (say, with 99% probability) save my life (which I think is the sort of thing you are describing as expensive_but_must), I would buy it at almost any cost.

The Utility Function of a Prepper

"buying a bunker is not frequent that much anymore"

Are you sure? The second doom boom is here, and people are buying bunkers again.

The difference between bunkers and water is not just the cost, but the probability of needing them - there are many non-nuclear-war scenarios in which you'd want water on hand. So water has both a higher probability of being useful and a lower cost.

The Utility Function of a Prepper

Most places have water, but how close is it to where you live? If you don't have a way of storing a significant amount of water, and you live far enough from your local water source that you would have to drive, there is a benefit to having enough storage to transport a reasonable amount of water per trip.

But I agree that having a water filter on hand is useful in cases where you have access to water but aren't sure whether it's safe to drink.
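On the storage-sizing point above, here's a rough illustration; the one-gallon-per-person-per-day figure is a common preparedness guideline, and the household size, trip interval, and jug size are made up:

```python
# Hypothetical water-hauling math, assuming the common guideline of
# about 1 gallon (~4 liters) of water per person per day.
import math

people = 4
days_between_trips = 7
gallons_needed = people * days_between_trips  # 28 gallons per trip

jug_size = 7                                  # a 7-gallon jug is ~58 lbs full
jugs_needed = math.ceil(gallons_needed / jug_size)
print(f"haul {jugs_needed} x {jug_size}-gallon jugs per weekly trip")  # 4 jugs
```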

Agency and the unreliable autonomous car

I think your definition of a perfect model is a bit off - a circuit diagram of a computer is definitely not a perfect model of the computer! The computer itself has much more state and complexity, such as the temperature of its various components, which affect the computer but aren't captured by the model.

Containing a copy of your source code is also a weird definition of a model. All programs contain their source code - does a program that prints its source code have more of a model of itself than other programs, which are likewise just made of their source code? And a human brain is not capable of holding a perfect model of a human brain plus anything more, because even the best-case encoding would need at least one atom of the modeling brain per atom of the modeled brain, leaving you at 100% capacity.
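For concreteness, the classic Python quine below prints its own source exactly, yet it doesn't seem to "model" itself in any richer sense than any other program does:

```python
# A quine: running this file prints its own source, byte for byte.
s = '# A quine: running this file prints its own source, byte for byte.\ns = %r\nprint(s %% s)'
print(s % s)
```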

The key word is "perfect" - to fit a model of a thing inside that thing, the model must contain less information than the thing does.

Agency and the unreliable autonomous car

This is a really good illustration of the 5 and 10 problem. I read the linked description, but didn't fully understand it until you walked through this example in more depth. Thanks!

One possible typo:  You used 5a and 5b in most of the article, but in the original list of steps they are called 6 and 7.

Paper Review: Mathematical Truth

This is definitely not a "big problem" in the sense that we can keep using math regardless of how the question is resolved.

It sounds like you're arguing that semantic uniformity doesn't matter, because we can change what "exists" means. But once you change what "exists" means, you will likely run into epistemological trouble: if your mathematical objects aren't physical entities capable of interacting with the world, how can you have knowledge that is causally connected to those entities? That's the dilemma of the argument above - it seems possible to get semantic uniformity at the expense of epistemological uniformity, or vice versa, but getting both together is difficult.

Paper Review: Mathematical Truth

I'm not super up-to-date on fictionalism, but I think I have a reasonable response to this.

When we talk about fictional worlds, we understand that we have entered a different mode of reasoning. In those cases, all parties usually understand that we are not using the standard predicate "exists" but some other predicate, "fictionally-exists". You can detect this because if you ask people "do those three Jedi really exist?", they will probably say no.

However, with math, it's less clear that we are speaking fictionally, or only about propositions within an axiomatic system. We could swap out the "exists" predicate for something like "mathematically-exists" (within some specific axiom system), but the motivation for doing so is much less clear than in the fictional cases. People talk as if 2+2 really does equal 4, not just that it's useful to pretend it's true.

Paper Review: Mathematical Truth

Hi! I really appreciate this reply, and I stewed on it for a bit. I think the crux of our disagreement comes down to how we define a few words, and that we mostly agree otherwise.

Knowledge - I think knowledge has to be correct to be knowledge; otherwise you just think you have knowledge. It seems like we disagree here: you think that knowledge just means a belief that is likely to be true (and for the right reason?). It's unclear to me how you would cash out "accurate map" for things you can't physically observe, like math, but I think I get the gist of your definition. Also, side note: plain justified true belief is not a widely held view in modern philosophy; most current accounts of knowledge go for justified true belief + something else.

Real - We both agree it doesn't matter for our day-to-day lives whether math is real. (It may matter for patent law, where it could decide whether math is treated as an invention or a discovery!) I would still like to know whether math is real, and I try to understand the logical form of the sentences I utter so I know what fact about the world would make them true or false. So when you say I "don't have to worry about" whether numbers are real, I agree - their reality or non-reality isn't causing me any problem; I'm just curious.

I also view epistemic uniformity as pretty important, because we should have the same standards for knowledge across all fields. You seem to think that mathematical knowledge doesn't exist as such, because mathematical "knowledge" is just what we have derived within a system. I can agree with that! The Benacerraf paper presents a big problem for realism; you seem to buy the argument, and you're willing to put up with losing semantic uniformity because of it.

I think our difference comes down to how much we each want semantic uniformity in a theory of mathematical truth.

Update 2021-05-31

Hi - it looks like you used a relative link (which resolves here to https://www.lesswrong.com/capital-gains-in-agi-big.png), but you want the absolute link instead: https://www.jefftk.com/capital-gains-in-agi-big.png
