Jiro

Comments

The Power & Tragedy of Names

Tell this to the people who named GIMP.

Expansive translations: considerations and possibilities

The fact that people have different understanding of the same texts and have to "translate" them through an inferential distance is a necessary evil. Just because something is a necessary evil doesn't mean it's good, and certainly doesn't mean that we should be fine with deliberately creating more of it.

A full explanation of Newcomb's paradox.

Under some circumstances, it seems that option 4 would result in the predictor trying to solve the Halting Problem since figuring out your best option may in effect involve simulating the predictor.

(Of course, you wouldn't be simulating the entire predictor, but you may be simulating enough of the predictor's chain of reasoning that the predictor essentially has to predict itself in order to predict you.)

Inaccessible finely tuned RNG in humans?
Answer by Jiro · Oct 10, 2020

Generate several "random" numbers in your head, trying to generate them randomly but falling prey to the usual problems of trying to generate them in your head. Then add them together and take them mod X to produce a result that is more like a real random number.
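The combining trick above can be sketched in code. This is a hypothetical illustration, not anything from the original comment: the bias weights for a "human-generated" digit are invented for demonstration, but the mechanism (summing several biased draws and reducing mod X) is the one described, and the residual bias really does shrink with each extra term.

```python
import random

def biased_digit(rng):
    # A human-like "random" digit: overweights 7, underweights 0 and 5
    # (the weights here are made up for illustration).
    weights = [1, 4, 4, 4, 4, 1, 4, 10, 4, 4]
    return rng.choices(range(10), weights=weights)[0]

def combined_digit(rng, n=5, modulus=10):
    # Sum several independently biased digits and take the result mod 10;
    # the residual bias shrinks roughly geometrically with each extra term.
    return sum(biased_digit(rng) for _ in range(n)) % modulus

rng = random.Random(0)
trials = 100_000
single = [0] * 10
combo = [0] * 10
for _ in range(trials):
    single[biased_digit(rng)] += 1
    combo[combined_digit(rng)] += 1

def max_bias(counts):
    # Largest deviation of any digit's frequency from the uniform 10%.
    return max(abs(c / trials - 0.1) for c in counts)

print(max_bias(single))  # large: single draws are visibly skewed
print(max_bias(combo))   # much smaller: combining washes the bias out
```

The same idea is why XOR-combining several weak entropy sources is a standard hardening move: if even one source were truly uniform, the combined output would be too.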

Some elements of industrial literacy

Remember the original post about epistemic learned helplessness: making people literate in some areas may be bad. Their lack of understanding prevents them from doing good in those areas, but it also prevents them from falling prey to scams and fallacies in the same areas.

You might want the average person to fail to get excited about a 6% increase in battery energy density, because if too many people get excited about such things, the politicians, media machines, and advertisers will do their best to exploit this little bit of knowledge to extract money from the general public while producing as few actual improvements to energy density as possible. I'm sure you could name plenty of issues where the public understands that they are important without having the breadth of knowledge to not fall for "we have to do something, it's important!"

Weird Things About Money

Bets have fixed costs in addition to the change in utility from the money gained or lost. The smaller the bet, the more those fixed costs dominate. At some point, even the hassle of figuring out whether the bet is a good deal dwarfs the gain in utility from the bet. You may be better off arbitrarily refusing all bets below a certain threshold, because you gain from not having overhead. Even if you lose out on some good bets by having such a policy, you also spend less overhead on bad bets, which makes up for that loss.

The fixed costs also change arbitrarily; if I have to go to the ATM to get more money because I lost a $10.00 bet, the disutility from that is probably going to dwarf any utility I get from a $0.10 profit, but whether the ATM trip is necessary is essentially random.
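The threshold argument reduces to simple arithmetic. This is a minimal sketch with made-up numbers (the 5% edge and 25-cent overhead are assumptions for illustration, not figures from the post): a bet's expected profit scales with the stake, while the overhead doesn't, so below some stake every bet is net-negative no matter how good its odds are.

```python
def bet_net_gain(edge, stake, fixed_cost):
    """Net expected gain of a bet after a fixed overhead cost.

    edge:       expected profit per dollar staked (0.05 = a 5% edge)
    stake:      dollars wagered
    fixed_cost: hassle of evaluating and settling the bet, in dollars,
                paid whether you win or lose
    """
    return edge * stake - fixed_cost

# A 5% edge on a $1 bet loses to even a 25-cent overhead,
# while the same edge on a $100 bet easily clears it.
print(bet_net_gain(0.05, 1.00, 0.25))    # negative: refuse
print(bet_net_gain(0.05, 100.00, 0.25))  # positive: take it

# The break-even stake below which NO bet with this edge is worth taking:
print(0.25 / 0.05)  # $5.00 threshold
```

This is why a blanket "no bets under $X" policy can beat case-by-case evaluation: the evaluation itself is part of the fixed cost, and the policy avoids paying it on bets that were likely to fail the threshold anyway.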

Of course you could model those fixed costs as a reduction in utility, in which case the utility function is indeed no longer logarithmic, but you need to be very careful about what conclusions you draw from that. For instance, you can't exploit such fixed costs to money pump someone.

Words and Implications

“I merely wrote those inscriptions on two boxes, and then I put the dagger in the second one.”

Statements can fail to have consistent truth values. The jester's logical analysis is wrong because he assumes each inscription must be either true or false. That assumption is unwarranted, and given the actual box contents, the inscriptions are neither true nor false.

In other words, the jester didn't correctly analyze the logic of the inscriptions and then merely fail because the result had no connection to the real world. He analyzed the logic incorrectly. If he had analyzed it correctly, he would have figured out that the contents of the boxes could be anything.

Anthropic effects imply that we are more likely to live in the universe with interstellar panspermia

This is similar to the simulation hypothesis, and in fact is sometimes used as a response to the simulation hypothesis.

Potential Ways to Fight Mazes

Consider this recent column by the excellent Matt Levine. It vividly describes the conflict between engineering, which requires people to communicate information and keep accurate records, and the legal system and public relations, which tell you that keeping accurate records is insane.

It certainly sounds like a contradiction, but the spin that article puts on it is unconvincing:

In other words, if you are trying to build a good engineering culture, you might want to encourage your employees to send hyperbolic, overstated, highly quotable emails to a broad internal distribution list when they object to a decision. On the other hand your lawyers, and your public relations people, will obviously and correctly tell you that that is insane: If anything goes wrong, those emails will come out, and the headlines will say “Designed by Clowns,”

This argument is essentially "truth is bad".

We try to pretend that making problems sound worse than they really are, in order to compel action, is not lying. But it really is. This complaint sounds like "we want to get the benefits of lying, but not the harm". If you're overstating a problem in order to get group A to act in ways that they normally wouldn't, don't be surprised if group B also reacts in ways that they normally wouldn't, even if A's reaction helps you and B's reaction hurts you. The core of the problem is not that B gets to hear it, the core of the problem is that you're being deceitful, even if you're exaggerating something that does contain some truth.

(Also, this will result in a ratchet where every decision that engineers object to is always the worst, most disastrous, decision ever, because if your goal is to get someone to listen, you should always describe the current problem as the worst problem ever.)

Reality-Revealing and Reality-Masking Puzzles

The epistemic immune system serves a purpose: some things are very difficult to reason out in full, and some pitfalls are easy to fall into unknowingly. If you were a perfect reasoner, of course, this wouldn't matter, but the epistemic immune system is necessary precisely because you're not. You're running on corrupted hardware, and you've just proposed dumping the error-checking that protects you from flaws in that corrupted hardware.

And saying "we should disable them if they get in the way of accurate beliefs" is, to mix metaphors, like saying "we should dispense with the idea of needing a warrant for the police to search your house, as long as you're guilty". Everyone thinks their own beliefs are accurate; saying "we should get rid of our epistemic immune system if it gets in the way of accurate beliefs" is equivalent to getting rid of it all the time.
