All of AVoropaev's Comments + Replies

The case for hypocrisy

But that's a fix to a global problem that you won't fix anyway. What you can do is allocate some resources to fixing a lesser problem: "this guy had nothing to eat today".

It seems to me that your argument proves too much -- when faced with a problem that you can fix, you can always say "it is a part of a bigger problem that I can't fix" and do nothing.

The case for hypocrisy

What do you mean by 'real fix' here? What if I said that the real-real fix requires changing human nature and materializing food and other goods out of nowhere? That might be a more effective fix, but it is unlikely to happen in the near future and it is unclear how you could make it happen. Donating money now might be less effective, but it is something that you can actually do.

Gerald Monroe (4mo): A real fix is forcing everyone in a large area to contribute to fixing a problem. If enough people can't be compelled to contribute, the problem can't be fixed. Doing something that costs you resources but doesn't fix the problem, and that disadvantages you relative to others who aren't contributing but are competing with you, isn't a viable option. In the prisoner's dilemma you may preach always-cooperate, but you have to defect if your counterparty won't play fair. Similarly, Warren Buffett can preach that billionaires should pay more taxes while not paying any extra voluntarily until all billionaires have to.
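The defect-if-they-defect logic can be sketched with a toy payoff matrix. The numbers below are the standard illustrative prisoner's dilemma values, not anything from the comment:

```python
# Hypothetical one-shot prisoner's dilemma payoffs:
# (my move, opponent's move) -> my payoff.
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(opponent_move):
    """Return my payoff-maximizing move against a fixed opponent move."""
    return max(["cooperate", "defect"],
               key=lambda my_move: PAYOFF[(my_move, opponent_move)])

# Whatever the counterparty does, defecting is the best response:
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```

With these payoffs, defecting dominates: it is the best response even when the other side cooperates, which is the structure the comment is gesturing at.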
In Defence of Spock

Detailed categorizations of mental phenomena sound useful. Is there a way for me to learn that without reading religious texts?

frcassarino (5mo): The Qualia Research Institute is working on building a catalogue of qualia, iirc.
Julia Galef and Matt Yglesias on bioethics and "ethics expertise"

How can you check a proof of any interesting statement about the real world using only math? The best you can do is check for mathematical mistakes.

Gerald Monroe (6mo): "What do they claim to know, and how do they know it?" No amount of credentials or formal experience makes an expert not wrong if they do not have high-quality evidence, which they have shown, to derive their conclusions from, along with an algorithm, formally proven correct, that they show they are using.

Or, in the case of challenge trials: an ethicist claims to value human life. A challenge trial only risks the lives of a few people, and even if they die, it would have saved hundreds of thousands. In this case the "basic math" is one of multiplication and quantities, showing that the "experts" don't know anything. As you might notice, ethicists do not have high-quality information as input to generate their conclusions from. Without that information you cannot expect more than expensive bullshitting. "Ethics" today is practiced by reading ancient texts and more modern arguments, many of which have cousins in religion.

But ethics is not philosophy. It is actually a math problem. Ultimately, there are things you claim to value ("terminal values"). There are actions you can consider taking. Some actions have a greater expected value on the things you care about, and some actions have a lesser expected value. Taking any action other than the one with the highest expected value (factoring in variance) is UNETHICAL. Yes, professional ethicists today are probably mostly all liars and charlatans, no more qualified than a water dowser. I think EY worked down to this conclusion in a sequence, but this is the simple answer.

One general rule of thumb if you didn't read the above: if an expert claims to know what they are doing, look at the evidence they are using. I don't know the anatomy of the human body well enough to gainsay an orthopedic surgeon, but I'm going to trust the one that actually looks at a CT scan over one that palpates my broken limb and reads from some 50-year-old book. It doesn't matter if the second one went to the most credible medical school and has 50 year…
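The "multiplication and quantities" the comment appeals to can be made explicit. A minimal sketch, where every number is made up purely for illustration (not from the comment or from any real trial):

```python
# Illustrative expected-value comparison for a challenge trial.
# All quantities below are assumptions for the sake of the example.
volunteers_at_risk = 100
p_volunteer_death = 0.001           # assumed per-volunteer fatality risk
expected_volunteer_deaths = volunteers_at_risk * p_volunteer_death

lives_saved_if_faster = 300_000     # assumed deaths averted by earlier approval
p_trial_speeds_approval = 0.5       # assumed chance the trial actually helps
expected_lives_saved = lives_saved_if_faster * p_trial_speeds_approval

print(round(expected_volunteer_deaths, 3))   # 0.1
print(expected_lives_saved)                  # 150000.0
print(expected_lives_saved > expected_volunteer_deaths)  # True
```

Under these assumed numbers the expected lives saved exceed the expected volunteer deaths by several orders of magnitude, which is the shape of the argument being made; the conclusion is of course only as good as the inputs.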
Extracting Money from Causal Decision Theorists

I assume you mean that I assume P(money in Bi | buyer chooses Bi) = 0.25? Yes, I assume this, although really I assume that the seller's prediction is accurate with probability 0.75 and that she fills the boxes according to the specified procedure. From this, it then follows that P(money in Bi | buyer chooses Bi) = 0.25.

Yes, you are right. Sorry.

Why would it be a logical contradiction? Do you think Newcomb's problem also requires a logical contradiction?

Okay, it probably isn't a contradiction, because the situation "Buyer writes his decision and it is common... (read more)

Caspar42 (7mo): Sorry for taking some time to reply!

>You might wonder why am I spouting a bunch of wrong things in an unsuccessful attempt to attack your paper.

Nah, I'm a frequent spouter of wrong things myself, so I'm not too surprised when other people make errors, especially when the stakes are low, etc.

Re 1, 2: I guess a lot of this comes down to convention. People have found that one can productively discuss these things without always giving the formal models (in part because people in the field know how to translate everything into formal models). That said, if you want mathematical models of CDT and Newcomb-like decision problems, you can check the Savage or Jeffrey-Bolker formalizations. See, for example, the first few chapters of Arif Ahmed's book, "Evidence, Decision and Causality". Similarly, people in decision theory (and game theory) usually don't specify what is common knowledge, because usually it is assumed (implicitly) that the entire problem description is common knowledge / known to the agent (Buyer). (Since this is decision and not game theory, it's not quite clear what "common knowledge" means. But presumably, to achieve 75% accuracy on the prediction, the seller needs to know that the buyer understands the problem...)

3: Yeah, *there exist* agent models under which everything becomes inconsistent, though IMO this just shows these agent models to be unimplementable. For example, take the problem description from my previous reply (where Seller just runs an exact copy of Buyer's source code). Now assume that Buyer knows his source code and is logically omniscient. Then Buyer knows what his source code chooses and therefore knows the option that Seller is 75% likely to predict. So he will take the other option. But of course, this is a contradiction. As you'll know, this is a pretty typical logical paradox of self-reference. But to me it just says that this logical omniscience assumption about the buyer is implausible and that we should consider agents who…
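The self-reference paradox described here can be illustrated with a tiny sketch. The `omniscient_buyer` function and the two-option setup are hypothetical simplifications, not from the paper:

```python
OPTIONS = ["box 1", "box 2"]

def omniscient_buyer(predicted_choice):
    """A buyer who (by running his own source code) knows which option
    the seller is likely to have predicted takes the other option."""
    return next(o for o in OPTIONS if o != predicted_choice)

# The seller runs an exact copy of the buyer, so her prediction should
# match the buyer's actual choice -- but no such fixed point exists:
for predicted in OPTIONS:
    actual = omniscient_buyer(predicted)
    print(predicted, "->", actual)  # the choice never equals the prediction
```

No input to `omniscient_buyer` is a fixed point, so "the prediction equals the choice" and "the buyer deviates from the prediction" cannot both hold, which is the contradiction the reply points at.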
Extracting Money from Causal Decision Theorists

I've skimmed over the beginning of your paper, and I think there might be several problems with it.
 

  1. I don't see where it is explicitly stated, but I think the information "seller's prediction is accurate with probability 0.75" is supposed to be common knowledge. Is it even possible for a non-trivial probabilistic prediction to be common knowledge? Like, not as in some real-life situation, but as in this condition not being a logical contradiction? I am not a specialist on this subject, but it looks like a logical contradiction. And you can prove absolutely anything if your premise contains a contradiction.
... (read more)
Caspar42 (8mo):

>I think the information "seller's prediction is accurate with probability 0.75" is supposed to be common knowledge [https://en.wikipedia.org/wiki/Common_knowledge_(logic)].

Yes, correct!

>Is it even possible for a non-trivial probabilistic prediction to be common knowledge? Like, not as in some real-life situation, but as in this condition not being a logical contradiction? I am not a specialist on this subject, but it looks like a logical contradiction. And you can prove absolutely anything if your premise contains a contradiction.

Why would it be a logical contradiction? Do you think Newcomb's problem also requires a logical contradiction? Note that in neither of these cases does the predictor tell the agent the result of a prediction about the agent.

>What kinds of mistakes does seller make?

For the purpose of the paper it doesn't really matter what beliefs anyone has about how the errors are distributed. But you could imagine that the buyer is some piece of computer code and that the seller has an identical copy of that code. To make a prediction, the seller runs the code. Then she flips a coin twice. If the coin does not come up Tails twice, she just uses that prediction and fills the boxes accordingly. If the coin does come up Tails twice, she uses a third coin flip to determine whether to (falsely) predict one of the two other options that the agent can choose from. And then you get the 0.75, 0.125, 0.125 distribution you describe. And you could assume that this is common knowledge.

Of course, for the exact CDT expected utilities, it does matter how the errors are distributed. If the errors are primarily "None" predictions, then the boxes should be expected to contain more money and the CDT expected utilities of buying will be higher. But for the exploitation scheme, it's enough to show that the CDT expected utilities of buying are strictly positive.

>When you write "$1 − P(money in Bi | buyer chooses Bi) · $3 = $1 − 0.25 · $3 = $0.25.", you assume that P(m…
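The coin-flip procedure described above is easy to check with a quick Monte Carlo sketch (assuming, as in the paper's setup, that money goes into a box exactly when the seller predicts the buyer will not choose it):

```python
import random

random.seed(0)
OPTIONS = ["box 1", "box 2", "no box"]

def seller_prediction(buyers_choice):
    """Correct with probability 3/4 (coin doesn't come up Tails twice);
    otherwise a third coin flip picks one of the other two options,
    giving each of them probability 1/8."""
    if random.random() < 0.75:
        return buyers_choice
    return random.choice([o for o in OPTIONS if o != buyers_choice])

trials = 200_000
choice = "box 1"
hits = sum(seller_prediction(choice) == choice for _ in range(trials))
print(round(hits / trials, 2))             # ≈ 0.75 prediction accuracy

# Money is in the chosen box iff the prediction was wrong:
print(round((trials - hits) / trials, 2))  # ≈ 0.25 = P(money in Bi | buyer chooses Bi)
```

The simulated frequencies match the 0.75 / 0.25 figures used in the thread's expected-utility calculation.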
What's the big deal about Bayes' Theorem?

I've skimmed over A Technical Explanation of Technical Explanation (you can make links and do other stuff by selecting the text you want to edit, as if you wanted to copy it; if your browser is compatible, a toolbar should appear). I think that's the first time in my life that I've found out that I need to know more math to understand a non-mathematical text. The text is not about Bayes' Theorem, but it is about the application of probability theory to reasoning, which is relevant to my question. As far as I understand, Yudkowsky writes about the same algorithm that... (read more)

What's the big deal about Bayes' Theorem?

That's interesting. I've heard about probabilistic modal logics, but didn't know that not only are logicians working towards statistics, but also vice versa. Is there some book or video course accessible to a mathematics undergraduate?

What's the big deal about Bayes' Theorem?

This formula is not Bayes' Theorem, but it is a similar simple formula from probability theory, so I'm still interested in how you can use it in daily life.

Writing P(x|D) implies that x and D are the same kind of object (data about some physical process?), and there are probably a lot of subtle problems in defining a hypothesis as a "set of things that happen if it is true" (especially if you want to have hypotheses that involve probabilities).

Use of this formula allows you to update probabilities you prescribe to hypotheses, but it is no... (read more)

Vladimir_Nesov (8mo): Here's an example [https://www.lesswrong.com/posts/kz6A7z6JBFiwaAZFK/the-lottery-paradox?commentId=yjChX36cLN8GbdSXi] of applying the formula (to a puzzle).

The above formula is usually called the "odds form of Bayes formula": P(H1|D)/P(H2|D) = (P(H1)/P(H2)) · (P(D|H1)/P(D|H2)). We get the standard form from the odds form by letting H2 be a trivially true hypothesis (so that P(H2) = P(H2|D) = 1 and P(D|H2) = P(D)), and we get the odds form from the standard form by dividing it by itself for two hypotheses (the P(D) term cancels out).

The serious problem with the standard form of Bayes is the P(D) term, which is usually hard to estimate (as we don't get to choose what D is). We can try to get rid of it by expanding P(D) = P(D|H) · P(H) + P(D|¬H) · P(¬H), but that's also no good, because now we need to know P(D|¬H). One way to state the problem... (read more)
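The odds-form update is easy to apply directly: posterior odds = prior odds × likelihood ratio. A toy example, with all numbers assumed for illustration (H1 = "coin lands heads with probability 0.8", H2 = "coin is fair", D = "observed one heads"):

```python
# Odds form of Bayes: posterior odds = prior odds * likelihood ratio.
prior_odds = 1.0               # P(H1)/P(H2): assumed equally plausible a priori
likelihood_ratio = 0.8 / 0.5   # P(D|H1)/P(D|H2) for D = "heads"
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)          # 1.6

# Converting odds back to a probability (valid when H1, H2 exhaust the options):
posterior_p_h1 = posterior_odds / (1 + posterior_odds)
print(round(posterior_p_h1, 4))  # 0.6154
```

Note that P(D) never appears: the likelihood ratio only needs the likelihood of the data under each concrete hypothesis, which is exactly the advantage over the standard form discussed above.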