justinpombrio


Paxlovid Remains Illegal: 11/24 Update

Do you have references for the Paxlovid paradox? How transparent is the FDA about its reasoning, and about what it is doing during the approval process?

As it is, some people will ask about specifics that could make the slow approval look more reasonable. For example:

  • Did they actually stop the Paxlovid trial, or did they transform it from an "is this drug effective?" trial into an "is this drug safe?" trial by giving the control group Paxlovid and continuing to monitor everyone?
  • Are they currently looking at manufacturing techniques, labeling, dosing, and such? Or are they just waiting until March to give a generic thumbs-up?

Overall, I'm wondering how strong a case could be made to normal people for whom "but safety!" is a strong counterargument to "30,000 deaths!".

[Book Review] "Sorcerer's Apprentice" by Tahir Shah

From what I've heard, the trick with lead is to wet your hand first. Then the lead boils the water, which presumably forms a layer of steam that briefly pushes the lead away from your hand (the Leidenfrost effect).

This is not advice. Do not stick your hand in molten metal.

SIA > SSA, part 1: Learning from the fact that you exist

Isn't the conclusion to the Sleeping Beauty problem that there are two different but equally valid ways of applying probability theory to the problem; that natural language and even formal notation make it very easy to gloss over the difference; and that which one you should use depends on exactly what question you mean to ask? Would those same lessons apply to SIA vs. SSA?

In Sleeping Beauty, IIRC the distinction is between "per-experiment" probabilities and "per-observation" probabilities. My interpretation of these was to distinguish the question "what's the probability that the coin came up heads" (a physical event that happened exactly once, when the coin landed on the table) from "what's the probability that Beauty will witness the coin being heads" (an event in Beauty's brain that will occur once or twice). The former has probability 1/2, and the latter 1/3. Though it might be a bit more subtle than that.
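That per-experiment vs. per-observation split is easy to check by simulation. Here's a minimal Monte Carlo sketch (my own illustration, assuming the standard setup: heads means Beauty wakes once, tails means she wakes twice):

```python
import random

def sleeping_beauty(trials=100_000):
    heads_experiments = 0
    total_awakenings = 0
    heads_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5  # fair coin, flipped once per experiment
        wakes = 1 if heads else 2      # heads: one awakening; tails: two
        heads_experiments += 1 if heads else 0
        total_awakenings += wakes
        heads_awakenings += wakes if heads else 0
    print("per-experiment P(heads):", heads_experiments / trials)           # ~0.5
    print("per-awakening P(heads): ", heads_awakenings / total_awakenings)  # ~0.333

sleeping_beauty()
```

Both numbers come from the same simulated worlds; they differ only in what you count, which is the whole point.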

For SSA vs. SIA, who do you want to be right most often? Do you want a person chosen uniformly at random from among all people in all possible universes to be right most often? If so, use SIA. Or do you want to maximize average-rightness-per-universe? If so, use SSA, or something like it; I'm not exactly clear.

Let's be concrete, and look at the "heads: 1 person in a white room and 9 chimps in a jungle; tails: 10 people in a white room" situation.

If God says "I want you to guess whether the coin landed heads or tails. I will exterminate everyone who guesses wrong.", then you should guess tails because that saves the most people in expectation. But if God says "I want to see how good the people of this universe are at reasoning. Guess whether the coin landed heads or tails. If most people in your universe guess correctly, then your universe will be rewarded with the birth of a single happy child. Oh and also the coin wasn't perfectly fair; it landed heads with probability 51%.", then you should guess heads because that maximizes the chance that the child is born.
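The arithmetic behind those two recommendations is simple enough to spell out. A sketch using the populations from the example (the helper functions are my own illustration; the chimps don't guess):

```python
def expected_survivors(guess, p_heads):
    """God exterminates everyone who guesses wrong."""
    if_heads = 1 if guess == "heads" else 0   # the 1 person in the white room
    if_tails = 10 if guess == "tails" else 0  # the 10 people in the white room
    return p_heads * if_heads + (1 - p_heads) * if_tails

def p_child_born(guess, p_heads):
    """The child is born iff most people guess correctly; if everyone
    follows the same rule, that's just P(the coin matches the guess)."""
    return p_heads if guess == "heads" else 1 - p_heads

# Scenario 1: fair coin, save the most people in expectation.
print(expected_survivors("tails", 0.5), expected_survivors("heads", 0.5))  # 5.0 vs 0.5
# Scenario 2: 51% heads, maximize the chance the child is born.
print(p_child_born("heads", 0.51), p_child_born("tails", 0.51))            # 0.51 vs 0.49
```

Same coin, same rooms; the two objectives just reward different guesses.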

I'm not sure that's all exactly right. But the point I'm trying to make is, are we sure that "the probability that you're in the universe with 1 person in the white room" has an unambiguous answer?

Three enigmas at the heart of our reasoning

As you said, very often a justification-based conversation is looking to answer a question, and stops when it's answered using knowledge and reasoning methods shared by the participants. For example, Alice wonders why a character in a movie did something, and then has a conversation with Bob about it. Bob shares some facts and character-motivations that Alice didn't know, they figure out the character's motivation together, and the conversation ends. This relied on a lot of shared knowledge (about the movie universe plus the real universe), but there's no reason for them to question their shared knowledge. You get to shared ground, and then you stop.

If you insist on questioning everything, you are liable to get to nodes without justification:

  • "The lawn's wet." / "Why?" / "It rained last night." / "Why'd that make it wet?" / "Because rain is when water falls from the sky." / "But why'd that make it wet?" / "Because water is wet." / "Why?" / "Water's just wet, sweetie.". A sequence of is-questions, bottoming out at a definition. (Well, close to a definition: the parent could talk about the chemical properties of liquid water, but that probably wouldn't be helpful for anyone involved. And they might not know why water is wet.)
  • "Aren't you going to eat your ice cream? It's starting to melt." / "It sure is!" / "But melted ice cream is awful." / "No, it's the best." / "Gah!". This conversation comes to an end when the participants realize that they have fundamentally different preferences. There isn't really a justification for "I dislike melted ice cream". (There's an is-ought distinction here, though it's about preferences rather than morality.)

Ultimately, all ought-question-chains end at a node without justification. Suffering is just bad, period.

And I think if you dig too deep, you'll get to unjustified-ish nodes in is-question-chains too. For example, direct experience, or the belief that the past informs the future, or that reasoning works. You can question these things, but you're liable to end up on shakier ground than the thing you're trying to justify, and to enter a cycle. So, IDK: you can either decline to count those flimsy edges and get a dead end, or count them and get a cycle, whichever you prefer?

We would just go and go and go until we lost all energy, and neither of us would notice that we’re in a cycle?

There's an important shift here: you're not wondering how the justification graph is shaped, but rather how we would navigate it. I am confident that the proof applies to the shape of the justification graph. I'm less confident you can apply it to our navigation of that graph.

“huh, it looks like we are on a path with the following generator functions”

Not all infinite paths are so predictable / recognizable.

[This comment is no longer endorsed by its author]
Three enigmas at the heart of our reasoning

If you ask me whether my reasoning is trustworthy, I guess I'll look at how I'm thinking at a meta-level and see whether there are logical justifications for that category of thinking, plus look at examples of my thinking in the past and see how often I was right. So, roughly, your "empirical" and "logical" foundations.

And I sometimes use my reasoning to bootstrap myself to better reasoning. For example, I didn't use to be Bayesian; I did not intuitively view my beliefs as having probabilities associated with them. Then I read Rationality, and was convinced by both theoretical arguments and practical examples that being Bayesian was a better way of thinking, and now that's how I think. I had to evaluate the arguments in favor of Bayesianism in terms of my previous means of reasoning, which was overall more haphazard, but fortunately good enough to recognize the upgrade.

From the phrasing you used, it sounded to me like you were searching for some Ultimate Justification that could by definition only be found in regions of the space that have been ruled out by impossibility arguments. But it sounds like you're well aware of those reasons, and must be looking elsewhere; sorry for misunderstanding.

But honestly I still don't know what you mean by "trustworthy". What is the concern, specifically? Is it:

  • That there are flaws in the way we think, for example the Wikipedia list of biases?
  • That there's an influential bias that we haven't recognized?
  • That there's something fundamentally wrong with the way that we reason, such that most of our conclusions are wrong and we can't even recognize it?
  • That our reasoning is fine, but we lack a good justification for it?
  • Something else?
Three enigmas at the heart of our reasoning

(2) doesn't require the graph to be finite. Infinite graphs also have the property that if you repeatedly follow in-edges, you must eventually reach (i) a node with no in-edges, or (ii) a cycle, or (iii) an infinite chain.

EDIT: Proof, since if we're talking about epistemology I shouldn't spout things without double checking them.

Let G be any directed graph with at most countably many nodes. Let P be the set of paths in G. At least one of the following must hold:

(i) Every path in P is finite and acyclic.
(ii) At least one path in P is cyclic.
(iii) At least one path in P is infinite.

Now we just have to show that (i) implies that there exists at least one node in G that has no in-edges. Suppose, for contradiction, that every node has at least one in-edge. Pick any node N and repeatedly follow in-edges backwards: N ← N' ← N'' ← .... No node can appear twice in this chain, because a repeat would form a cyclic path, contradicting (i). So the chain consists of distinct nodes and never terminates, which makes it an infinite path in P, contradicting (i) again. Hence some node in G has no in-edges. (A.k.a. it lacks justification.)
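For finite graphs you can watch this dichotomy happen directly. A toy sketch (my own illustration; in the finite case the backward walk must terminate, and the proof above covers the infinite case):

```python
def trace_justifications(in_edges, start):
    """Follow justifications backwards until we hit an unjustified node
    or revisit a node (a cycle). in_edges maps each node to the list of
    nodes that justify it."""
    path, seen = [start], {start}
    node = start
    while in_edges.get(node):
        node = in_edges[node][0]  # follow any one in-edge backwards
        if node in seen:
            return ("cycle", path + [node])
        path.append(node)
        seen.add(node)
    return ("unjustified node", path)

print(trace_justifications({"lawn is wet": ["it rained"], "it rained": []},
                           "lawn is wet"))
# ('unjustified node', ['lawn is wet', 'it rained'])
print(trace_justifications({"A": ["B"], "B": ["A"]}, "A"))
# ('cycle', ['A', 'B', 'A'])
```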
Three enigmas at the heart of our reasoning

Yeah. Though you might be able to re-phrase the reasoning to turn it into one of the others?

EDIT: in more detail, it's something like this. I have a whole bunch of ways of reasoning, and can use many of them to examine the others. And they all generally agree, so it seems fine. (Sean Carroll says this.) You can't use completely broken reasoning to figure the world out. But if you start with partially broken reasoning, you can bootstrap your way to better and better reasoning. (Yudkowsky says this.)

The main point is that I have been convinced by the reasoning in my previous comment and others that a search for an Ultimate Justification is fruitless, and have adjusted my expectations accordingly. When your intuitions don't match reality, you need to update your intuitions.

Three enigmas at the heart of our reasoning

Maybe a clearer way to say it is that I actually agree with everything you’ve said, but I don’t think what you’ve said is yet sufficient to resolve the question of whether our reasoning is based on something trustworthy.

I get the impression that by the standards you have set, it is impossible to have a "trustworthy" justification:

  1. For anything you believe, you should be able to ask for its justifications. Thus justifications form a graph, with an edge from A to B meaning that "A justifies B".
  2. Just from how graphs work, if you start from any node and repeatedly ask for its justifications, you must eventually reach (i) a node with no justifications (in-edges), or (ii) a cycle, or (iii) an infinite chain.
  3. However, unjustified beliefs, cyclic reasoning, and infinite regress are all untrustworthy.

Do you simultaneously believe all three of these statements? I disbelieve 3.

Three enigmas at the heart of our reasoning

Have you read the Sequences, or Sean Carroll's 'The Big Picture'? Both talk about these questions. For example:

We can appeal to empiricism to provide a foundation for logic, or we can appeal to logic to provide a foundation for empiricism, or we can connect the two in an infinitely recursive cycle.

See explain-worship-ignore

and more generally

mysterious-answers

The system of empiricism provides no empirical basis for believing in these foundational principles.

See no-universally-compelling-arguments-in-math-or-science

and more generally

mind-space

simpler hypotheses are more likely to be true than complicated hypotheses

I'm not sure if this appeared in the Sequences or not, but there's a purely logical argument that simpler hypotheses must be more likely. For any level of complexity, there are finitely many hypotheses that are simpler than that, and infinitely many that are more complex. You can use this to prove that any probability distribution must be biased towards simpler hypotheses.
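Spelled out (my reconstruction of the argument, so the details may differ from wherever it originally appeared): enumerate the hypotheses in some order and note that their probabilities must sum to one:

```latex
\[ \sum_{i=1}^{\infty} p(h_i) = 1 \]
% For any \varepsilon > 0, at most 1/\varepsilon hypotheses can have
% p(h_i) \ge \varepsilon. That finite set has some maximum complexity, so
% every hypothesis more complex than that has probability below \varepsilon:
% probability is forced toward zero as complexity grows.
```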

We need not doubt all of mathematics, but we might do well to question what it is that we are trusting when we do not doubt all of mathematics.

"All of mathematics" might not be as coherent as you think. There's debate around the foundations. For example:

  • Should the foundation be set theory (ZF axioms), or constructive type theory?
  • Axiom of Choice: true or false?
  • Law of excluded middle: true or false?

(I'm not a mathematician, so take this with a grain of salt.)

There are two very different notions of what it means for some math to be "true". One is that the statement in question follows from the axioms you're assuming. The other is that you're using this piece of math to model the real world, and the corresponding statement about the real world is true. For example, "2 + 2 = 4" can be proved using the Peano axioms, with no regard to the world at all. But there are also (multiple!) real situations that "2 + 2 = 4" models. One is that if you put two cups together with two other cups, you'll have four cups. Another is that if you pour two gallons of gas into a car that already has two gallons of gas, the car will have four gallons. In this second model, it's also true that "1/2 + 1/2 = 1". In the first model, it isn't: the correspondence breaks down because no one wants a shattered cup.
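To make the first notion concrete: "2 + 2 = 4" really can be checked purely formally. Here's a sketch in Lean (my own toy Peano-style naturals, named Nat' to avoid clashing with the built-in type; not anything from the discussion above):

```lean
-- Natural numbers built from zero and successor, Peano-style.
inductive Nat' where
  | zero : Nat'
  | succ : Nat' → Nat'

open Nat'

-- Addition defined by recursion on the second argument.
def add : Nat' → Nat' → Nat'
  | n, zero   => n
  | n, succ m => succ (add n m)

-- "2 + 2 = 4" holds by computation alone, with no reference to cups or gallons.
example :
    add (succ (succ zero)) (succ (succ zero))
      = succ (succ (succ (succ zero))) := rfl
```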

I'm actually very interested to see what assumptions about the real world correspond to mathematical axioms. For example, if you interpret mathematical statements to be "objectively" true then the law of the excluded middle is true, but if you interpret them to be about knowledge or about provability, then the law of the excluded middle is false. I have no idea what the axiom of choice is about, though.

the-simple-truth

I am asking you to doubt that your reason for correctly (in my estimation) not doubting ethics can be found within ethics.

Have you read about Hume's is-ought distinction? He writes about it in 'A Treatise of Human Nature'. It says that ought-statements cannot be derived from is-statements alone. You can derive an is-statement from another, for example by using modus ponens. And you can derive one ought-statement from another ought-statement, plus some is-statement reasoning, for example "you shouldn't punch him because that would hurt him, and someone being hurt is bad". But you can't go from pure is-statements to an ought-statement. Yudkowsky says similar things. Once you instinctively see this distinction, it's not even tempting to look for an ultimate justification of ethics within empiricism, because it's obviously not there.


The problem, I suspect, is that these questions of deep doubt in fact play within our minds all the time, and hinder our capacity to get on with our work.

It's always dangerous to put thoughts in other people's minds! These questions really truly do not play within my mind. I find them interesting, but doubt they're of much practical importance, and they do not bother me. I'm sure I'm not alone.

It seems like you are unhappy without having "a satisfying conceptual answer to 'why should I believe it?' within the systems that we are questioning." Why is that? Do you not want to strongly believe in something without a strong and non-cyclic conceptual justification for it?

Book review: The Checklist Manifesto

Super-intelligence deployment checklist:

  1. DO NOT DEPLOY THE AGI UNTIL YOU HAVE COMPLETED THIS CHECKLIST.
  2. Check the cryptographic signature of the utility function against MIRI's public key.
  3. Have someone who has memorized the known-benevolent utility function you plan to deploy check that it matches their memory exactly. If no such person is available, do not deploy.
  4. Make sure that the code references that utility function, and not another one.
  5. Make sure the code is set to maximize utility, not minimize it.
  6. Deploy the AGI.

(This was written in jest, and is almost certainly incomplete or wrong. Do not use when deploying a real super-intelligent AGI.)
