abramdemski


It is possible to manually adjust the number when signing up. But, point taken.

Mostly, but not necessarily. I mean the preservation of some properties, not all or most properties. One could imagine the AI preserving the safety-relevant aspects while radically changing everything else.

I also worry that 'high-fidelity copying' connotes some outside system doing the copying, which would miss the point entirely. The difficulty of tiling isn't the difficulty of copying; the central difficulty is trusting something as intelligent as yourself or more so: trusting something you can't predict in detail, and therefore have to trust on general principles (such as understanding its goals).

Hmm, any fun name suggestions?

I think you are interpreting me as saying the proposition A -> A, which is a statement rather than an argument. What I meant was A ⊢ A, the argument from A to A. That said, I didn't think the distinction was important enough to focus on in this essay.

You can define circular logic as A -> A if you want, but I think this will be an uncharitable interpretation of most real-life arguments that people would call circular. It also doesn't fit the geometric intuition behind 'circular' well. A ⊢ A leads back around to where it started, while A -> A is doing something else.
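For concreteness, the statement/argument distinction can be sketched in Lean 4 (the syntax and the name `prop_self` are illustrative choices of mine, not from the essay):

```lean
-- The proposition A → A: a single statement, an implication,
-- which happens to be provable.
theorem prop_self (A : Prop) : A → A := fun h => h

-- The argument from A to A: a derivation whose hypothesis is A
-- and whose conclusion is A. The same content plays a different
-- role here: A appears as a premise, not inside a statement.
example (A : Prop) (h : A) : A := h
```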

The Wikipedia article on circular reasoning sides with me on the issue:

> Circular reasoning (Latin: circulus in probando, "circle in proving"; also known as circular logic) is a logical fallacy in which the reasoner begins with what they are trying to end with. Circular reasoning is not a formal logical fallacy, but a pragmatic defect in an argument whereby the premises are just as much in need of proof or evidence as the conclusion, and as a consequence the argument fails to persuade.

Yeah, I would have liked to dig much deeper into what in the world[1] "justification" points at, but I thought the post would get too long and complex for the simple point being made.

1. I mean: what thing-in-the-world is being pointed at; what are the real phenomena behind "justification"; why we use such a concept.

> circular justifications seem necessary in practice
>
> I didn't see any arguments which point to that unless you mean the regress argument / disjunction

Yes, I agree: the essay doesn't really contain a significant argument for this point. "Seem necessary in practice" is more of an observation, a statement of how things seem to me.

The closest thing to a positive argument for the conclusion is this:

> However, in retrospect I think it's pretty clear that any foundations are also subject to justificatory work, and the sort of justification needed is of the same kind as is needed for everything else. Therefore, coherentism.

And this, which is basically the same argument:

> My reasons [...] are practical: I'm open to the idea of codifying excellent foundational theories (such as Bayesianism, or classical logic, or set theory, or what-have-you) which justify a huge variety of beliefs. However, it seems to me that in practice, such a foundation needs its own justification. We're not going to find a set of axioms which just seem obvious to all humans once articulated. Rather, there's some work to be done to make them seem obvious.

I also cite Eliezer stating a similar conclusion:

> Everything, without exception, needs justification. Sometimes—unavoidably, as far as I can tell—those justifications will go around in reflective loops.

> Circular arguments fail to usefully constrain our beliefs; any assumptions we managed to justify based on evidence of EV will assign negative EV for circular arguments, and so there is no available source of justification from existing beliefs for adopting a circular argument, while there is for rejecting them.

As mentioned in AnthonyC's comment, circular arguments do constrain beliefs: they show that everything in the circle comes as a package deal. Any point in the circle implies the whole.
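The "package deal" point can be checked by brute force. The sketch below (helper names are mine, not from the thread) enumerates all truth assignments for a three-step circle A -> B, B -> C, C -> A and confirms that in every model where the circle holds, the truth of any one proposition forces the truth of all three.

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

def circle(a, b, c):
    # The circular argument's implications: A -> B, B -> C, C -> A.
    return implies(a, b) and implies(b, c) and implies(c, a)

# In every model satisfying the circle, if any one proposition is true,
# then all of them are: the circle makes its members a package deal.
package_deal = all(
    not circle(a, b, c) or not (a or b or c) or (a and b and c)
    for a, b, c in product([True, False], repeat=3)
)
print(package_deal)  # -> True
```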

> No branch of this disjunction applies. Justifications for assumptions bottom out in EV of the reasoning, and so are justified when the EV calculation is accurate. A reasoner can accept less than perfect accuracy without losing their justification -- the value of reasoning bottoms out in the territory, not the map, and so "survived long enough to have the thought" and similar are implicitly contributing the initial source of justification.

I can easily interpret this as falling into branches of the disjunction, and I am not sure how to interpret it as falling into none of them. It reads most naturally as a "justification doesn't always rest on further beliefs" type of view ("the value of reasoning bottoms out in the territory, not the map").

> All circular reasoning which is sound is tautological and cannot justify shifting expectation.

Does your perspective on this also imply that mathematical proofs should never shift one's beliefs? It sounds like you are assuming logical omniscience.

Also, it is possible for a circular argument like "A; A -> B; so, B; and also, B -> A; therefore, A" to be sound without being tautological. The implications can be contingently true rather than tautologically true, and so can the premise A.
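That claim can be checked with a small truth table. In the sketch below (helper names are illustrative), the conjunction of the premises A, A -> B, and B -> A is satisfiable, so the argument can be sound when the premises happen to be true, yet it is not a tautology, since it fails whenever A is false.

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q.
    return (not p) or q

def premises(a, b):
    # Premises of the circular argument: A, A -> B, B -> A.
    return a and implies(a, b) and implies(b, a)

rows = list(product([True, False], repeat=2))
satisfiable = any(premises(a, b) for a, b in rows)  # holds when A = B = True
tautology = all(premises(a, b) for a, b in rows)    # fails when A = False

print(satisfiable, tautology)  # -> True False
```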

The strategy "ignore the arguments" still goes wrong if they've published an incorrect mathematical proof, with a flaw you could have spotted. So it's still clearly wrong in general, even with this adjustment.