gelisam · 10y · 40

A → □ A doesn't mean "all true statements are provable" [...]

I agree! You are right, the formula "A → □ A" does not mean that all true statements are provable. It means that all "true" statements are "provable" :)

Discussing the math of provability is tricky, because there are many notions of truth and many notions of provability. There are true-in-the-real-world and provable-in-the-real-world, the notions we are trying to model. True-in-the-real-world does not imply provable-in-the-real-world, so ideally, we would like a model in which there are no rules to infer provable-in-the-model from true-in-the-model. Provability logic provides such a model: it uses "A" for true-in-the-model, and "□ A" for provable-in-the-model.

One extra level of complexity is that "provable-in-the-model" is not the same as "provable-in-the-real-world by following the rules of the model", which I'll abbreviate as "derivable", or "PL-derivable" when the model is Provability logic. The reason the two are different is that "□ A" only represents provability; the actual rules of the model don't care what meaning we assign to the symbols. If the rules were poorly chosen, it could very well be that a formula "□ A" was PL-derivable but that the formula "A" wasn't.

A → □ A [...] means "all statements we can construct a proof term for are provable"

Yes and no. Yes, at runtime, all the values we will actually receive will have been constructed by our callers. No, Agda implications do not always mean that if the left-hand side is constructible, so is the right-hand side. Implications of the form (A → ⊥), for example, mean that the left-hand side is not constructible. In that case, we never receive any value from our callers, and we implement (A → ⊥) by writing a constructive proof that all possible inputs are absurd.
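For example, here is a minimal, self-contained sketch of such a negative statement, using the standard library (the lemma name is my own):

open import Data.Empty using (⊥)
open import Data.Nat using (ℕ; zero; suc)
open import Relation.Binary.PropositionalEquality using (_≡_)

-- (A → ⊥) is implemented by covering every possible input;
-- here no input can exist at all, so the absurd pattern () covers them.
zero≢suc : ∀ {n} → zero ≡ suc n → ⊥
zero≢suc ()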

But even if we ignore this corner case, it would only be justified to conclude that receiving an A as input implies that an Agda proof for A exists. Löb's theorem is not about Agda, but about a specific model of provability encoded by the rules of provability logic. If your proof uses (A → □ A) as an assumption (as opposed to ⊢ A → ⊢ □ A), then your proof is not a proof of Löb's theorem, because that theorem makes no such assumption.

EDIT 2: Updated the gist, should actually contain a valid proof of Löb's theorem now

Your latest version at the time of writing is revision 6, which uses a postulate

postulate
  ev : ∀ {A} → A → □ A

instead of the data declaration

data □_ (A : Set β) : Set (suc β) where
  ev : A → □ A

That is, you have removed the (□ A → A) direction, but kept the (A → □ A) direction. That's a bit better, but you're still using the forbidden assumption.

the real issue is the construction of the fixpoint

I agree, that part does look hard to implement.

gelisam · 10y · 90

When you make a data declaration like this, you are essentially postulating (A → □ A) and (□ A → A), for any formula A. This essentially removes the distinction between A (the formula A is "true") and □ A (the formula A is "provable"), meaning you are not working inside the modal logic you think you are working in. We made the same mistake in the /r/haskell thread at first.
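To see the collapse concretely, here is a minimal sketch (ignoring universe levels) of how such a declaration hands us both directions:

data □_ (A : Set) : Set where
  ev : A → □ A              -- the constructor is exactly (A → □ A)

unbox : ∀ {A : Set} → □ A → A
unbox (ev a) = a            -- pattern matching is exactly (□ A → A)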

It's an easy mistake to make, because the first rule of inference for provability logic is

⊢ A
------
⊢ □ A

So it looks like (A → □ A) should be a theorem, but notice that the left-hand side of the turnstile is empty. This means that the rule is only valid when there are no assumptions in context; or equivalently, in a programming language, when there are no variables in scope. Since Agda doesn't know how to talk about which Agda variables are in scope, we cannot implement this rule as an Agda function: such a function would be available whether the scope is empty or not.

A few examples would probably help. A valid use of inference rule 1 would look like this:

----- variable rule
A ⊢ A
--------- deduction rule
⊢ (A → A)
----------- inference rule 1
⊢ □ (A → A)

Here, we have used inference rule 1 and a few other common rules to prove that the trivial implication (A → A) is "provable". By comparison, here is what happens when we try to use the same technique to prove that all "true" statements are "provable", i.e. (A → □ A). (I put "true" and "provable" in quotes in order to distinguish "provable according to the map" from provable in the territory. Provability logic is a map we use to understand provability.)

----- variable rule
A ⊢ A
------- inference rule 1 (invalid, the context is not empty)
A ⊢ □ A
--------- deduction rule
⊢ A → □ A

As you can see, the proof is invalid. And that's a good thing, too, because if we could prove that all "true" statements were "provable", then our provability logic would have allowed us to prove something which we know to be false, so the logic would have been a poor model of real-world provability.
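The Agda counterpart of the invalid step is easy to write down. A minimal sketch, assuming the gist's forbidden ev postulate: the λ-bound variable plays exactly the role of the non-empty context.

postulate
  □_ : Set → Set
  ev : ∀ {A} → A → □ A      -- the forbidden assumption

-- With ev in scope, the invalid derivation above typechecks:
-- the λ-bound a is precisely the non-empty context "A ⊢ A".
all-true-provable : ∀ {A : Set} → A → □ A
all-true-provable = λ a → ev a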

Anyway, the conclusion of all this is that we should not represent inference rule 1 by an Agda function from A to □ A, but by an Agda function from "Theorem A" to "Theorem (Provable A)", where "Theorem" and "Provable" are syntactic constructs representing the turnstile and the box, not Agda's own notions of truth and provability. That way, the rule can only ever be applied to closed derivations, just as the empty left-hand side of the turnstile demands.
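A minimal sketch of that representation, where Formula, Theorem, ⇾, and Provable are illustrative names rather than code from the gist:

data Formula : Set where
  _⇾_      : Formula → Formula → Formula    -- syntactic implication
  Provable : Formula → Formula              -- syntactic □

data Theorem : Formula → Set where
  -- inference rule 1 (necessitation): it consumes a *closed*
  -- derivation of A, so it can never fire under an assumption.
  rule1 : ∀ {A} → Theorem A → Theorem (Provable A)
  -- modus ponens, the axioms of provability logic, etc.
  -- would be further constructors.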

gelisam · 11y · 40

I understand how that might have happened. Now that I am no longer a heroic volunteer rescuing my beloved website like a maiden in distress, but just a potential contributor to an open source project, my motivation has dropped.

It is a strange inversion of effect. The issue list and instructions both make it easier for me to contribute, but since they reveal that the project is well organized, they also demotivate me because a well-organized project makes me feel like it doesn't need my help. This probably reveals more about my own psychology than about effective volunteer recruitment strategies, though.

gelisam · 11y · 30

After locating the source and the issue list, I found instructions which indicate that there are, after all, non-zero engineering resources for lesswrong development. Specifically, somebody is sorting the incoming issues into "issues for which contributions are welcome" versus "issues which we want to fix ourselves".

The path to becoming a volunteer contributor is now very clear.

gelisam · 11y · 150

Are you kidding? Sign me up as a volunteer polyglot programmer, then!

Although, my own eagerness to help makes me think that the problem might not be that you tried to ask for volunteers and didn't get any, but rather that you tried to work with volunteers and something else didn't work out.

gelisam · 11y · 00

I'm pretty sure I would come up with a reason to continue behaving as I do today. That's what I did when I discovered, to my horror, that good and bad were human interpretations and not universal mathematical imperatives. Or are you asking what the rational reaction should be?

gelisam · 12y · 00

Thanks for the suggestion. This is not, however, what I was looking for.

Cached Thoughts explains that merely hearing a phrase might be enough for our brain to remember it as true, while the genetic fallacy warns that the original cause of a belief is not as important as the sum total of evidence for or against that belief.

I am not, however, looking for evidence that our past taints our beliefs. I have come to realize that finding the historical cause of a thought is a good first step towards getting rid of an unwanted thought, and I wanted to know whether this strategy was covered yet. If not... I'll accumulate a bit more evidence, and then maybe I'll write a post!

gelisam · 12y · 00

search for the historical causes of your thoughts, rather than their justifications.

Is there a standard name or LW article on the subject? I first stumbled upon the importance of that skill here, on this site, and I wish I knew more about it than just that one personal anecdote.

gelisam · 14y · 80

Great case study, in that studying my own reaction to your article has taught me a lot about my own decision making. And my conclusion is that reading a rationalist blog isn't sufficient to become rational!

I am thin, despite having very bad eating habits (according to conventional dietary wisdom). I had not heard of Taubes before. Specifically, I had never considered that conventional dietary wisdom could be incorrect; people say that I eat unhealthily, and I have simply taken their word for it. The fact that I continue to eat unhealthily has more to do with laziness than rationality.

How much have I shifted my beliefs now that I have read this article? I don't think in probabilities yet, but... by a lot. Probably by more than I should. Neither Taubes' belief, nor bentarm's, nor Eliezer's, nor Taubes' lack of biological credentials affected my own degree of belief much. Just the fact that there exists an alternative theory of food has succeeded in demolishing the confidence I had in the conventional theory.

How could such a weak amount of evidence cause such a large shift? By relying on a bias which happened to be pushing in the same direction. Just ask me: "since you have decreased your belief in the conventional theory after reading an article on Taubes, you must have correspondingly increased your belief in Taubes' theory, right?"

Well, let me google whether the fatty food I currently eat is high-carb or low-carb, and I'll get back to you on that.

gelisam · 14y · 00

Could you link some of these stories, please? I am known to entertain utopian ideas from time to time, but if utopias really do hurt people, then I'd rather believe that they hurt people.
