This is a brief elaboration of an analogy I've found insightful for both some purely epistemological questions and some crypto-platform design ideas:

Justification for a belief takes the form of a cryptographic proof.

Here's what happens if we apply this idea to some theories of epistemic justification that are common in the philosophical literature:

  • Foundationalist cryptoepistemology is easy: it involves justifying all beliefs by proof-trees (or DAGs, or string diagrams) terminating in irreducible basic beliefs, which take the form of root certificate authorities.
  • Coherentist cryptoepistemology grounds beliefs in a global system of mutually endorsing identities. Coherentism corresponds to decentralization. Pure coherentism has a big problem, which in philosophy is called the “regress problem” and in crypto is called the “Sybil attack”: what stops an arbitrary belief from presenting itself as part of a globally coherent system, and how can one measure that system’s true size?
    • Weak foundationalism corresponds to a “Web of Trust” scheme, in which certain beliefs are accorded slightly privileged status, such that among all the globally mutually coherent systems, the one which most coheres with those privileged beliefs is selected.
    • Proof-of-work is a radical and relatively recent idea which does not yet have a direct correspondent in philosophy. Here, cryptographic proofs witness the expenditure of resources like physical energy to commit to particular beliefs. In this way, the true scale of the system which agrees on certain beliefs can be judged, with the largest system being the winner.
    • Proof-of-stake is a variation of proof-of-work which is much more subtle to analyze. Here, instead of appealing to cryptographic proofs which directly witness the expenditure of resources, we appeal to beliefs (about identities’ stakes) that are justified by previous global beliefs, thereby pushing the majority of the foundational justification to (the hash of) the genesis block of the proof-of-stake chain. 
  • Reliabilist cryptoepistemology would attempt to select endorsers whose beliefs are “truth-tracking”: across a range of possible worlds, their beliefs tend to correspond to actual ground truth. That this doesn’t really work on its own (how could one possibly know the reliability of an arbitrary cryptographic identity without any privileged source of ground truth?) gives some insight into one major reason why, in philosophy, unaided reliabilist epistemology is unconvincing: reliabilism gives no escape from the need to establish, at least modally, on what basis a statement can be judged as accurate. But reliabilism (and/or its Bayesian cousin) could serve as a very useful extension of a more basic epistemology (one which only covers something like direct observations) towards more tentative inductive or even abductive claims.
    • Correspondence theory has the same problem—correspondence with what? But it’s worse: there is no redeeming extension, since justified beliefs need to directly correspond with reality, rather than being extrapolated from a process that is judged reliable when it comes to claims that can (eventually, sometimes) be more fundamentally justified.
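
To make the foundationalist picture concrete, here is a minimal sketch (all names hypothetical) in which a belief is justified by a chain of endorsements terminating in a trusted root, standing in for a root certificate authority. Real PKI would use digital signatures; bare hashes stand in for them here.

```python
import hashlib

# Toy model: an "endorsement" is a hash binding an endorser's key to a
# subject key and a claim. A real system would use digital signatures.
def endorse(endorser_key: bytes, subject_key: bytes, claim: bytes) -> bytes:
    return hashlib.sha256(endorser_key + subject_key + claim).digest()

# The "basic beliefs": root keys accepted without further justification.
TRUSTED_ROOTS = {b"root-authority-key"}

def justified(chain) -> bool:
    """chain = [(key, claim, tag), ...], ordered leaf-to-root.
    Each link's tag must be an endorsement by the next link's key,
    and the chain must terminate in a trusted root."""
    for (key, claim, tag), (parent_key, _, _) in zip(chain, chain[1:]):
        if tag != endorse(parent_key, key, claim):
            return False
    return chain[-1][0] in TRUSTED_ROOTS
```

The regress stops only because `TRUSTED_ROOTS` is taken as given; that set plays exactly the role of the foundationalist's irreducible basic beliefs.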
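
The proof-of-work idea above can likewise be sketched directly (a toy, not a real consensus protocol): a proof witnesses brute-force hashing spent to commit to a particular belief, while verifying it costs a single hash.

```python
import hashlib

def commit_with_work(belief: bytes, difficulty_bits: int) -> int:
    """Search for a nonce such that SHA-256(belief || nonce) has
    `difficulty_bits` leading zero bits -- evidence that computation
    was expended to commit to this particular belief."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(belief + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_work(belief: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Checking the proof is cheap; producing it is not."""
    digest = hashlib.sha256(belief + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry is the point: anyone can cheaply check how much work backs a claimed belief, so the "true size" of the system committed to it can be judged without trusting any identity in particular.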

Some directions for future work:

  • How best to operationalize "coherence" quantitatively (and why)?
  • How best to aggregate a collection of beliefs from different sources that are sufficiently coherent but disagree somewhat into an overall belief-state?
  • How to incorporate incentives for true beliefs, e.g. Reciprocal Scoring, into this story?
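
As one naive starting point for the first two questions (all names here are hypothetical, not a proposal): measure coherence as mean pairwise agreement over shared propositions, and aggregate by per-proposition majority vote.

```python
from itertools import combinations

# Each source's beliefs are modeled as {proposition: bool}.
def pairwise_agreement(a: dict, b: dict) -> float:
    """Fraction of shared propositions on which two sources agree."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(a[p] == b[p] for p in shared) / len(shared)

def coherence(belief_sets: list) -> float:
    """Mean pairwise agreement across all sources."""
    pairs = list(combinations(belief_sets, 2))
    if not pairs:
        return 1.0
    return sum(pairwise_agreement(a, b) for a, b in pairs) / len(pairs)

def aggregate(belief_sets: list) -> dict:
    """Per-proposition majority vote -- one crude way to merge
    mostly-coherent but partially disagreeing sources."""
    votes = {}
    for bs in belief_sets:
        for p, v in bs.items():
            votes.setdefault(p, []).append(v)
    return {p: sum(vs) > len(vs) / 2 for p, vs in votes.items()}
```

This treats all sources symmetrically, which is exactly where the Sybil problem bites: any serious version would need to weight sources, e.g. by work, stake, or privileged status.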


3 comments

Wanted to note that "proof of work" seems to correspond to "skin in the game," which I'd be willing to bet NN Taleb would claim is an ancient philosophical concept and justification for knowledge.

> Proof-of-work is a radical and relatively recent idea which does not yet have a direct correspondent in philosophy. Here, cryptographic proofs witness the expenditure of resources like physical energy to commit to particular beliefs. In this way, the true scale of the system which agrees on certain beliefs can be judged, with the largest system being the winner.

I think this relates to the notion that constructing convincing falsehoods is more difficult and costly than discovering truths, because (a) the more elaborate a falsehood is, the more likely it is to contradict itself or observed reality, and (b) false information has no instrumental benefit to the person producing it. Therefore, the amount of "work" that's been put into a claim provides some evidence of its truth, even aside from the credibility of the claimant.

Example: If you knew nothing about geography and were given, on the one hand, Tolkien's maps of Middle-Earth, and on the other, a USGS survey of North America, you'd immediately conclude that the latter is more likely to be real, based solely on the level of detail and the amount of work that must've gone into it. We could imagine that Tolkien might get to work drawing a fantasy map even more detailed than the USGS maps, but the amount of work this project would require would vastly outweigh any benefit he might get from it.

> Correspondence theory has the same problem—correspondence with what? But it’s worse: there is no redeeming extension, since justified beliefs need to directly correspond with reality, rather than being extrapolated from a process that is judged reliable when it comes to claims that can (eventually, sometimes) be more fundamentally justified.

Doesn't correspondence theory point out a problem with crypto, rather than crypto pointing out a problem with reality?

Examples:

  • The USG has a monopoly on legitimate violence within its borders. This monopoly is not encoded anywhere on a blockchain or anything. However, that doesn't change the reality that it does have that monopoly.
  • People want to track ownership, truth (for e.g. prediction markets), copyright, etc., using cryptography. However, this faces a number of difficult grounding problems, e.g. there's not really anything that can fully prevent people from cheating at it. But this doesn't change the underlying reality that crypto is supposed to be describing.
  • There's stuff like stablecoins, which are supposed to correspond to fiat currencies, but that doesn't mean they necessarily maintain that correspondence. And if they do fail, it will be the crypto that fails, not reality.
