This is a brief elaboration of an analogy I've found insightful both for some purely epistemological questions and for some crypto-platform design ideas:
Justification for a belief takes the form of a cryptographic proof.
Here's what happens if we apply this idea to some theories of epistemic justification that are common in the philosophical literature:
- Foundationalist cryptoepistemology is easy: it involves justifying all beliefs by proof-trees (or DAGs, or string diagrams) terminating in irreducible basic beliefs, which take the form of root certificate authorities.
- Coherentist cryptoepistemology grounds beliefs in a global system of mutually endorsing identities. Coherentism corresponds to decentralization. Pure coherentism by itself has a big problem, which in philosophy is called the “regress problem” and in crypto is called the “Sybil attack”: what’s to stop an arbitrary belief from presenting itself as part of a globally coherent system, and how can one measure that system’s true size?
- Weak foundationalism corresponds to a “Web of Trust” scheme, in which certain beliefs are accorded slightly privileged status, such that, among all the globally mutually coherent systems, the one that most coheres with those privileged beliefs is selected.
- Proof-of-work is a radical and relatively recent idea which does not yet have a direct correspondent in philosophy. Here, cryptographic proofs witness the expenditure of resources like physical energy to commit to particular beliefs. In this way, the true scale of the system which agrees on certain beliefs can be judged, with the largest system being the winner.
- Proof-of-stake is a variation of proof-of-work which is much more subtle to analyze. Here, instead of appealing to cryptographic proofs which directly witness the expenditure of resources, we appeal to beliefs (about identities’ stakes) that are justified by previous global beliefs, thereby pushing the majority of the foundational justification to (the hash of) the genesis block of the proof-of-stake chain.
- Reliabilist cryptoepistemology would attempt to select endorsers whose beliefs are “truth-tracking”: across a range of possible worlds, their beliefs tend to correspond to actual ground truth. That this doesn’t really work by itself (how could one possibly know the reliability of an arbitrary cryptographic identity without any privileged source of ground truth?) gives some insight into one major reason why, in philosophy, reliabilist epistemology by itself is unconvincing: reliabilism gives no escape from the need to establish, at least modally, on what basis a statement can be judged as accurate. But reliabilism (and/or its Bayesian cousin) could serve as a very useful extension of a more basic epistemology (which only covers something like direct observations) towards more tentative inductive or even abductive claims.
- Correspondence theory has the same problem (correspondence with what?), but worse: there is no redeeming extension, since justified beliefs must directly correspond with reality, rather than being extrapolated from a process that is judged reliable on claims that can (eventually, sometimes) be more fundamentally justified.
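The foundationalist picture above can be sketched in code. This is a toy model, not a real PKI: a “belief” is justified iff its chain of endorsements terminates in a trusted root, and the signature-checking that a real certificate chain would require is elided. All names here are illustrative.

```python
TRUSTED_ROOTS = {"root-ca"}  # the "irreducible basic beliefs"

# Each entry records who endorsed whom (subject -> issuer).
# A real chain would also carry verifiable signatures; we elide that.
CERTS = {
    "belief":       "intermediate",
    "intermediate": "root-ca",
    "root-ca":      "root-ca",      # self-signed root
}

def justified(subject: str, max_depth: int = 10) -> bool:
    """Walk the issuer chain; justification bottoms out in a trusted root."""
    for _ in range(max_depth):
        if subject in TRUSTED_ROOTS:
            return True
        issuer = CERTS.get(subject)
        if issuer is None or issuer == subject:
            return subject in TRUSTED_ROOTS
        subject = issuer
    return False  # chain too long or cyclic: no foundation reached

print(justified("belief"))  # True
print(justified("orphan"))  # False
```

The `max_depth` cutoff is what makes this foundationalist rather than coherentist: a cycle of mutual endorsement that never reaches a root is rejected.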
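The weak-foundationalist selection rule can likewise be sketched: several internally coherent belief systems compete, and the one that best coheres with a small privileged set wins. The scoring rule here (set overlap) is an illustrative stand-in for any real coherence metric, and the beliefs are made-up labels.

```python
PRIVILEGED = {"sky-is-blue", "water-is-wet"}  # slightly privileged beliefs

# Three rival "globally coherent" systems, one of them Sybil-like.
systems = {
    "A": {"sky-is-blue", "water-is-wet", "earth-is-round"},
    "B": {"sky-is-green", "water-is-wet"},
    "C": {"sky-is-blue", "moon-is-cheese"},
}

def select(systems, privileged):
    """Pick the coherent system with the greatest overlap with privileged beliefs."""
    return max(systems, key=lambda name: len(systems[name] & privileged))

print(select(systems, PRIVILEGED))  # A
```

Note how this answers the Sybil worry only to the degree that the privileged set is trustworthy, which is exactly the “weak foundation”.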
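The proof-of-stake idea of pushing justification back to the genesis block can also be sketched: the next endorser is chosen with probability proportional to stake, from a seed derived from the genesis hash, so that everyone who agrees on the genesis (and the stake table) agrees on the selection. The stake table and names are illustrative, and real protocols use verifiable randomness rather than a plain seeded PRNG.

```python
import hashlib
import random

GENESIS = b"genesis block contents"

# Stakes as justified by previous global beliefs (illustrative numbers).
stakes = {"alice": 60, "bob": 30, "carol": 10}

def select_endorser(stakes: dict, round_number: int) -> str:
    """Deterministically pick an endorser, weighted by stake, seeded from the genesis hash."""
    seed = hashlib.sha256(GENESIS + round_number.to_bytes(8, "big")).digest()
    rng = random.Random(seed)
    names = sorted(stakes)
    weights = [stakes[n] for n in names]
    return rng.choices(names, weights=weights)[0]

# Same genesis, stakes, and round => same endorser, for every participant.
print(select_endorser(stakes, 0) == select_endorser(stakes, 0))  # True
```

The foundational role of the genesis hash is visible in the code: change `GENESIS` and the entire sequence of selections changes with it.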
Some directions for future work:
- How best to operationalize "coherence" quantitatively, and on what grounds?
- How best to aggregate, into an overall belief-state, a collection of beliefs from different sources that are sufficiently coherent but disagree somewhat?
- How to incorporate incentives for true beliefs, e.g. Reciprocal Scoring, into this story?