Hello everyone. I'm Ciaran Marshall, an economist by trade. I've been following the rationalist community for a while; the breadth of topics discussed here with rigour is unparalleled. I run a Substack where you can see my work (in particular, I recommend the post on how AI may reduce the efficiency of labour markets, since AI seems to be the most popular topic here): https://open.substack.com/pub/microfounded?utm_source=share&utm_medium=android&r=56swa
For those of you on X, here is my account: https://x.com/microfounded?t=2S5RSGlluRQX3J4SokTtcw&s=09
I was first introduced to the rationality community through reading Richard Hanania's work. Since we share libertarian perspectives on the world, I naturally aimed to satiate my confirmation bias by reading public figures who agreed with me. However, I'm incredibly high (probably top 5%) in openness to experience, so I went on to read whatever was recommended to me. From there I gained a grasp of the core ethos of ACX and LessWrong, and read up on Kahneman and Tversky to learn to identify as many cognitive biases as possible.
To me, the Bayesian epistemological framework makes sense: any empirical study (as all economists are aware, and as most famously stated in SSC's "Beware the Man of One Study") can be "debunked" in the sense that there will always be flaws. The point is not to debunk one side versus another, but rather to ask: what is the probability that this claim is correct, given the available evidence and what we already know (our prior knowledge, or base rates)? This invokes a continuous rather than a discrete notion of 'truth'. So we have a middle ground between frequentism and radical skepticism, which I suspect is the healthiest position: our intellectual discourse is mired in polarisation, disinformation, and mistrust, and in such an environment it's easy either to swallow a flawed argument wholesale or to reject everything. This approach has been shown to work well in superforecasting, which suggests it's a strong candidate for the optimal epistemic framework to deploy.
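As a concrete (and entirely hypothetical) illustration of the kind of update I have in mind, here is a minimal Python sketch: a prior probability that some claim is true is nudged by a single flawed study via its likelihood ratio, rather than flipped to certainty or dismissed outright. The function name and all numbers are invented purely for illustration.

```python
# A minimal sketch of Bayesian updating on one study, with made-up numbers.

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Update P(claim is true) given evidence, using the odds form of Bayes' rule."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical figures: a 30% base rate that claims of this kind hold up, and a
# single study that is 3x more likely to appear if the claim is true than if false.
prior = 0.30
lr_single_study = 3.0
posterior = bayes_update(prior, lr_single_study)
print(f"Posterior after one study: {posterior:.2f}")  # ~0.56, well short of certainty
```

The point of the toy numbers is that one moderately informative study moves the probability from 30% to roughly 56%, not to 0% or 100%.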
So, to summarise: my primary motivation for finally signing up to this community is that I'm eager to learn, and to satisfy my somewhat selfish intellectual curiosity.