Jessica Taylor. CS undergrad and Master's at Stanford; former research fellow at MIRI.
I work on decision theory, social epistemology, strategy, naturalized agency, mathematical foundations, decentralized networking systems and applications, theory of mind, and functional programming languages.
Blog: unstableontology.com
Twitter: https://twitter.com/jessi_cata
There might be a confusion. Did you get the impression from my post that I think MIRI was trying to solve philosophy?
I do think other MIRI researchers and I would think of the MIRI problems as philosophical in nature, even though they differ from the usual ones: they're more relevant and worth paying attention to given the mission, and (MIRI believes) they carve philosophical reality at the joints better than the conventional ones.
Whether it's "for the sake of solving philosophical problems or not"... clearly they think they would need to solve a lot of them to do FAI.
EDIT: for more on MIRI philosophy, see deconfusion, free will solution.
It appears Eliezer thinks executable philosophy addresses most philosophical issues worth pursuing:
Most “philosophical issues” worth pursuing can and should be rephrased as subquestions of some primary question about how to design an Artificial Intelligence, even as a matter of philosophy qua philosophy.
"Solving philosophy" is a grander marketing slogan that I don't think was used, but, clearly, executable philosophy is a philosophically ambitious project.
None of what you're talking about is particular to the Sequences. It's a particular synthesis of ideas including reductionism, Bayesianism, VNM, etc. I'm not really sure why the Sequences would be important under your view except as a popularization of pre-existing concepts.
Decision theory itself is relatively narrowly scoped, but application of decision theory is broadly scoped, as it could be applied to practically any decision. Executable philosophy and the Sequences include further aspects beyond decision theory.
No, because it's a physics theory. It is a descriptive theory of physical laws applying to matter and so on. It is not even a theory of how to do science. It is limited to one domain and not expandable to other domains.
...try reading the linked "Executable Philosophy" Arbital page?
Seems like a general issue with Bayesian probabilities? Like, I'm making an argument at >1000:1 odds ratio; it's not meant to be 100%.
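To make the arithmetic concrete (my own illustrative conversion, not part of the original exchange), odds of $1000{:}1$ correspond to a probability of

$$P = \frac{1000}{1000 + 1} \approx 0.999,$$

which still leaves roughly a 0.1% chance of being wrong, i.e. nowhere near literal certainty.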
I see why branch splitting would lead to expecting to be near the end of the universe, but the hypothesis keeps getting strong evidence against it as life goes on. There might be something more like the same number of "branches" running at all times (not sharing computation), plus Bostrom's idea of duplication increasing anthropic measure.
MIRI research topics are philosophical problems, such as decision theory and logical uncertainty. And they would have to solve more; ontology identification is a philosophical problem. Really, how would you imagine doing FAI without solving much of philosophy?
I think the post is pretty clear about why I think it failed. MIRI axed the agent foundations team, and I see very, very few people continuing to work on these problems. Maybe in multiple decades (past many of the relevant people's median superintelligence timelines) some of the problems will get solved, but I don't see "push harder on doing agent foundations" as a thing people are trying to do.