I'm not sure what your point is here.
Also note that there are Axiology (what things are good/bad?), Morality (what should you do?), and Law (what rules should be made / enforced?). It makes sense to try to figure out what is good, what you should do, and what institution-building activities are necessary.
I think it makes sense to work on these questions; they matter to me, so I see value in someone burning their FLOPs to help me and other people get easy-to-verify deductions. I also agree that the current quality of such work is not that great (including yours).
So, I agree that at some level of abstraction any ought can be rationalized with an is. But at some point, agents need to define meta-strategies for dealing with uncertain situations; for example, the decision theories and thought experiments needed to ground the rational frameworks used to evaluate what any agent should do to maximize expected outcomes, given the utility functions we ascribe that the agent should have with respect to the world.
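To sketch what I mean (this is just the standard expected-utility formulation, with generic symbols $A$, $O$, $P$, $U$ rather than anything specific to this discussion):

$$a^{*} = \underset{a \in A}{\arg\max} \; \sum_{o \in O} P(o \mid a)\, U(o)$$

where $A$ is the set of available actions, $O$ the set of outcomes, $P(o \mid a)$ the agent's credence that action $a$ produces outcome $o$, and $U$ the utility function. The credences $P$ are at least in principle "is"-type inputs, but $U$ has to be supplied from outside the formalism, which is exactly where the ought enters.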
While there is no scientific justification or explanation for value beyond what we ascribe (and thus no ontological basis for morals), we generally agree that reality is self-aware through our conscious experience. And unless everything is fundamentally conscious, or consciousness does not exist, the various loci of subjectivity (however you want to define them) form the rational basis for value calculus. So isn't the debate over what constitutes consciousness, the 'camps' that argue over its definition, and the conclusions we draw from it, exactly what would be used to derive the intended utility recipients of decision frameworks such as CEV? And isn't this a moral philosophy and meta-ethical practice in and of itself? Until that's settled, the Camp #2 framework gives you a taxonomy for the structures to which meta-ethics should be applied (without even importing mysticism), and Camp #1 uses a language that keeps morality ontologically (or at least linguistically) inert.
At some point we adopt and agree on axioms where science does not give us the data to reason, and those should be whatever we agree may have the highest utility. But because they are not determined by experiment beforehand, we can only use counterfactual reasoning to agree on them, and the counterfactual itself ("we ought to have done this because we will do this") becomes equally up for debate.
So, I agree that at some level of abstraction any ought can be rationalized with an is.
This is exactly the problem with is-ought. (Almost) any ought can be backward-reasoned to an is, but it's very hard to determine the causality and necessity of the relationships. The current ises lead to a large set of contradictory and incomplete oughts.
This is an extract from an appendix of one of my longer blog posts that I keep referring to.
What is pain? Why is pain bad?
It's the same trick: we shouldn't ask, "Why is pain negative?" but "Why do we think pain is negative?" Here's the answer, in the form of a genealogy of morals:
What is Ethics?
We can continue the previous story:
And note that I never crossed Hume's guillotine at any point in the story.