Question: why do plants grow tall?
Another useful element of an answer here is "Quite a lot of them don't." Any explanation for "why plants grow tall" that implies that mosses and water-lilies should be as tall as redwoods is proving too much. A good explanation for plant height needs to predict both that redwoods grow tall and that water-lilies lie flat.
If you believe there is no objective way to compare valence between individuals, then I don't see how you can claim that it's wrong to discount the welfare of red-haired people.
You can call that evil according to your own values, but then someone else can just as easily say that ignoring bee welfare is evil.
I guess you could say "Ignoring red-haired people is evil and ignoring bees isn't evil, because those are my values", but I don't know how you can expect to convince anyone else to agree with your values.
You can call that evil according to your own values, but then someone else can just as easily say that ignoring bee welfare is evil.
If you mean evil according to their values, then sure, this just seems correct. If someone doesn't hold to objective morality then the same is true for every moral question; some are just less controversial than others. And you CAN make arguments like: if you agree with me on moral premise X, then conclusion Y holds.
On some level, yes: it is impossible to critique another person's values as objectively wrong; utility functions in general are not up for grabs.
If person A values bees at zero, and person B values them at equivalent to humans, then person B might well call person A evil, but that in and of itself is a subjective (and let's be honest, social) judgement aimed at person A. When I call people evil, I'm attempting to apply certain internal and social labels onto them in order to help myself and others navigate interactions with them, as well as create better decision theory incentives for people in general.
(Example: calling a businessman who rips off his clients evil, in order to remind oneself and others not to make deals with him, and incentivize him to do that less.
Example: calling a meat-eater evil, to remind oneself and others that this person is liable to harm others when social norms permit it, and incentivize her to stop eating meat.)
However, I think lots of people are amenable to arguments that one's utility function should be more consistent (and therefore lower complexity). This is basically the basis of fairness and empathy as a concept (this is why shrimp welfare campaigners often list a bunch of human-like shrimp behaviours in their campaigns: in order to imply that shrimp are similar to us, and therefore we should care about them).
If someone does agree with this, I can critique their utility function on grounds of it being more or less consistent. For example, if we imagine looking at various mind-states of humans and clustering them somehow, we would see the red-haired-mind-states mixed in with everyone else. Separating them out would be a high-complexity operation.
If we added a bunch of bee mind-states, they would form a separate cluster. Giving some comparison factor is a low-complexity operation: you basically have to choose a real number and then roll with it.
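To make that concrete, here's a toy sketch of the clustering picture (the feature vectors and all the numbers are invented; the only point is structural):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "mind-state" feature vectors: humans form one blob, bees another.
humans     = rng.normal(loc=0.0, scale=1.0, size=(200, 5))
red_haired = humans[:40]                                    # a subset of the human blob
bees       = rng.normal(loc=8.0, scale=1.0, size=(200, 5))  # a far-away blob

human_centre = humans.mean(axis=0)
bee_centre   = bees.mean(axis=0)

def mean_dist(points, centre):
    """Average Euclidean distance from each point to a cluster centre."""
    return np.linalg.norm(points - centre, axis=1).mean()

# Red-haired mind-states sit about as close to the human centre as humans in general,
# so carving them out needs an extra, arbitrary rule (a higher-complexity operation):
print(mean_dist(red_haired, human_centre), mean_dist(humans, human_centre))

# Bees are nowhere near the human centre; the cluster boundary does the work,
# and all that's left to choose is a single exchange-rate number:
print(mean_dist(bees, human_centre), mean_dist(bees, bee_centre))
```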
If there really was a natural way to compare wildly different mental states, which was roughly in line with thinking about my own experiences of the world, then that would be great. But the RP report doesn't supply that.
You're not ultimately limited to utilitarianism: you can use Kantian or Rawlsian arguments to include redheads.
The situation is more complex and less bad than you are making it out to be. For instance, the word "qualia" is an attempt to clarify the word "consciousness", and it does have a stipulated meaning, for all that some people ignore it. The contention about words like "qualia" and "valence" is about whether and how they are real, and that is not a semantic issue. Rationalists have a long-term problem of trying to find objective valence in a physical universe, even though Hume's fork tells you it's not possible.
If the success of a moral theory ultimately grounds out in intuition, it's OK to use unintuitiveness to summarily reject a theory.
CEV is group level relativism, not objectivism.
CEV is group level relativism, not objectivism.
I think Eliezer's attempt at moral realism derives from two things: first, the idea that there is a unique morality which objectively arises from the consistent rational completion of universal human ideals; second, the idea that there are no other intelligent agents around with a morality drive, that could have a different completion. Other possible agents may have their own drives or imperatives, but those should not be regarded as "moralities" - that's the import of the second idea.
This is all strictly phrased in computational terms too, whereas I would say that morality also has a phenomenological dimension, which might serve to further distinguish it from other possible drives or dispositions. It would be interesting to see CEV metaethics developed in that direction, but that would require a specific theory of how consciousness relates to computation, and especially how the morally salient aspects of consciousness relate to moral cognition and decision-making.
Other possible agents may have their own drives or imperatives, but those should not be regarded as “moralities”—that’s the import of the second idea.
He seems to believe that, but I don't see why anyone else should. It's like saying English is the only language, or the Earth is the only planet. If morality is having values, any number of entities could have values. If it's rules for living in groups, ditto. If it's fairness, ditto.
This is all strictly phrased in computational terms too
It's not strictly phrased at all, or particularly computational; it's very hard to follow what he's saying.
I agree that unintuitiveness is a valid reason to reject the theory and the report; that doesn't contradict my comment.
If you believe there is no objective way to compare valence between individuals, then I don't see how you can claim that it's wrong to discount the welfare of red-haired people.
This feels like too strong a claim to me. There are still non-objective ways to compare valence between individuals - J Bostock mentions "anchor(ing) on neuron count".
I guess you could say "Ignoring red-haired people is evil and ignoring bees isn't evil, because those are my values", but I don't know how you can expect to convince anyone else to agree with your values.
I might not strongly agree, but I believe in this direction. I think that humans are generally pretty important and I like human values.
There's always going to be some subjectivity: I think this is good.
This post looks to me like an extreme case of an isolated demand for rigor. People release reports with assumptions in them all the time. People release reports with not 100% rigorously defined terms in them all the time. Ditto for academic papers.
The fact that the authors of the report don't give us any proximate theories of consciousness, unfortunately, damns the whole project to h∄ll, which is where poor technical philosophies go when they make contact with reality (good technical philosophies stick around if they're true, or go to h∃aven if they're false).[3]
You have to provide a full theory of consciousness to release a report about priorities? Really? How is this different from arguing that Ajeya Cotra's report on AI timelines is worthless because it doesn't provide a theory of intelligence? (Not a definition of intelligence, mind you, but a theory?)
Unitarianism smuggles in an assumption of "amount" of valence, but the authors don't define what "amount" means in any way, not even to give competing theories of how to do so.
So the failure to define the word "amount" is the smoking gun? What if the report had added a fourth assumption:
- Quantifiability: valence can be quantified as a scalar number that behaves additively.
Now they have defined what amount means. But they essentially did this anyway by saying that:
There is an objective thing called 'valence' which we can assign to four-volumes of spacetime using a mathematical function (but we're not going to even speculate about the function here)
This essentially already says that they think valence, under the right theory of consciousness, amounts to a number. The implication is that once you have the right ToC, it will tell you how to quantify valence, and then you just add it up using addition.
Obviously you can disagree with these assumptions, and of course many people do. But the post accuses the report of "smuggling in an assumption about amount of valence". How is this smuggling in anything??? It's explicitly listed as an assumption.
I fail to see how this post provides any value beyond just stating that the author disagreed with the conclusions/assumptions of the report, and I think it's highly likely that the only reason it got upvoted is that most other people also disagreed with both the assumptions and the conclusions. I don't see how anyone who does agree with the assumptions could change their mind based on reading this, or how anyone who doesn't like the report couldn't have already explained why before reading this. Please tell me what I'm missing.
You don't really need to read the report to come to this conclusion. Morality / consciousness / valence / qualia are words which don't have widely agreed definitions because they are trying to point at ideas that arise from confused / magical thinking while still maintaining the respectability of analytical philosophy. So any attempt to precisely measure them will inevitably end up looking a bit silly.
I am a bit cautious of dismissing all of those ideas out-of-hand; while I am tempted to agree with you, I don't know of a strong case that these words definitely don't (or even probably don't) point to anything in the real world. Therefore, while I can't see a consistent, useful definition of them, it's still possible that one exists (cf. Free Will, which people often get confused about, but for which there exists a pretty neat solution), so it's not impossible that any given report contains a perfectly satisfying model which explains my own moral intuitions, extends them to arbitrary minds, and then estimates the positions of various animals in mind-space. Unfortunately this report doesn't do that, and therefore I update my priors downwards about any similar reports containing such a solution.
These issues matter not just for human altruism but also for AI value systems. If an AI takeover occurs and if the AI(s) care about the welfare of other beings at all, they will have to make judgements about which entities even have a well-being to care about, and they will also have to make judgements about how to aggregate all these individual welfares (for the purpose of decision-making). Even just from a self-interested perspective, moral relativism is not enough here, because in the event of AI takeover, you the human individual will be on the receiving end of AI decisions. It would be good to have a proposal for an AI value system that is both safe for you the individual and also appealing enough to people in general that it has a chance of actually being implemented.
Meanwhile, the CEV philosophy tilts towards moral objectivism. It is supposed that the human brain implicitly follows some decision procedure specific to our species, that this encompasses what we call moral decisions, and that the true moral ideal of humanity would be found by applying this decision procedure to itself ("our wish if we knew more, thought faster, were more the people we wished we were", etc). It is not beyond imagining that if you took a brain-based value system like PRISM (LW discussion), and "renormalized" it according to a CEV procedure, that it would output a definite standard for comparison and aggregation of different welfares.
This would be a good reason not to let AIs take over!
On a more serious note - I think trying to give AI systems some sort of objective (not from a human perspective) moral framework is impossible to get right and likely to end badly for human values.
It's more worth it to focus on giving AI systems a human-subjective framework. I buy that human values are good & should be preserved.
You seem to be arguing "your theory of moral worth is incomplete, so I don't have to believe it". Which is true. But without presenting a better or even different theory of moral worth, it seems like you're mostly just doing that because you don't want to believe it.
To the extent you are presenting a different theory, your conclusions seem inconsistent with that theory. To summarize: I agree that you can't make objective decisions about moral worth. But you can make objective decisions about the self-consistency of theories. And the "bees are 7-15% as worthwhile as humans" claim is more self-consistent than any alternatives I know of, let alone any that you've presented.
Having said that, I don't like the conclusions either, and I agree that they're not based on a thorough theory of consciousness or other objective basis of moral worth. I'll even admit that I'm going to do essentially the same thing you're doing, and continuing to enjoy honey here and there based on my ability to not believe just because it's inconvenient and I suspect there's something wrong with it. But unlike you, I'm going to admit that I'm being inconsistent by ignoring the best theory on the topic I know of.
Now in a little more detail on why I think you're being inconsistent:
I don't see why you'd say hair color is obviously a pretty bad criteria but judgments about relative worth are pretty much totally arbitrary and aesthetic. I agree that judgments about moral worth are essentially arbitrary and aesthetic, but surely some claims about relative worth are more self-consistent than others (and probably by a lot), just like hair color.
The argument for worth of bees (which I happened to read) seems like it could be taken as exactly an appeal to consistency. Sure, you could say "well I don't care about them because they're bees" but that sounds exactly like the hair color unless it's accompanied by deeper arguments for the disanalogy between bees and humans (assuming you care about other humans; it's perfectly consistent to just not care about humans, it's just a bit harder to make real friends if that's your position).
So I think there are less-wrong answers out there, we just don't have them yet. But the best answer we have thus far is 7-15%, and dismissing that without addressing the arguments for why that's the most consistent position seems to contradict your own stated position that there are more and less consistent arguments.
Separately, on your accusation of bad faith arguments from that side of the aisle:
Your dismissal of people wanting you to read a long blog post as "in bad faith" seems both quite wrong and quite unhelpful (tending to create arguments vs discussion), but I'll assume you write it in good faith.
I won't go into details, but I think it is, in short, bad to assume ill intent when incompetence will do. Tracking what is good and bad epistemics is complicated, so I sincerely doubt that most of the authors asking you to read that blog post are thinking anything like "haha, that will stop them from arguing with me regardless of whether I'm right!". Okay, maybe a little of that thought sometimes - but usually I'd assume it's mostly in good faith, with the thought being "I'm pretty sure I'm right because I've written a much more careful analysis than anyone is bringing to bear against my conclusion. They'd agree if they'd just go read it". Which might not be a good move, but I do take it to be mostly honest.
Just like I take your rather brusque and shallow dismissal of those arguments to be in good faith.
I don't have to present an alternative theory in order to disagree with one I believe to be flawed or based on false premises. If someone gives me a mathematical proof and I identify a mistake, I don't need to present an alternative proof before I'm allowed to ignore it.
But it would be better if you did. And more productive. And admirable.
You just have to clearly draw the distinction between "not X" claim and "Y" claim in your writing.
You get his point tho, right? It's basically this Scott article:
https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/
Like, all of us need to have a position about what we value, because that's what we use to guide our decisions. But all theories of ethics are "flawed": basically they're formulated in natural language, and none of the terms are on very firm mathematical footing.
But you should be very careful with using this as an argument against any specific ethical theory, because that line of reasoning enables you to discount any theory you don't want to believe, even if that theory actually has stronger arguments for it, by your own standards, than what you currently believe.
I think your proof example is not right; a better example is something like this:
I'm a mathematician and do tons of mathematical work. You show me your proof of the Riemann Hypothesis. I can't find any real flaws in it, but I tell you it's based on ZFC, and ZFC is subject to Gödel's incompleteness theorems, and therefore we can't be sure the system you're using to prove RH is even consistent, therefore I ignore your proof. You ask me what to use instead of ZFC, and I tell you "I don't have to present an alternative theory in order to disagree with one I believe to be flawed or based on false premises....", then I leave and continue doing my work in william type theory, which is also subject to GI, which I choose not to think about.
Sure, in the case of severely flawed theories. And you'll have to judge how flawed before you stop believing (or severely downgrade their likelihood if you're thinking in Bayesian terms). I agree that you don't need an alternative theory, and stand corrected.
But rejecting a theory without a better alternative can be suspicious, which is what I was trying to get at.
If you accept some theories with a flaw (like "I believe humans have moral worth even though we don't have a good theory of consciousness") while rejecting others because they have that same flaw, you might expect to be accused of inconsistency, or even motivated reasoning if your choices let you do something rewarding (like continuing to eat delicious honey).
But rejecting a theory without a better alternative can be suspicious
Nah, I still disagree: the set of theories is vast, and one being promoted to my attention is not strong evidence that it is more true than all of those that haven't been. People can separately be hypocritical or inconsistent, but that's something that should be argued for directly.
You seem to be arguing "your theory of moral worth is incomplete, so I don't have to believe it". Which is true. But without presenting a better or even different theory of moral worth, it seems like you're mostly just doing that because you don't want to believe it.
I would overall summarize my views on the numbers in the RP report as "These provide zero information; you should update back to wherever you were before you read them." Of course you can still update on the fact that different animals have complex behaviour, but then you'll have to make the case for "You should consider bees to be morally important because they can count and show social awareness". This is a valid argument! It trades the faux-objectivity of the RP report for the much more useful property of being something that can actually be attacked and defended.
I don't see why you'd say hair color is obviously a pretty bad criteria but judgments about relative worth are pretty much totally arbitrary and aesthetic. I agree that judgments about moral worth are essentially arbitrary and aesthetic, but surely some claims about relative worth are more self-consistent than others (and probably by a lot), just like hair color.
I addressed this in another comment but if you want me to give more thoughts I can.
So I think there are less-wrong answers out there, we just don't have them yet. But the best answer we have thus far is 7-15%, and dismissing that without addressing the arguments for why that's the most consistent position seems to contradict your own stated position that there are more and less consistent arguments
The thing I take issue with is using the RP report as a Schelling point/anchor point that we have to argue away from. When evidence and theory are both scarce, choosing the Schelling point is most of the argument, and I think the RP report gives zero information.
All good points.
I agree that you need an argument for "you should consider bees to be morally important because they can count and show social awareness"; I was filling that argument in. To me it seems intuitive and a reasonable baseline assumption, but it's totally reasonable that it doesn't seem that way to you.
(It's the same argument I make in a comment justifying neuron count as a very rough proxy for moral consideration, in response to Kaj Sotala's related short form. I do suspect that in this case many of bees' cognitive abilities do not correlate with whatever-you-want-to-call-consciousness/sentience in the same way they would in mammals, which is one of the reasons I'll continue eating honey occasionally.)
Agreed that trying to insist on a Schelling or anchor point is bad argumentation without a full justification. How much justification it needs is in the eye of the beholder. It seems reasonable to me for reasons too complex to go into, and reasonable that it doesn't to you, since you don't share those background assumptions/reasoning.
For your second part, whoops! I meant to include a disclaimer that I don't actually think BB is arguing in bad faith, just that his tactics cash out to being pretty similar to lots of people who are, and I don't blame people for being turned off by it.
Thanks, that makes sense.
Perhaps I'm being a bit naive; I've avoided the worst parts of the internet :)
I guess I think of arguing in bad faith as being on a continuum, and mostly resulting from motivated reasoning and not having good theories about what clear/fair argumentation is. I think it's pretty rare for someone's faith to be so bad that they're thinking "I'll lie/cheat to win this argument" - although I'm sure this does happen occasionally. I think most things that look like really bad faith are a product of it being really easy to fool yourself into thinking you're making a good valid argument, particularly if you're moving fast or irritated.
I feel like one better way to think about this topic, rather than just going to the conclusion that there is no objective way to compare individuals, is to continue full-tilt into the evolutionary argument about keeping track of fitness-relevant information, taking it to the point that one's utility function literally becomes fitness.[1][2]
Unlike the unitarian approach, this does seem fairly consistent with a surprising number of human values, given enough reflection on it. For instance, it does value not unduly causing massive amounts of suffering to bees; assuming that such suffering directly affects their ability to perform their functions in ecosystems and the economy, us humans would likely be negatively impacted to some extent. It also seems to endorse cooperation and non-discrimination, as fitness would be negatively impacted by not taking full advantage of specialization and by allowing for others to locally increase their own fitness by throwing our own under the bus.
It also has a fairly nice argument for why we should expect people to have a utility function that looks like this. Any individual with values pointing away from fitness would simply be selected away from the population, naturally selecting for this trait.[3] By this point in human evolution, we should expect most people to at least endorse the outcomes of a decision theory based on this utility function (even if they perhaps wouldn't trust it directly).
Of course, this theory is inherently morally relativist, but I think that given the current environment we live in, this doesn't pose a problem to humans trying to use this. One would have to be careful and methodical enough to consider higher-order consequences, but at least it seems to have a clearer prompt for how one should actually approach problems.
There are some minor issues with this formulation, such as it not directly handling preferences humans have like transhumanism. I think an even more ideal utility function would be something like "the existence of the property that, by its nature, is the easiest to optimize," although I'm not sure of it, given how quickly that descends into fundamental philosophical questions.
Also, if any of you know if there's a more specific name for this version of moral relativism, I would be happy to know! I've been trying to look for it (since it seems rather simple to construct), but I haven't found anything.
Of course, it wouldn't be exact, owing to reliance on the ancestral environment, the computational and informational difficulty of determining fitness, and the unfortunately slow pace of evolution, but it should still be good enough as an approximation for large swaths of System 1 thinking.
A question in the form of an analogy (this is almost certainly not phrased in its strongest form, hopefully it is clear enough and assistance with phrasing and responses to the question are both appreciated):
Lay a sledgehammer on a perfectly stable table (assume it will not rot/rust/etc. for the purpose of this thought experiment). At rest on the table, some amount of force downward is being applied by the sledge. Call the unit of energy this applies daily X.
Alternatively, there exists some velocity and mass combination at which a single swing of the sledgehammer will break the table. Call the minimum unit of energy required to accomplish this feat Y.
There exist many multiples of X that exceed the value of Y. But simultaneously there exists no multiple of X that equals the result of Y. That is, you can leave the sledge on the table for a month, a year, a century, a millennium, even a Graham's number of years, and the table doesn't break. But one application of energy >= Y in a single blow does break the table. A million times X is clearly more energy than Y, but it doesn't break the table, and Y does.
Therefore there exists some threshold below which X is an irrelevant number with respect to Y. It is a relevant number in many other ways, but not in respect to achieving the outcome of a broken table, which requires values equal to or greater than Y.
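A minimal sketch of the threshold structure being described, with placeholder numbers (X, Y and the day count are arbitrary, not physics):

```python
def table_breaks(energy_in_one_blow: float, threshold_Y: float) -> bool:
    """A single event breaks the table iff its energy meets the threshold.
    Sub-threshold events never accumulate into a break, however many there are."""
    return energy_in_one_blow >= threshold_Y

X = 1.0    # energy applied by the resting sledge per day (placeholder unit)
Y = 500.0  # minimum single-blow energy that breaks the table (placeholder)

# A million days of X: the table never breaks, because no single event reaches Y.
print(any(table_breaks(X, Y) for _ in range(10**6)))  # False
# One blow at or above Y: the table breaks.
print(table_breaks(600.0, Y))                         # True
```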
Does such a moral threshold exist?
Because I think what is happening here is that some people are assuming yes and others are assuming no, and both are unable to conceive of the alternative position. And this appears to me to be true regardless of which school of ethics they otherwise subscribe to.
One thing I've been quietly festering about for a year or so is the Rethink Priorities Welfare Range Report. It gets dunked on a lot for its conclusions, and I understand why. The argument deployed by individuals such as Bentham's Bulldog boils down to: "Yes, the welfare of a single bee is worth 7-15% as much as that of a human. Oh, you wish to disagree with me? You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts". Most people who argue like this are doing so in bad faith and should just be ignored. Edit: I meant to include here a point that I don't think Bentham's Bulldog in particular is arguing in bad faith when he cites this, I just think that the effect on the reader is similar to some bad-faith argumentation tactics. I apologize for making the initial post unnecessarily combative.
I'm writing this as an attempt to crystallize what I think are the serious problems with this report, and with its line of thinking in general. I'll start with unitarianism. No, not the church from Unsong. From the report:
- Utilitarianism, according to which you ought to maximize (expected) utility.
- Hedonism, according to which welfare is determined wholly by positively and negatively valenced experiences (roughly, experiences that feel good and bad to the subject).
- Valence symmetry, according to which positively and negatively valenced experiences of equal intensities have symmetrical impacts on welfare.
- Unitarianism, according to which equal amounts of welfare count equally, regardless of whose welfare it is.
Now unitarianism sneaks in a pretty big assumption here when it says 'amount' of welfare. It leaves out what 'amount' actually means. Do RP actually define 'amount' in a satisfying way? No![1]
You can basically skip to "The Fatal Problem" from here, but I want to go over some clarifications first.
I ought to mention that they do mention three theories about the evolutionary function of valenced experience, but these aren't relevant here, since they still don't make claims about what valence actually is. If you think they do, then consider the following three statements:
Firstly, note that these theories aren't at all mutually exclusive and seem to be three ways of looking at the same thing. And none of them give us a way to compare valence between different organisms: for example, if we're looking at fitness-relevant information, there's no principled way to compare +5.2 expected shrimp-grandchildren with +1.5 expected pig-grandchildren.[2]
All of this is fine, since the evolutionary function of valence is a totally different issue to the cognitive representation of valence.
This is called the ultimate cause/proximate cause distinction and crops up all the time in evolutionary biology. An example is this:
Question: why do plants grow tall?
Proximate answer: two hormones (auxins and gibberellins) cause cells to elongate and divide, respectively
Ultimate answer: plants can get more light by growing above their neighbors, so plants which grow taller are favoured
The fact that the authors of the report don't give us any proximate theories of consciousness, unfortunately, damns the whole project to h∄ll, which is where poor technical philosophies go when they make contact with reality (good technical philosophies stick around if they're true, or go to h∃aven if they're false).[3]
If I could summarize my biggest issue with the report, it's this:
Unitarianism smuggles in an assumption of "amount" of valence, but the authors don't define what "amount" means in any way, not even to give competing theories of how to do so.
This, unfortunately, makes the whole thing meaningless. It's all vibes! To reiterate, the central claim being made by the report is:
- There is an objective thing called 'valence' which we can assign to four-volumes of spacetime using a mathematical function (but we're not going to even speculate about the function here)
- Making one human brain happy (as opposed to sad) increases the valence of that human brain by one arbitrary unit per cubic-centimeter-second
- On the same scale, making one bee brain happy (as opposed to sad) increases the valence of that bee brain by fifteen thousand arbitrary units per cubic-centimeter-second
I don't think there's a function I would endorse that behaves in that way.
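To get a rough sense of where a per-volume figure like that comes from, here is a back-of-the-envelope sketch. The welfare ratios are the report's headline range; the brain volumes are rough assumptions of mine, so the exact output moves around by an order of magnitude depending on what you plug in, but it always lands several orders of magnitude above the human baseline of one unit:

```python
# Rough assumptions: human brain ~1,300 cm^3, honeybee brain ~1 mm^3.
# If a happy human brain produces 1 unit of valence per cm^3-second, then a bee
# whose total welfare is 7-15% of a human's must produce this much per cm^3-second:
human_brain_cm3 = 1.3e3
bee_brain_cm3 = 1e-3

for welfare_ratio in (0.07, 0.15):
    density = welfare_ratio * human_brain_cm3 / bee_brain_cm3
    print(f"welfare ratio {welfare_ratio:.2f}: ~{density:,.0f} units per cm^3-second")
```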
Since I've critiqued other people's positions, I should state my own. It's polite:
- I don't think there's an objective way to compare valence between different minds at all. You can anchor on neuron count and I won't criticize you, since that's at least proportional to information content, but that's still an arbitrary choice. You can claim that what you care about is a particular form of self-modelling and discount anything without a sophisticated self-model.[4]
- All choices of moral weighting are somewhat arbitrary. All utilitarian-ish claims about morality are about assigning values to different computations, and there's not an easy way to compare the computations in a human vs a fish vs a shrimp vs a nematode.
- The most reasonable critiques are critiques of the marginal consistency of different worldviews. For example, a worldview which values the computations going on inside all humans except for those with red hair is fairly obviously marginally less consistent than one which makes no reference to hair colour. Whether a worldview values one bee as much as 1 human, 0.07 humans, or 1e-6 humans is primarily a matter of choice and, frankly, aesthetics.
- Just because we're throwing out objectivity, we need not throw out 'good' and 'bad' as judgements on actions or even people. A person who treats gingers badly based on an assumption like the one above can still be said to be evil.[5] How much of the world you write off as evil is also an arbitrary judgement, and do not make that judgement lightly.
What would it even mean to do that? Suppose you were into free-energy-minimization as a form of perceptual control. You could think of the brain as carrying out a series of prediction-update cycles, where each prediction was biased by some welfare-increasing term. Then you could define the total amount of suffering in the universe as the sum over all cycles of the prediction error. You'd end up a negative utilitarian, but you could do it, and it would give you an objective way of comparing between individuals. Even if this particular example is incoherent in some ways, it does at least contain a term which can be compared between individuals.
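Here's a toy version of that hypothetical measure, purely to show it's the kind of thing that could in principle be computed and compared across very different agents; every number and modelling choice below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def total_prediction_error(n_cycles: int, noise_scale: float = 1.0) -> float:
    """Run n_cycles predict-update steps against a drifting signal and return the
    summed absolute prediction error -- the hypothetical 'suffering' of this agent."""
    signal, prediction, total_error = 0.0, 0.0, 0.0
    for _ in range(n_cycles):
        signal += rng.normal(0.0, noise_scale)   # the world changes a little
        total_error += abs(signal - prediction)  # this cycle's prediction error
        prediction = signal                      # update: predict the last observation
    return total_error

# The same quantity is defined for any agent, so the numbers are directly comparable,
# which is all the 'objective comparison' would amount to here.
print(total_prediction_error(n_cycles=10_000))  # a 'human-like' agent with many cycles
print(total_prediction_error(n_cycles=100))     # a 'bee-like' agent with few cycles
```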
Also, consider the normative statements we get if we start talking about moral weight:
Now to me these are incorrect moral statements.
This doesn't actually change the previous statement, but I do find it useful when talking about morality to check every now and then with the question 'What does this imply I should care about?'
I've read chunks of the rest of the report, and it gives me an eyes-glazing-over feeling which I have recently come to recognize as a telltale sign of unclear thinking. Much of it just cites different theories with no real integration of them. I will make an exception for "Does Critical Flicker-Fusion Frequency Track The Subjective Experience of Time?" which raises a very interesting point and is worth a read, at least in part.
I currently think in terms of some combination of the two.
I think there's a Scott Alexander piece which discusses moral disagreements of this form. The conclusion was that some worldviews can be considered evil even if they're in some sense disputes about the world, if they're sufficiently poorly-reasoned.