OK, so the issue here is that you've switched from a thermodynamic model of the gas atoms near the asteroid to one which ignores temperature at the shell. I'm not going to spend any more time on this because while it is fun, it's not a good use of time.
One of the properties of the second law is that if no single step in your mechanism violates it, then the mechanism as a whole cannot violate it. Since you claim that every step in the process obeys the second law, the entire process must obey the second law. Even if I can't find the error, I can say with near-certainty that there is one.
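To spell out the bookkeeping (a sketch, assuming the cycle can be decomposed into discrete steps, each with its own entropy change $\Delta S_i$):

$$\Delta S_{\text{total}} = \sum_i \Delta S_i \ge 0 \quad \text{whenever every } \Delta S_i \ge 0,$$

so a cycle built entirely out of second-law-compliant steps cannot end with less total entropy than it started with, and a cyclic process that extracts net work from a single reservoir would need exactly that.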
Why do you think gas will accumulate close to the shell? That's not how gases work: the gas will settle into an equilibrium density gradient, with zero free energy left to exploit.
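Concretely (a sketch, assuming an isothermal ideal gas at temperature $T$ sitting in the asteroid's gravitational potential $\phi(r)$; the symbols are just the standard ones, not anything from your setup): the equilibrium density follows the Boltzmann profile

$$n(r) = n_0 \, e^{-m\,\phi(r)/k_B T},$$

denser where the potential is deeper and thinner further out. That gradient is the equilibrium state itself; there is no residual gradient left over to run anything off.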
Actually I think this fails for ordinary reasons. Key question: how are you getting energy out of lowering the helium?
These are interesting, but I think you're fundamentally misunderstanding how power works in this case. The main question is not "Which intellectual frames work?" but "What drives policy?" For the Democrats in the US, it's often The Groups: a loose conglomeration of dozens to hundreds of different think-tanks, nonprofits, and other orgs. These in turn are influenced by various sources, including their own donors but also academia. This lets us imagine a chain of events like:
This might be wrong but it is functional as a theory of change.
What's the Theory of Change for these frames? Going via voters is pretty risky, because politicians often don't actually know what voters want. So who drives policy changes in the Republican party? Lobbyists and Donors? Individual political pressure groups like the Freedom Caucus (but then what drives those pressure groups? We'd need a smaller theory of change for them)? I would guess that the most plausible ToC here is to change the minds of politicians directly through (non-monetary) lobbying. If so, have you had any success doing this?
I am a bit cautious about dismissing all of those ideas out of hand; while I am tempted to agree with you, I don't know of a strong case that these words definitely don't (or even probably don't) point to anything in the real world. Therefore, while I can't see a consistent, useful definition of them, it's still possible that one exists (cf. Free Will, which people often get confused about, but for which a satisfying account does exist), so it's not impossible that any given report contains a perfectly satisfying model which explains my own moral intuitions, extends them to arbitrary minds, and then estimates the positions of various animals in mind-space. Unfortunately this report doesn't do that, and so I update downwards on any similar report containing such a solution.
You seem to be arguing "your theory of moral worth is incomplete, so I don't have to believe it". Which is true. But without presenting a better or even different theory of moral worth, it seems like you're mostly just doing that because you don't want to believe it.
I would overall summarize my views on the numbers in the RP report as "These provide zero information; you should end up wherever you were before you read them." Of course you can still update on the fact that different animals have complex behaviour, but then you'll have to make the case for "You should consider bees to be morally important because they can count and show social awareness". This is a valid argument! It trades the faux-objectivity of the RP report for the much more useful property of being something that can actually be attacked and defended.
I don't see how you can say hair color is obviously a pretty bad criterion while also saying judgments about relative worth are pretty much totally arbitrary and aesthetic. I agree that judgments about moral worth are essentially arbitrary and aesthetic, but surely some claims about relative worth are more self-consistent than others (and probably by a lot), just as with hair color.
I addressed this in another comment but if you want me to give more thoughts I can.
So I think there are less-wrong answers out there; we just don't have them yet. But the best answer we have thus far is 7-15%, and dismissing that without addressing the arguments for why that's the most consistent position seems to contradict your own stated position that there are more and less consistent arguments.
The thing I take issue with is using the RP report as a Schelling point/anchor point that we have to argue away from. When evidence and theory are both scarce, choosing the Schelling point is most of the argument, and I think the RP report gives zero information.
For your second part, whoops! I meant to include a disclaimer that I don't actually think BB is arguing in bad faith, just that his tactics cash out to being pretty similar to lots of people who are, and I don't blame people for being turned off by it.
On some level, yes: it is impossible to critique another person's values as objectively wrong; utility functions in general are not up for grabs.
If person A values bees at zero, and person B values them at equivalent to humans, then person B might well call person A evil, but that in and of itself is a subjective (and let's be honest, social) judgement aimed at person A. When I call people evil, I'm attempting to apply certain internal and social labels onto them in order to help myself and others navigate interactions with them, as well as create better decision theory incentives for people in general.
(Example: calling a businessman who rips off his clients evil, in order to remind oneself and others not to make deals with him, and incentivize him to do that less.
Example: calling a meat-eater evil, to remind oneself and others that this person is liable to harm others when social norms permit it, and incentivize her to stop eating meat.)
However, I think lots of people are amenable to arguments that one's utility function should be more consistent (and therefore lower complexity). This is essentially where fairness and empathy come from as concepts (it's why shrimp welfare campaigners often list a bunch of human-like shrimp behaviours in their campaigns: to imply that shrimp are similar to us, and therefore that we should care about them).
If someone does agree with this, I can critique their utility function on the grounds of it being more or less consistent. For example, if we imagine looking at various mind-states of humans and clustering them somehow, we would see the red-haired mind-states mixed in with everyone else's. Separating them out would be a high-complexity operation.
If we added a bunch of bee mind-states, they would form a separate cluster. Giving some comparison factor would be a low-complexity operation: you basically have to choose a real number and then roll with it.
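As a toy illustration of the clustering picture (completely synthetic data, not a claim about real mind-states; the feature dimensions and numbers below are arbitrary), a sketch in Python might look like:

```python
# Toy illustration only: completely synthetic "mind-state" vectors with made-up features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 500 "human" mind-states: one shared blob in a 10-dimensional feature space.
humans = rng.normal(loc=0.0, scale=1.0, size=(500, 10))
red_haired = rng.random(500) < 0.1   # hair colour is independent of every feature

# 100 "bee" mind-states: a well-separated blob far away in the same space.
bees = rng.normal(loc=8.0, scale=1.0, size=(100, 10))

X = np.vstack([humans, bees])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

human_labels, bee_labels = labels[:500], labels[500:]

# Low-complexity split: the 2-way clustering isolates the bees almost perfectly.
print("bees in a single cluster:", (bee_labels == bee_labels[0]).mean())

# High-complexity split: red-haired humans sit inside the main human cluster,
# so no simple clustering carves them out on its own.
majority_human_cluster = np.bincount(human_labels).argmax()
print("red-haired humans inside the main human cluster:",
      (human_labels[red_haired] == majority_human_cluster).mean())
```

The point is just the asymmetry: carving out "bees" costs roughly one extra parameter (the comparison factor), whereas carving out "red-haired humans" cuts across the natural structure of the data.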
If there really was a natural way to compare wildly different mental states, which was roughly in line with thinking about my own experiences of the world, then that would be great. But the RP report doesn't supply that.
I did a spot check, since bivalves are filter feeders and so can accumulate contaminants more than you might expect. Mussels and oysters are both pretty low in mercury; hopefully this extends to other contaminants.
I don't think this stance is as rare as you think. My partner (who doesn't care for rationalism in general and has never met a rationalist other than (I guess) me) regularly says things like "[general wrath] oh wait my period is starting, that's probably why I'm raging, nevermind" and "have you considered that you're only being depressive about [side project] because [main job] is going badly?".
I will admit that selecting on "people who are in a relationship with me" is a pretty strong filter. Overall I'm hopeful for this social tech to become more common.
(In fact, now that I think about it, were the "you're being hysterical, dear" comments of old actually sometimes a version of this, as opposed to being, as is often now assumed, abhorrent levels of sexism?)