LESSWRONG

J Bostock

Comments (sorted by newest)
Generalized Hangriness: A Standard Rationalist Stance Toward Emotions
J Bostock · 12h

I don't think this stance is as rare as you think. My partner (who doesn't care for rationalism in general and has never met a rationalist other than (I guess) me) regularly says things like "[general wrath] oh wait my period is starting, that's probably why I'm raging, nevermind" and "have you considered that you're only being depressive about [side project] because [main job] is going badly?".

I will admit that selecting on "people who are in a relationship with me" is a pretty strong filter. Overall I'm hopeful for this social tech to become more common.

(In fact, now that I think about it: were the "you're being hysterical, dear" comments of old actually sometimes a version of this, as opposed to being, as is now often assumed, abhorrent levels of sexism?)

The Asteroid Setup That Demands an Explanation
J Bostock · 2d

OK, so the issue here is that you've switched from a thermodynamic model of the gas atoms near the asteroid to one which ignores temperature at the shell. I'm not going to spend any more time on this: while it is fun, it's not a good use of time.

One of the properties of the second law is that if no single step in your mechanism violates it, then the mechanism as a whole cannot violate it. Since you claim that every step in the process obeys the second law, the entire process must obey it. Even if I can't find the error, I can say with near-certainty that there is one.
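To spell out the composition argument (this is standard thermodynamics, not anything specific to your setup): if step $i$ of the process changes the total entropy by $\Delta S_i \geq 0$, then

```latex
\Delta S_{\text{total}} = \sum_i \Delta S_i \geq 0
```

so a chain of individually-lawful steps cannot add up to a second-law violation.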

The Asteroid Setup That Demands an Explanation
J Bostock · 2d

Why do you think gas will accumulate close to the shell? That is not how gases work: the gas will form an equilibrium density gradient, with zero free energy left to exploit.
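To make the equilibrium claim concrete, here's a toy calculation (my own illustrative numbers, not anything from your setup) of the isothermal Boltzmann density profile a gas relaxes to in a gravitational field:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def density_ratio(m_kg, g, h_m, t_kelvin):
    """Equilibrium gas density at height h relative to h = 0,
    from the barometric/Boltzmann distribution n(h) = n0 * exp(-m*g*h / (k_B*T))."""
    return math.exp(-m_kg * g * h_m / (K_B * t_kelvin))

# A helium atom (~6.6e-27 kg) under Earth-like gravity at 300 K:
# the density falls off smoothly with height rather than the gas
# piling up against any particular surface.
ratio = density_ratio(6.6e-27, 9.8, 8000.0, 300.0)
```

At equilibrium this gradient carries no extractable free energy: the work you could harvest lowering a parcel is exactly the work it cost to be up there in the first place.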

The Asteroid Setup That Demands an Explanation
J Bostock · 2d

Actually I think this fails for ordinary reasons. Key question: how are you getting energy out of lowering the helium? 

  1. If you mean the helium is chemically bound to the sheets (through adsorption), then you'll need to spend energy to release it.
  2. If you mean the helium is trapped in balloons, then it will be neutrally buoyant in the ambient helium atmosphere unless you expend energy to compress it.
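The balloon case can be sanity-checked with a one-line Archimedes computation (a sketch with made-up densities, just to illustrate the neutral-buoyancy point):

```python
def net_force(parcel_density, ambient_density, volume_m3, g=9.8):
    """Buoyant force minus weight on a gas parcel, in newtons."""
    buoyancy = ambient_density * volume_m3 * g  # Archimedes' principle
    weight = parcel_density * volume_m3 * g
    return buoyancy - weight

# A balloon of ambient-density helium inside a helium atmosphere:
# buoyancy exactly cancels weight, so lowering it extracts no net work.
force = net_force(0.17, 0.17, 1.0)
```

Only a parcel denser than its surroundings yields work on descent, and making it denser costs compression energy.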
Applying right-wing frames to AGI (geo)politics
J Bostock · 2d

These are interesting, but I think you're fundamentally misunderstanding how power works in this case. The main question is not "Which intellectual frames work?" but "What drives policy?". For the Democrats in the US, it's often The Groups: a loose conglomeration of dozens to hundreds of think-tanks, nonprofits, and other orgs. These in turn are influenced by various sources, including their own donors but also academia. This lets us imagine a chain of events like:

  1. Serious Academic Papers are published arguing that AGI is an extinction risk
  2. Serious Academic People decide that AGI is an extinction risk
  3. Some of The Groups decide that AGI is an extinction risk
  4. Democrats endorse (purportedly) AGI-risk-reducing policies to get endorsements from The Groups

This might be wrong but it is functional as a theory of change.

What's the Theory of Change for these frames? Going via voters is pretty risky, because politicians often don't actually know what voters want. So who drives policy changes in the Republican party? Lobbyists and donors? Individual political pressure groups like the Freedom Caucus (but then what drives those pressure groups? We'd need a smaller theory of change for them)? I would guess that the most plausible ToC here is to change the minds of politicians directly through (non-monetary) lobbying. If so, have you had any success doing this?

You Can't Objectively Compare Seven Bees to One Human
J Bostock · 3d

I am a bit cautious of dismissing all of those ideas out of hand. While I am tempted to agree with you, I don't know of a strong case that these words definitely (or even probably) don't point to anything in the real world. Therefore, while I can't see a consistent, useful definition of them, it's still possible that one exists (c.f. Free Will, which people often get confused about, but for which a consistent account does exist). So it's not impossible that any given report contains a perfectly satisfying model which explains my own moral intuitions, extends them to arbitrary minds, and then estimates the positions of various animals in mind-space. Unfortunately this report doesn't do that, and therefore I update downwards on any similar reports containing such a solution.

You Can't Objectively Compare Seven Bees to One Human
J Bostock · 3d

You seem to be arguing "your theory of moral worth is incomplete, so I don't have to believe it". Which is true. But without presenting a better or even different theory of moral worth, it seems like you're mostly just doing that because you don't want to believe it.

I would overall summarize my views on the numbers in the RP report as "These provide zero information; you should update back to where you were before you read them." Of course you can still update on the fact that different animals have complex behaviour, but then you'll have to make the case for "You should consider bees to be morally important because they can count and show social awareness". This is a valid argument! It trades the faux-objectivity of the RP report for the much more useful property of being something that can actually be attacked and defended.

"I don't see why you'd say hair color is obviously a pretty bad criterion but judgments about relative worth are pretty much totally arbitrary and aesthetic."

I agree that judgments about moral worth are essentially arbitrary and aesthetic, but surely some claims about relative worth are more self-consistent than others (and probably by a lot), just like claims about hair color.

I addressed this in another comment but if you want me to give more thoughts I can.

"So I think there are less-wrong answers out there, we just don't have them yet. But the best answer we have thus far is 7-15%, and dismissing that without addressing the arguments for why that's the most consistent position seems to contradict your own stated position that there are more and less consistent arguments."

The thing I take issue with is using the RP report as a Schelling point/anchor point that we have to argue away from. When evidence and theory are both scarce, choosing the Schelling point is most of the argument, and I think the RP report gives zero information.

You Can't Objectively Compare Seven Bees to One Human
J Bostock · 3d

For your second part, whoops! I meant to include a disclaimer that I don't actually think BB is arguing in bad faith, just that his tactics cash out to being pretty similar to lots of people who are, and I don't blame people for being turned off by it.

You Can't Objectively Compare Seven Bees to One Human
J Bostock · 4d

On some level, yes: it is impossible to critique another person's values as objectively wrong; utility functions in general are not up for grabs.

If person A values bees at zero, and person B values them at equivalent to humans, then person B might well call person A evil, but that in and of itself is a subjective (and let's be honest, social) judgement aimed at person A. When I call people evil, I'm attempting to apply certain internal and social labels onto them in order to help myself and others navigate interactions with them, as well as create better decision theory incentives for people in general.

(Example: calling a businessman who rips off his clients evil, in order to remind oneself and others not to make deals with him, and incentivize him to do that less.
Example: calling a meat-eater evil, to remind oneself and others that this person is liable to harm others when social norms permit it, and incentivize her to stop eating meat.)


However, I think lots of people are amenable to arguments that one's utility function should be more consistent (and therefore lower complexity). This is basically the basis of fairness and empathy as concepts (and it's why shrimp welfare campaigners often list a bunch of human-like shrimp behaviours in their campaigns: to imply that shrimp are similar to us, and therefore that we should care about them).

If someone does agree with this, I can critique their utility function on grounds of it being more or less consistent. For example, if we imagine looking at various mind-states of humans and clustering them somehow, we would see the red-haired mind-states mixed in with everyone else's. Separating them out would be a high-complexity operation.

If we added a bunch of bee mind-states, they would form a separate cluster. Giving some comparison factor is then a low-complexity operation: you basically have to choose a real number and roll with it.
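As a toy illustration of the complexity claim (entirely invented numbers, standard library only): overlapping human "mind-state" clusters can't be split by a single cutoff, while a disjoint bee cluster can be handled by choosing one number.

```python
import random

random.seed(0)
# One invented scalar feature per mind-state.
red_haired = [random.gauss(0.0, 1.0) for _ in range(100)]
other_humans = [random.gauss(0.0, 1.0) for _ in range(100)]
bees = [random.gauss(10.0, 1.0) for _ in range(100)]

def separable_by_threshold(a, b):
    """True if a single cutoff puts every point of one group past the other."""
    return max(a) < min(b) or max(b) < min(a)

# Red-haired mind-states are mixed in with everyone else's,
# so no single threshold separates them out...
assert not separable_by_threshold(red_haired, other_humans)
# ...while bee mind-states sit in their own cluster, so one
# real number (a threshold / comparison factor) suffices.
assert separable_by_threshold(red_haired + other_humans, bees)
```

Separating the red-haired states would instead need a complicated, gerrymandered boundary, which is the sense in which it's a high-complexity operation.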

If there really was a natural way to compare wildly different mental states, which was roughly in line with thinking about my own experiences of the world, then that would be great. But the RP report doesn't supply that.

If you want to be vegan but you worry about health effects of no meat, consider being vegan except for mussels/oysters
J Bostock · 5d

I did a spot check, since bivalves are filter feeders and so can accumulate contaminants more than you might expect. Mussels and oysters are both pretty low in mercury; hopefully this extends to other contaminants.

Sequences: Dead Ends, Statistical Mechanics, Independent AI Research, Rationality in Research
Posts:
- Jemist's Shortform (4y)
- Demons, Simulators and Gremlins (1d)
- You Can't Objectively Compare Seven Bees to One Human (4d)
- Lurking in the Noise (16d)
- We Need a Baseline for LLM-Aided Experiments (2mo)
- Everything I Know About Semantics I Learned From Music Notation (4mo)
- Turning up the Heat on Deceptively-Misaligned AI (6mo)
- Intranasal mRNA Vaccines? (6mo)
- Linkpost: Look at the Water (6mo)
- What is the alpha in one bit of evidence? [Question] (9mo)
- The Best Lay Argument is not a Simple English Yud Essay (10mo)