Bentham's Bulldog (LessWrong profile, 447 karma)

Sequences: Hedonism Sequence

Comments (sorted by newest)
It's Better To Save Infinite Shrimp From Torture Than To Save One Person
Bentham's Bulldog · 1mo · karma 1, agreement 0

I mean, it's totally coherent to value a shrimp at an infinitesimal. But that is unintuitive in the ways I describe in the post (it involves an arbitrarily vast gulf before the first generation that's non-infinitesimal with respect to the spectrum argument), and it implies that you should torture 10^10000000 shrimp to prolong a person's life by one second.
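
A minimal sketch of why that implication holds, in my notation rather than anything from the post: write the disvalue of torturing one shrimp as an infinitesimal \(\varepsilon\) and the value of one extra second of human life as a positive real \(r\). Then

\[
n \cdot \varepsilon < r \quad \text{for every finite } n,
\]

so in particular \(10^{10000000} \cdot \varepsilon < r\): torturing any finite number of shrimp is outweighed by a single second of human life.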

sam's Shortform
Bentham's Bulldog · 1mo · karma 5, agreement 0

I've fixed the 97% statistic! I agree that was a stupid error to make. It wasn't from LLMs, and one bit of evidence for this is that the statistic isn't online anymore, so an LLM wouldn't have found it. In fact, I remembered the statistic from an old round of high school debate and emailed someone to find the link. I'd be happy to forward you the correspondence if you're skeptical.

I am quite certain that I did not use LLMs in composing the post. Now, I don't think the mere example of that joke is very convincing evidence that I used LLMs. Would AI really make a Yarvin joke, or a joke about Trump not having neurons? Hard to imagine. As for the use of em-dashes: as you can see if you read my old posts from before the dawn of AI, I used them a lot too! I've also made a similar comment before, in a way that pretty clearly doesn't look AI-generated: https://benthams.substack.com/p/lyman-stone-continues-being-dumb?utm_source=publication-search

"Lyman then repeats his argument about shrimp having few neurons, once again ignoring the abundance of evidence that neuron count is a bad proxy for moral significance. Perhaps the shrimp aren’t the only ones with few neurons…"

For reference, I just asked ChatGPT to write a polemic in my style, and it was not very similar to what you suggest--here it is (it also didn't make much sense):

On the Absurdity of Caring Less About the Future Just Because You’ll Be Dead

Every so often, someone will tell me—usually with the smug air of someone who thinks they’ve just said something deeply profound—that “Well, I care about the future, but not too far into the future. After all, I’ll be gone by then.”

This is supposed to be self-evident wisdom, a mic-drop moment where the utilitarian shuts up, nods sadly, and says, “Yes, of course, how could I have forgotten: once you’re dead, ethics ceases to function.”

But here’s the thing: no, it doesn’t. You can die without taking the moral law with you.

If you think people matter, they keep mattering after you’re gone. If you think suffering is bad, it remains bad even in the year 3000. You don’t get to mark an expiration date on morality like it’s a jug of milk.

Imagine applying this logic in any other domain:

“I oppose slavery in 100 years, but in 200 years? Pfft, who cares—won’t be my problem.”

Or:

“I’d like the cure for cancer to be found in my lifetime, but if it comes a decade after my death, well, frankly, let the tumors win.”

The bizarre thing is that the people who say this aren’t usually sociopaths. They’ll donate to help children they’ll never meet, they’ll praise great reformers who died centuries ago—but as soon as you point to future people they’ll never meet, it’s all “Eh, let them fend for themselves.”

It’s time to call this what it is: a lazy, self-exonerating dodge. The moral circle doesn’t collapse when you die. Your concern for the world shouldn’t come with a tombstone-shaped asterisk. The universe will keep running whether or not you’re around to watch, and the future will be inhabited by beings capable of joy and suffering. That is reason enough to care—no matter how many centuries or millennia away they are.

Because, let’s face it, if morality only applies while you’re alive, you’re not really doing ethics. You’re just doing public relations for your lifespan.

It's Better To Save Infinite Shrimp From Torture Than To Save One Person
Bentham's Bulldog · 1mo · karma 1, agreement -1

You say "by the same reasoning." Can you give me one of the arguments that is the same? None of the premises I give assumes utilitarianism.

Why I Just Took The Giving What We Can Pledge
Bentham's Bulldog · 1mo · karma -1, agreement 0

Right, so you can discount extremely low probabilities. But presumably the odds of insects being conscious--a view held by a large number of experts--aren't low enough to fully discount.

How To Cause Less Suffering While Eating Animals
Bentham's Bulldog · 1mo · karma 2, agreement 0

Yep.

Why I Just Took The Giving What We Can Pledge
Bentham's Bulldog · 2mo · karma -3, agreement 0

I thought you weren't planning on responding!  

If you're going to rely on neuron counts, you should engage with the arguments RP gives against neuron counts, which are, to my mind, very decisive. It's particularly unreasonable to rely on neuron counts in a domain like this, where there's lots of uncertainty. If a model tells you A matters less than B by a factor of 100,000 or something, most of A's expected value relative to B is in possible worlds where the model is wrong. https://www.lesswrong.com/posts/GrtbTAPfkJa4D6jjH/confidence-levels-inside-and-outside-an-argument

To use neuron counts as a proxy even for simple creatures, you have to be extremely confident--north of 99% confident--that the correct proxy assigns only very minimal consciousness to animals. But it's not clear what justifies this overwhelming confidence.

Analogy: if you have a model that predicts aliens being only one millimeter in size, even if you're pretty sure it's right, you shouldn't use it as a proxy for expected alien size, because the overwhelming majority of expected size is in worlds where the model is wrong.  
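
To make that concrete, here's a toy expected-value calculation (the 99% confidence and the 0.1 fallback weight are numbers I'm assuming purely for illustration):

```python
# Toy calculation: expected moral weight of a creature when a model
# says it matters 1/100,000 as much as a human, but the model itself
# might be wrong. All numbers are illustrative assumptions.
p_model_right = 0.99      # confidence in the neuron-count model
weight_if_right = 1e-5    # the model's verdict
weight_if_wrong = 0.1     # a modest fallback guess if the model is wrong

expected = p_model_right * weight_if_right + (1 - p_model_right) * weight_if_wrong
share_from_wrong = (1 - p_model_right) * weight_if_wrong / expected

print(f"expected weight: {expected:.5f}")                        # ~0.00101, about 100x the model's verdict
print(f"share from wrong-model worlds: {share_from_wrong:.2f}")  # ~0.99
```

Even at 99% confidence in the model, roughly 99% of the expected weight comes from the 1% of worlds where the model is wrong.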

Why is the hypothesis that bees are more than insignificantly conscious a highly specific prior with insignificant odds? We know that humans are capable of intense pain. There is some neural architecture that produces intense pain. What gives us immense confidence that this isn't present in animals? Being confident a priori, at a billion to one odds, that insects don't feel intense pain is silly--it's like being confident a priori that insects don't have a visual cortex. It's not as though there's some natural parameterization of the possible physical states that give rise to consciousness on which only a tiny portion of them entail insect consciousness.

As an aside, I think people take the wrong lesson away from the Mark Xu essay. Specific evidence gives a very high Bayes factor. The reason someone saying their name is Mark Xu gets such a high Bayes factor is that Mark Xu is a very specific name--as all specific names are. But a person merely asserting some proposition doesn't get any comparable Bayes factor. For more, see http://www.wall.org/~aron/blog/just-how-certain-can-we-be/
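
A toy illustration of the asymmetry (all the probabilities here are my assumptions for the example, not estimates from the essay):

```python
# Bayes factor = P(evidence | hypothesis) / P(evidence | not hypothesis).
# "My name is Mark Xu": almost nobody who isn't Mark Xu says this.
bf_name = 0.95 / 1e-7     # ~9.5 million; 1e-7 is an assumed rate of false claimants

# A bare assertion of a contested proposition: believers assert it,
# but so do plenty of mistaken people.
bf_assertion = 0.9 / 0.3  # ~3

print(f"name claim: {bf_name:.1e}, bare assertion: {bf_assertion:.0f}")
```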

Also, as I have said several times, it's not about aggregate considerations of moral worth but about the intensity of valenced experience: how intensely they feel pleasure and pain. I think a human's life matters more than seven bees'. Now, once again, it seems insane to me to start with a prior on the order of one in a billion that bees feel pain at least 1/7 as intensely as people. What licenses such great confidence?

My question, if we are going to continue this, is as follows: 

  1. Are you astronomically certain that insects aren't conscious at all, or just not intensely conscious?  
  2. What licenses very high confidence in this?  If it's the alleged specificity of the hypothesis, what is the parameterization on which this takes up a tiny slice of probability space?  

Also happy to have you on the podcast! 

Why I Just Took The Giving What We Can Pledge
Bentham's Bulldog · 2mo · karma -3, agreement -1

Your example of me being obviously wrong is that you have an intuition that the numbers I rely on, from the most detailed report to date, are wrong.  

Size likely correlates somewhat with mental complexity, but not to the degree that it drives our intuitions. The mental gulf between bees and fish is pretty small, while the gulf between bees and fish in terms of our intuitions about consciousness is very large. I was making a general claim about people's sentience intuitions, not yours.

Probably "unreflective" was the wrong word; "direct" would have been better. What I meant was that you weren't relying on any single model, or average of models, or anything of the sort. Instead, you were just directly relying on how conscious animals seemed to you, which, for the reasons I gave, strikes me as an absolutely terrible method.

(I also find it a bit rich that you're acting like my comment is somehow beyond the pale, when I'm responding to a comment of yours that basically amounts to saying my arguments are consistently so idiotic that my posts should be downvoted even when they don't say anything crazy.)

To think insects' expected sentience is very low, you have to be very confident their sentience is low. Such great confidence would require some very compelling argument for why even dramatic behavior isn't indicative of much sentience. Suffice it to say, I don't see an argument like that, and I think there are plenty of reasons to think it's reasonably likely that insects feel intense pain.

Why I Just Took The Giving What We Can Pledge
Bentham's Bulldog · 2mo · karma 4, agreement -1

Can you give an example? I addressed your previous (in my view, quite unpersuasive) objections at some length: https://benthams.substack.com/p/you-cant-tell-how-conscious-animals

Posts

-14 · The Comprehensive Case Against Trump · 1mo · 34 comments
109 · The Bone-Chilling Evil of Factory Farming · 1mo · 11 comments
-11 · It's Better To Save Infinite Shrimp From Torture Than To Save One Person · 1mo · 14 comments
-29 · Why I Just Took The Giving What We Can Pledge · 2mo · 18 comments
11 · How To Cause Less Suffering While Eating Animals · 2mo · 3 comments
-15 · Don't Eat Honey · 2mo · 70 comments
37 · Summary of John Halstead's Book-Length Report on Existential Risks From Climate Change · 3mo · 14 comments
-5 · The Lies of Big Bug · 3mo · 2 comments
-7 · A Very Simple Case For Giving To Shrimp · 3mo · 1 comment
5 · The Unparalleled Awesomeness of Effective Altruism Conferences · 3mo · 0 comments