LESSWRONG

Jay Bailey

Comments
Contra Shrimp Welfare.
Jay Bailey · 9d

I can see why you might find that frustrating. I think a lot of us, myself included, do think that the science is the most important part of the argument - but we don't understand the science well enough to distinguish true arguments from false ones, in this domain.

I don't have the proper expertise to evaluate your claims about pain receptors properly, but I do have the proper expertise to conclude that SWP calling their opponents "irrational, evil, or both" is bad, and that this correlates with shoddy reasoning everywhere else, including the parts I don't understand. Thus, there's a limit to how much I can update even from an incredibly strong neurological takedown, if that takedown requires knowledge of neurology that I don't have in order to fully appreciate its correctness.

So, in terms of what we care about, think of it less as "How many bits of information should this point be worth" and more "How many bits of information can your audience actually draw from this piece of information?"

Contra Shrimp Welfare.
Jay Bailey · 10d

If the above is true, I think this is really good information that would have been very nice to have cited within the article. That would make me a lot more skeptical of SWP and of their conclusions, and it'd be great to see links for these examples if you could provide them.

Especially this paragraph:

"It just intuitively seems like they are." This is proposed as a rebuttal for critiques of the shrimp welfare project, not very convincing to me, yet they claim that those who don't support them are "irrational, evil or both". I find that making that claim with sparse, scattered and unclear evidence is not great, and paints anyone who opposes their views as a flawed person.

I agree with the value claims in this paragraph completely, so if you have sources for those quotes I think that would be very persuasive to a lot of us here on this site, and it might even be worth a labelled edit to the main post.

Contra Shrimp Welfare.
Jay Bailey · 12d

I think you make some interesting points here, but there are two points I would disagree with:

First is "The Shrimp Welfare Project wants shrimp to suffer so they can have a new problem to solve." This claim is made with no supporting evidence whatsoever. You don't argue for why it might be the case, and show no curiosity about other explanations: the implied reasoning seems to be that because they disagree with you, they must have ulterior, malicious motives. (And knowingly creating a charity that doesn't solve a real problem, just to be able to say you're solving a new one, would be quite unethical!) Why is it so hard to believe that the people who founded SWP did so with the intent of reducing as much suffering as possible, and just happened to be incorrect? What makes you dismiss this hypothesis so completely that the alternative isn't even worth mentioning in your article?

Second is "At best, a shrimp sentium would encode only the most surface-level sensory experience: raw sensation without context, meaning, or emotional depth. Think of the mildest irritation you can imagine, like the persistent squeak of a shopping cart wheel at Walmart."

I don't see how the second sentence follows from the first. When I imagine a migraine, the worst pain I personally have ever experienced (being a rather fortunate individual), it doesn't seem like I am suffering because of the context, meaning, or emotional depth of my pain. I'm suffering because it hurts. A lot. It doesn't seem that complicated. Using your own analysis, it would seem much more principled to treat 860,000 shrimps freezing to death as suffering equivalent to one human experiencing the sensation of freezing to death, not to one human experiencing mild irritation. I say "experiencing the sensation of" because things like being aware of one's own mortality do seem out of reach for a shrimp, so it's not equivalent to a human actually dying, in my view. But freezing to death is likely still quite unpleasant, and not something I'd do for fun. I'd much rather experience a squeaky wheel at Walmart, even if I was fine as soon as I lost consciousness and had no chance of mental trauma from the incident - which I think still matches the shrimp equivalence.

What I still think makes this article interesting is that 550 humans experiencing the sensation of freezing to death across twenty minutes is bad, but not as bad as even one human death, which could be prevented by orders of magnitude less cost than a shrimp stunner. So even despite this article's flaws I still think it's a good article on net and worth engaging with for a proponent of shrimp welfare.

While a reply isn't required, if you are going to engage with only one of these points, I would prefer it be the first one, even though I wrote a lot less about it. The second point doesn't actually change the overall conclusion very much, in my opinion, but the first point is genuinely confusing to me, and makes me less confident about the rest of the article, given the quality of reasoning in that claim.

My talk on AI risks at the National Conservatism conference last week
Jay Bailey · 12d

This is, as I understand it, the correct pronunciation.

life lessons from poker
Jay Bailey · 3mo

I remember learning poker strategy and encountering the idea of wanting to get your entire stack in the middle preflop if you can get paid off with AA - it was a very formative lesson for young me! That said, there's a key insight that goes along with the "pocket aces principle" that is missing here, and that's bankroll management.

In poker, there is standard advice for how much money to have in your bankroll before you sit down at a table at all. E.g., for cash games, it's at least 2000 big blinds (20x the largest stack you can buy in for). This is what allows you to bet all-in on pocket aces - if your entire bankroll is on the table, you should bet more conservatively. The point of bankroll management is to let you make the +EV play of putting your entire stack in the middle without caring about the variance when you lose 20% of the time.

To apply this metaphor to real life, you might say something like "Consider how much you're willing to lose in the event things turn out badly (e.g, a year or two on a startup, six months on a relationship) and then, within that amount, bet the house."
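The bankroll point can be sketched numerically. Here is a minimal toy simulation, assuming (simplistically) that every all-in either wins or loses exactly one buy-in, and using an invented ~80% win rate for the AA confrontation:

```python
import random

def risk_of_ruin(buyins, p_win=0.8, n_bets=200, trials=2000, seed=0):
    """Estimate how often a bankroll of `buyins` buy-ins goes broke while
    repeatedly taking a +EV all-in that wins with probability p_win.
    Each bet is simplified to winning or losing exactly one buy-in."""
    rng = random.Random(seed)
    busts = 0
    for _ in range(trials):
        bank = buyins
        for _ in range(n_bets):
            bank += 1 if rng.random() < p_win else -1  # win or lose one stack
            if bank <= 0:  # broke: can no longer afford a buy-in
                busts += 1
                break
    return busts / trials

# The same +EV bet is ruinous on a thin bankroll and safe on a deep one:
print(risk_of_ruin(buyins=1))   # busts roughly a quarter of the time
print(risk_of_ruin(buyins=20))  # essentially never busts
```

The bet itself is identical in both cases; only the bankroll behind it changes whether taking it repeatedly is survivable.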

Jay Bailey's Shortform
Jay Bailey · 3mo

There's a counterargument to the AGI hype that basically says: of course the labs would want to hype this technology, they make money that way, and just because they say they believe in short timelines doesn't mean it's true. Specifically, the claim here is not that the AI lab CEOs are mistaken, but that they are actively lying, and they know AGI isn't around the corner.

What actions have frontier AI labs taken in the last year or two that wouldn't make sense, given the above explanation? Stuff like GDM's merger or OpenAI (reportedly) operating at a massive loss. Ideally these actions would be reported on by entities other than those companies themselves, in order to help convince skeptics. I've definitely seen stuff like this around but I can't remember where and the search terms are too vague.

I've also tried using Deep Research for this, but it doesn't seem to understand the idea of only looking at actions that are far more likely in the non-hype world than in the hype world - it keeps surfacing things like investor decks projecting high returns, which are entirely compatible with the labs hyping themselves up.
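The filtering criterion can be made concrete as a likelihood-ratio test. A minimal sketch, with all probabilities invented purely for illustration:

```python
import math

def bits_of_evidence(p_if_genuine, p_if_hype):
    """Log-2 likelihood ratio: how many bits an observation shifts you toward
    'the labs genuinely believe short timelines' over 'it's deliberate hype'."""
    return math.log2(p_if_genuine / p_if_hype)

# An investor deck projecting high returns is expected under BOTH hypotheses,
# so it is nearly worthless as evidence (probabilities here are made up):
print(bits_of_evidence(0.9, 0.85))  # ~0.08 bits
# An action much likelier under genuine belief than under pure hype is the
# kind of observation actually worth collecting:
print(bits_of_evidence(0.6, 0.2))   # ~1.58 bits
```

The point is the asymmetry test: an observation only counts if it would look substantially different depending on which world we're in.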

t-cole's Shortform
Jay Bailey · 7mo

Significant Digits is (or was, a few years ago) considered the best one, to my recollection.

Jay Bailey's Shortform
Jay Bailey · 7mo

Here you go: https://chatgpt.com/share/67b31788-32b0-8013-8bbf-a4100abf0457

Jay Bailey's Shortform
Jay Bailey · 7mo

https://chatgpt.com/share/67b1c1dc-e88c-8013-a6f0-88b25155a0d6

Here you are :)

Jay Bailey's Shortform
Jay Bailey · 7mo

I bought a month of Deep Research and am open to running queries if people have a few but don't want to spend 200 bucks for them. Will spend up to 25 queries in total.

A paragraph or two of detail is good - you can send me supporting documents via wnlonvyrlpf@tznvy.pbz (ROT13) if you want. Offer is open publicly or via PM.

Posts

Reflections on my first year of AI safety research (2y)
Features and Adversaries in MemoryDT (2y)
Spreadsheet for 200 Concrete Problems In Interpretability (2y)
Reflections on my 5-month alignment upskilling grant (3y)
Deep Q-Networks Explained (3y)
Jay Bailey's Shortform (3y)