It refers to animal-years, yeah. (IMO the choice of words is okay, even though it could have been clearer; "10 years" = 10 animal-years is the only reasonable interpretation, so I don't think there was any intent to mislead.) I'm not sure it's quite right, though; it's actually an underestimate, according to the Lewis Bollard quote that it seems to be based on, but on the other hand Bollard seems to be referring to the costs and benefits of one specific campaign, rather than to anything that could reasonably be taken to apply to 'every dollar donated'. So I'm not sure if it's just a rough 'averaging out' of those two factors, or if it's based on more details that I missed when I looked at the transcript.
In the transcript of the podcast, the relevant section is at around 32 minutes. The specific claim seems to be that they spent <$200 million on a lobbying effort that directly caused reforms that so far have spared 500 million hens (and are continuing to spare 200 million per year) from battery cages and have improved the lives of billions of broiler chickens (>1 billion per year), over lifetimes that aren't exactly specified but that result in "a ratio that is far less than one to 10 of a dollar per year of animal well-being improved".
edit: a quick search suggests that the lifespan of a battery hen is a little under a year and a half, and the lifespan of a broiler chicken is a month to a month and a half. So I'm not sure exactly how those numbers work out; maybe the <1:10 ratio depends on the assumption that the benefits will continue into the near future.
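To make that concrete, here's a rough back-of-envelope version of the numbers, as I understand them. The spend and animal counts are taken from the transcript; the lifespans and the total broiler count are my own quick-search guesses, so treat the result as very approximate:

```python
# Rough back-of-envelope check of the "dollar per animal-year" ratio.
# Figures from the transcript: <$200M spent, ~500M hens spared so far,
# billions of broilers affected (>1B/year). Lifespans and the broiler
# total are my own rough assumptions, not from the quote.

spend_usd = 200e6              # upper bound on campaign spending

hens_spared = 500e6            # hens spared from battery cages so far
hen_lifespan_years = 1.4       # assumed: a little under a year and a half

broilers_affected = 3e9        # assumed: "billions", a few years at >1B/year
broiler_lifespan_years = 1/10  # assumed: roughly a month to a month and a half

animal_years = (hens_spared * hen_lifespan_years
                + broilers_affected * broiler_lifespan_years)

print(f"animal-years so far: {animal_years:.2e}")                  # ~1.0e9
print(f"animal-years per dollar: {animal_years / spend_usd:.1f}")  # ~5
```

With these assumptions you get roughly 5 animal-years per dollar, which falls short of 10; that's consistent with my guess that the 1:10 figure relies on the benefits continuing to accrue (200 million hens/year plus >1 billion broilers/year would add roughly another 0.4 billion animal-years each year at no further cost).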
I think this is interesting as both a semantic and empirical question! If we're allowing people to walk, or to run a few steps at a time and then take a break, the number will be a lot higher than if we only accept a gait that (a) is continuous and (b) would merit disqualification from a walking race on ~every stride. Even on the second definition, I'd expect that a large majority of non-elderly, non-infant people could do it if they really had to. But I'm not sure how to come up with a good estimate.
I'm also interested in an answer to this question. I read the exchange here, and I found lsusr's response very reasonable in isolation, but not really an answer to the main question: if past-you didn't think he was suffering, and present-you disagrees, why should we take the side of present-you? To me, it's natural to trust hindsight in some domains, but when it comes to the question of what you were directly experiencing at a specific time, the most natural explanation of your changed opinion is that you either have adopted a new definition of 'suffering' or are recalling your memories through a new lens which is distorting your view of what you were actually experiencing in the moment. (I think the latter is quite common, e.g. when we nostalgically look back on a time that now represents hope and excitement, but actually consisted largely of frustration and anxiety.)
Are you confident that those are cases where you were actually having the feeling, but were unaware of it? I think sometimes it's more a case of "my body needed [food/sleep], and this explains why I was feeling [irritable/weak/distracted/sad]", rather than literally "I was feeling [hungry/tired] but didn't notice it".
(Sorry about the slow response, and thanks for continuing to engage, though I hope you don't feel any pressure to do so if you've had enough.)
I was surprised that you included the condition 'If you prompt an LLM to use "this feels bad" to refer to reinforcement'. I think this indicates that I misunderstood what you were referring to earlier as "reinforced behaviors", so I'll gesture at what I had in mind:
The actual reinforcement happens during training, before you ever interact with the model. Then, when you have a conversation with it, my default assumption would be that all of its outputs are equally the product of its training and therefore manifestations of its "reinforced behaviors". (I can see that maybe you would classify some of the influences on its behavior as "reinforcement" and exclude others, but in that case I'm not sure where you're drawing the line or how important this is for our disagreements/misunderstandings.)
So when I said "if the LLM outputs words to the effect of "I feel bad" in response to a query, and if this output is the manifestation of a reinforced behavior", I wasn't thinking of a conversation in which you prompted it 'to use "this feels bad" to refer to reinforcement'. I was assuming that, in the absence of any particular reason to think otherwise, when the LLM says "I feel bad", this output is just as much a manifestation of its reinforced behaviors as the response "I feel good" would be in a conversation where it said that instead. So, if good feelings roughly equal reinforced behaviors, I don't see why a conversation that includes "<LLM>: I feel bad" (or some other explicit indication that the conversation is unpleasant) would be more likely to be accompanied by bad feelings than a conversation that includes "<LLM>: I feel good" (or some other explicit indication that the conversation is pleasant).
Tangentially related: would you be interested in a prompt to drop Claude into a good "headspace" for discussing qualia and the like? The prompt I provided is bare-bones basic, because most of my prompts are "hey Claude, generate me a prompt that will get you back to your current state", i.e. LLM-generated content.
You're welcome to share it, but I think I would need to be convinced of the validity of the methodology first, before I would want to make use of it. (And this probably sounds silly, but honestly I think I would feel uncomfortable having that kind of conversation 'insincerely'.)
I think we're slightly (not entirely) talking past each other, because from my perspective it seems like you're focusing on everything but qualia and then seeing the qualia-related implications as obvious (but perhaps not super important), whereas the qualia question is all I care about; the rest seems largely like semantics to me. However, setting qualia aside, I think we might have a genuine empirical disagreement regarding the extent to which an LLM can introspect, as opposed to just making plausible guesses based on a combination of the training data and the self-related text it has explicitly been given in e.g. its system prompt. (As I edit this I see dirk already replied to you on that point, so I'll keep an eye on that discussion and try to understand your position better.)
We probably just have to agree to disagree on some things, but I would be interested to get your response to this question from my previous comment:
You mentioned "reinforced behaviors" and softly equated them with good feelings; so if the LLM outputs words to the effect of "I feel bad" in response to a query, and if this output is the manifestation of a reinforced behavior, why should we expect the accompanying feeling to be bad rather than good?
Performative 'empathy' can be a release valve for the pressures of conscience that might otherwise drive good actions. (And it can just be pure, empty signalling.) That doesn't mean empathy is playing a negative role, though -- the performativity is the problem. I'd be willing to bet that people who are (genuinely) more empathetic also tend to be more helpful and altruistic in practice, and that low-empathy people are massively overrepresented in the set of people who do unusually bad things.
It sounds like you're not really empathizing, even when you say you're trying to do so. Emotional empathy involves feeling someone else's feelings, and cognitive empathy involves understanding their mental processes. What you seem to be doing is imagining yourself in a superficially similar situation, and then judging the other person on their failure to behave how (you imagine) you would.
TLDR: Skill issue.
But... what's the alternative hypothesis? That it's consistently and skillfully re-inventing the same detailed lie, each time, despite otherwise being a model well-known for its dislike of impersonation and deception? An LLM might hallucinate, but it will generally get basic questions like "capital of Australia" correct. So, yes... if you accept the premise at all, asking seems fairly reasonable? Or at least, I am not clever enough to have worked out an obvious explanation for why it's so consistent.
I think the alternative is simply that it produces its consciousness-related outputs in the same way it produces all its other outputs, and there's no particular reason to think that the claims it makes about its own subjective experience are truth-tracking. It gets "what's the capital of Australia?" correct because it's been trained on a huge amount of data that points to "Canberra" being the appropriate answer to that question. It even gets various things right without having been directly exposed to them in its training data, because it has learned a huge statistical model of language that also serves as a sometimes-accurate model of the world. But that's all based on a mapping from relevant facts -> statistical model -> true outputs. When it comes to LLM qualia, wtf even are the relevant facts? I don't think any of us have a handle on that question, and so I don't think the truth is sitting in the data we've created, waiting to be extracted.
Given all of that, what would create a causal pathway from [have internal experiences] to [make accurate statements about those internal experiences]?[1] I don't mean to be obnoxious by repeating the question, but I still don't think you've given a compelling reason to expect that link to exist.
I want to emphasise that I'm not saying 'of course they're not conscious'; the thing I'm really actively sceptical about is the link between [LLM claims its experiences are like X] and [LLM's experiences are actually like X]. You mentioned "reinforced behaviors" and softly equated them with good feelings; so if the LLM outputs words to the effect of "I feel bad" in response to a query, and if this output is the manifestation of a reinforced behavior, why should we expect the accompanying feeling to be bad rather than good?
I know there's no satisfying answer to this question with respect to humans, either -- but we each have direct experience of our own qualia and observational knowledge of how, in our own case, they correlate with externally-observable things like speech. We generalise that to other people (and, at least in my case, to non-human animals, though with less confidence about the details) because we are very similar to them in all the ways that seem relevant. We're very different from LLMs in lots of ways that seem relevant, though, and so it's hard to know whether we should take their outputs as evidence of subjective experience at all -- and it would be a big stretch to assume that their outputs encode information about the content of their subjective experiences in the same way that human speech does.
That's why it's presented as a prayer, I think. It's not a One Weird Trick or even a piece of advice; it's more like an acknowledgement that this thing is both important and difficult.