Richard_Kennaway

surplus physical energy is a wonderful thing.

It is indeed. I imagine the causal connections differently. Strenuous movement cultivates the energy; the body demands food as necessary to refuel. I don't get high energy simply from eating.

One reason is just that eating food is enjoyable. I limit the amount of food I eat to stay within a healthy range, but if I could increase that amount while staying healthy, I could enjoy that excess.

Ah. I eat to sustain myself. Given that I must, I make it reasonably enjoyable, but it’s a chore I’d just as soon do without.

I'm missing something here. Why would I want a bigger liver? I mean, from this account, liver size is obviously something that the body is controlling. You list various interventions to make it bigger, which predictably have bad effects. But why would I want to change something that my body is already managing perfectly well?

The only reason I could find was this:

Athletes have higher resting metabolic rates than non-athletes; their bodies use more energy, even when they’re not exercising. That means they can eat more without getting fat.

Is that it? Why not just[1]...not eat more? These are athletes. They eat to sustain themselves in the pursuit of athletic excellence. They can already "just" not eat more. If they couldn't, they would not be athletes.

I agree there are people, notably Eliezer, who can't "just" not eat more without being as unable to function as if they were starving. I can't see a larger liver burning up more energy helping with that.


  1. If anyone's hackles rise at a sentence beginning "Why not just—", you're quite right. No problem can be solved by "just"...whatever it is. If it could, it would not be a problem. ↩︎

some of which still strikes me as completely unbelievable (like leaving water in the sun to absorb energy)

Ultraviolet disinfection?

Just a speculation, generated by nailing the custom to the wall and seeing what hypothesis accretes around it.

Rather, I am pointing out that #1 is the case. No-one means the words that an AI produces. This is the fundamental reason for my distaste for AI-generated text. Its current low quality is a substantial but secondary issue.

If there is something flagrantly wrong with it, then 2, 3, and 4 come into play, but that won't happen with run-of-the-mill AI slop, unless it were eventually judged so persistently low in quality that a decision were made to discontinue all ungated AI commentary.

You and the LW team are indirectly responsible, but only for the general feature. You are not standing behind each individual statement the AI makes. If the author of the post does not vet it, no-one stands behind it. The LW admins can be involved only in hindsight, if the AI does something particularly egregious.

Both. I do not want to have AI content added to my post without my knowledge or consent.

In fact, thinking further about it, I do not want AI content added to anyone's post without their knowledge or consent, anywhere, not just on LessWrong.

Such content could be seen as just automating what people can do anyway with an LLM open in another window. I've no business trying to stop people doing that. However, someone doing that knows what they are doing. If the stuff pops up automatically amidst the author's original words, will readers be as aware of its source, and grok that the author had nothing to do with it? I do not think that the proposed discreet "AI-generated" label is enough to make it clear that such content is third-party commentary, for which the author carries no responsibility.

But then, who does carry that responsibility? No-one. An AI's words are news from nowhere. No-one's reputation is put on the line by uttering them. For it is written, the fundamental question of rationality is "What do I think I know and how do I think I know it?" But these AI popovers cannot be questioned.

And also, I do not personally want to be running into any writing that AI had a hand in.

(Oh, hey, you're the one who wrote "Please do not use AI to write for you")

I am that person, and continue to be.

I would like to be able to set my defaults so that I never see any of the proposed AI content. Will this be possible?

Against hard barriers of this kind, you can point to arguments like “positing hard barriers of this kind requires saying that there are some very small differences ... that make the crucial difference between [two things]. ... And can epsilon really make that much of a difference?”

Sorites fallacy/argument by the beard/heap paradox/continuum fallacy/fallacy of grey/etc.