Richard_Kennaway

There is nothing routine about my dismissal of the text in question. Remember, this is not the work of a writer, skilled or otherwise. It is AI slop (and if the "author" has craftily buried some genuine pearls in the shit, they cannot complain if they go undiscovered).

If you think the part I quoted (or any other part) means something profound, perhaps you could expound your understanding of it. You yourself have written on the unreliability of LLM output, and this text, in the rare moments when it says something concrete, contains confabulations just as flagrant.

The sufficiently skilled writer does not generate foggy texts. Bad writers and current LLMs do so easily.

Oh, I read some of them. It was like listening to Saruman. Or to draw a non-fictional comparison, an Adam Curtis documentary. There is no point in engaging with Saruman. One might as well argue with quicksand.

The We-sphere and the They-sphere each have a philosophy. We in the We-sphere have rationally concluded that our philosophy is right (or it would not be our philosophy). Where Their philosophy is different, it is therefore irrational and wrong. This is proved by listing all the differences between Our philosophy and Theirs. That They adhere to Their wrong views instead of Our true views proves that They are irrational and closed-minded. But We adhere to Our views, which are right, proving Us to have superior rationality.

I doubt the interviewees are doing anything more than reaching for a word to express "badness" and uttering the first that comes to hand.

Answer by Richard_Kennaway

I tried to find some concrete exposition in the paper of what the authors mean by key words such as “organism”, “agent”, and so on, but to me the whole paper is fog. Not AI-generated fog, as far as I can tell, but a human sort of fog, the fog of philosophers.

Then I found this in the last paragraph of section 3:

The problem is that such algorithmic systems have no freedom from immediacy, since all their outputs are determined entirely—even though often in intricate and probabilistic ways—by the inputs of the system. There are no actions that emanate from the historicity of internal organization.

Well, that just sinks it. All the LLMs have bags of “historicity of internal organization”, that being their gigabytes of weights, learned from their training, not to mention the millions of tokens worth of context window that one might call “short-term historicity of internal organization”.

The phrase “historicity of internal organization” seems to be an obfuscated way of saying “memory”.

Utilitarianism is not supposed to be applied like this. It is only a perspective. If you apply it everywhere, then there's a much quicker shortcut: we should kill a healthy person and use this person's organs to save several other people who would otherwise be healthy if not for some organ dysfunction.

And Peter Singer would say yes, yes we should. But only in secret, because of the bad effects there would be if people knew they might be chopped for spares. (Which rather contradicts Singer’s willingness to publish that paper, but you’d have to ask Singer about that.)

Is there some Internet Law that says that however extreme the reductio, there is someone who will bite the bullet?