Arjun Panickssery

I think this post is very good (note: I am the author).

Nietzsche is brought up often in different contexts related to ethics, politics, and the best way to live. This post is the best summary on the Internet of his substantive moral theory, as opposed to vague gesturing based on selected quotes. So it's useful for people who

  • are interested in what Nietzsche's arguments actually are, as a result of their secondhand impressions
  • have specific questions like "Why does Nietzsche think that the best people are more important?"
  • want to know whether something can be well-described as "Nietzschean"

It's able to answer questions like this and describe Nietzsche's moral theory concisely because it focuses on his lines of argument and avoids any description of his metaphors or historical narratives: no references are made to the Ubermensch, the Last Man, the "death of God," the blond beast, or other concepts that aren't needed for an analytic account of his theory.

By "calligraphy" do you mean cursive writing?

> So why don't the four states sign a compact to assign all their electoral votes in 2028 and future presidential elections to the winner of the aggregate popular vote in those four states? Would this even be legal?

It would be legal to make an agreement like this (states are authorized to appoint electors and direct their votes however they like; see *Chiafalo v. Washington*), but it's not enforceable in the sense that, if one of the states reneges, the outcome of the presidential election won't be reversed.

Yeah, it's for the bounty. Hanson suggested that a list of links might be preferable to a printed book, at least for now, since he might want to edit the posts.

Brief comments on what's bad about the output:

The instruction is to write an article arguing that AI-generated posts suffer from verbosity, hedging, and unclear trains of thought. But ChatGPT makes that complaint in a single sentence in the first paragraph and then spends six paragraphs adding a bunch of its own arguments:

  1. that the "nature of conversation itself" draws value from "human experience, emotion, and authenticity" that AI content replaces with "a hollow imitation of dialogue"
  2. that AI content creates "an artificial sense of expertise," i.e. that a dumb take can be made to seem smarter than it is
  3. that the option to use AI content discourages posters from "engag[ing] deeply with the topic themselves"

It wasn't instructed to make these arguments, and they aren't really defended as important.

Then, lastly, ChatGPT offers a "solution": that people just disclose when they're using AI assistance. But it's not explained how this actually prevents the problems above, besides (I guess) allowing people to downvote AI content or scrutinize it more. Importantly, this proposed solution wouldn't solve the concern the model was instructed to present.

So in general, the instructions aren't followed and the thinking/argumentation isn't clear.

I added the following instructions to your prompt:

> Be brief and write concise prose in the style of Paul Graham. Don't hedge or repeat yourself or go on tangents.

And the output is still bad, but now mostly for a flaw (also present in your output): ChatGPT can't resist complaining about "human authenticity" and "transparency/trust" when that's not what you're talking about:

> I've noticed a troubling trend on online forums: a surge in posts that clearly seem to be generated by AI. These posts are verbose, meandering, and devoid of real substance. They present every argument with an opposing one, wrapped in so much "on the one hand" and "on the other" that they end up saying nothing at all.
>
> AI in its current form shouldn’t be used for this purpose. The problem isn’t just that these posts waste time, though they do. It’s that they dilute meaningful discourse. When an AI-generated post floods a forum, it clogs the flow of genuine ideas. The writing might seem polished, but it’s hollow, a shell of real thought. And that’s a disservice to any community that values actual discussion.
>
> Forums are places for people to exchange ideas, not for machines to simulate them. Using AI to generate content in this way erodes the authenticity of conversation. It’s a misuse of technology that contributes to the noise rather than to meaningful exchange.
>
> AI can be powerful, but that power should be wielded carefully. Right now, the tools aren’t good enough to replace real human input, and pretending they are does more harm than good.

EDIT: Below is what it looks like if I replace "in the style of Paul Graham" with "in the style of LessWrong":

> AI-generated content in online forums is a growing problem. These tools are designed to mimic human language, but they often fail to deliver meaningful or original contributions. Instead, they produce verbose, meandering prose that tries to cover all bases but ultimately says nothing. This kind of content is worse than useless—it clogs up discussions with noise, making it harder for genuine insights to emerge.
>
> AI, in its current state, lacks the ability to form and defend coherent, novel ideas. It doesn't understand context, nuance, or the importance of taking a clear stance. The result is writing that hedges endlessly, offering a shallow imitation of thoughtful discourse but devoid of real substance. When forums are flooded with such content, it dilutes the value of the platform. It wastes the time of readers who are searching for real engagement and stifles the development of ideas that can only come from genuine human interaction.
>
> AI tools should not be used to generate content for forums, period. They don't contribute to the quality of discussion and instead, undermine it. The promise of AI lies in its ability to augment human capabilities, not replace them in areas where human judgment, creativity, and insight are paramount. Until AI can meaningfully contribute to a discussion, we should resist the temptation to use it in contexts where clarity, originality, and human understanding are crucial.

> Is this word long or short? Only say "long" or "short". The word is: {word}.

Code: https://github.com/ArjunPanickssery/long_short

To test out Cursor for fun, I asked models whether various words of different lengths were "long" and measured the relative probability of "long" vs. "short" answers to get a P(long) out of them. But when I used scrambled words of the same length and letter distribution, GPT-3.5 doesn't think any of them are long.
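For anyone curious how a measurement like this works, here's a minimal sketch using the OpenAI chat API. The prompt template is the one quoted above; the renormalization over just the two answer tokens is my reconstruction, not necessarily what the linked repo does:

```python
import math

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Prompt template from the comment above
PROMPT = 'Is this word long or short? Only say "long" or "short". The word is: {word}.'


def p_long(word: str, model: str = "gpt-3.5-turbo") -> float:
    """Return P("long") renormalized over just the "long"/"short" answers."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(word=word)}],
        max_tokens=1,
        logprobs=True,
        top_logprobs=20,  # API maximum; usually enough to catch both answers
    )
    # Log-probabilities of the top candidate first tokens
    top = response.choices[0].logprobs.content[0].top_logprobs
    probs = {"long": 0.0, "short": 0.0}
    for entry in top:
        token = entry.token.strip().lower()
        if token in probs:
            probs[token] += math.exp(entry.logprob)
    total = probs["long"] + probs["short"]
    return probs["long"] / total if total else float("nan")


print(p_long("immeasurable"))  # a word with a "long" connotation
print(p_long("wee"))           # a word with a "short" connotation
```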

Update: I got Claude to generate many words with connotations related to long ("mile" or "anaconda" or "immeasurable") and short ("wee" or "monosyllabic" or "inconspicuous" or "infinitesimal"). It looks like the models have a slight bias toward the connotation of the word.
