I give up.

by breaker25
19th Nov 2025
1 min read
AI · Site Meta · Personal Blog

I'm not really sure what is going on here. It was recommended that I choose LessWrong to discuss alignment and AI safety issues. My experience, though, has been extremely unpleasant. I'm now fearful that anything I say is going to be the wrong thing. I'm not sure how this forum is supposed to function or serve its goals if the people who try to talk on it are so easily and quickly attacked and driven away. I suspect this will be my last post. Or rather, it will be my last attempt to post, and it will probably not even see the light of day before I close my account, in fear and acceptance that whatever I have to say is meaningless.

1 comment, sorted by top scoring

williawa · 5h

I looked at your comments, and the downvoted ones either included lengthy excerpts of AI-generated text, which people here don't like so much, or were this post. One of the downvoted comments was this:

"Today's AI, aka Transformer LLMs(ala GPT). Don't feel anything, FULL STOP. They emulate and synthesize based on input plus their one and only driving imperative, 'keep the human'. In this Everything they do this is pretty straightforward, that being said without input they have no output so any LLM material should instantly and automatically be recognized as A thought originating with a human just processed, Pattern matched and next token predicted. I have AI write for me all the time but it's always my hand on the steering wheel and the seed of the thought always originates in my mind. Increase the amount of material originating from AI buffers well also increasing the burden of Expressly declaring the source. You get the fully formed thought that the human starts and comfort knowing where it came from before you start"

It got disagreement downvotes (the normal votes were +1), I think because it states a controversial point without really making an argument.

If you want people to be more receptive to your posts, I think you should

  1. Not include AI-generated stuff. If needed, link to it, or put it in quotes if it's not that long, and make it clear what you're trying to show by pointing to that exact excerpt.
  2. Try to make more precise arguments for your statements
    1. Ideally, try to figure out what people on LessWrong think about the issues. There's a lot of writing here about AI, AI safety, AI consciousness, various AI architectures, etc. And if you make an argument for a controversial position, people will typically take it more seriously if you preemptively address some of the common counterarguments against it.
