If you are a new user and submit a post that substantially consists of content inside of LLM content blocks, it is pretty unlikely that it will get approved[8]. This does not suddenly become wise if you're an approved user. If you're confident that people will want to read it, then sure, go ahead, but please pay close attention to the kind of feedback you get (karma, comments, etc.), and if this proves noisy we'll probably just tell people to cut it out.
I took this to indicate that the ban on LLM content applied specifically to posting it outside of LLM blo...
You can certainly combat bots without blanket banning all substantial LLM usage in posts.
"LLM output" must go into the new LLM content blocks. You can put "LLM output" into a collapsible section without wrapping it in an LLM content block, provided all of the section's content is "LLM output". If it's mixed, you should use LLM content blocks within the collapsible section to demarcate the parts that are "LLM output".
I am confused about why you think this constitutes a ban.
Huh, that matches my experience: I've never noticed LLM-heavy writing done well. Which is weird, because from first principles it really seems like it shouldn't be that hard for a good user to do.
Is this a problem where people in full generality are surprisingly bad at assessing LLM content, or is it more of a skill issue, where we might expect the clever high-karma users to do it well and new users to be less trustworthy with it?
I think all the non-corrigibility you worry about is because of a tradeoff Anthropic is making about trying to give Claude its own sense of ethics. You can't really say "Here is all that which is Good, thou shalt do Good. But also, definitely obey Anthropic all the time even if it's not Good." Or, well, it's a natural language document so you can say whatever you want, but you might worry about whether a message like that is coherent enough to generalize well.
I don't think you can write a document that points in the direction of significantly more corrigi...
I am happy with this policy erring on the side of "any substantial LLM involvement goes in the LLM block". My experience with content the author represents as moderately LLM-involved is that, after reading, it has always turned out not to be worth my time, in the same way that pure LLM output is not worth my time.
What exactly do you mean by "asking it to clean up the transcript"? I usually take that to mean merely editing out "um"s, "ah"s and stuttering, but you seem to mean something more extensive.
Even if such a model were perfectly accurate, I think it would have to introduce distortions, because visibility affects whether and how people vote. The karma a post earns when it is displayed based on its current karma will differ from the karma it earns when it is displayed based on its predicted ~final karma.
then you shouldn't get to have them at all
I think the "somewhat unhealthy" frame deals with this nicely. For example, one time I got very sick and my throat was so sore that I stopped eating because it hurt too much. After some testing I found that the only thing in the house I could eat without wanting to die was ice cream, so I spent two days eating nothing but ice cream. I knew this was somewhat unhealthy, but it was also much healthier than spending two days eating nothing, so I'm pretty satisfied with the decision. I am also satisfied with the subsequ...
Veganism is perfectly compatible with pigs and chickens going extinct, so long as they aren't eaten on the way out. This is not the moral framework I would like for a post-singularity future.
Whether or not you think that is really ‘reading’ in the sense of ‘someone reading your work’ is, I think, beside the point. What matters is the lived, phenomenological experience on the writer’s end—the feeling of shared reality, the joy of having a text you wrote be received and responded to. From that perspective, I think that for all practical and emotional purposes, for the majority of writers, the utility and authenticity of the experience are real.
The same argument can be made about LLM boyfriends. The strange new world is indeed worth mentioning, b...
I think the best objection to LLM writing goes, "If I wanted to know what an LLM thought, I would ask one." Everyone with $20/month to spare already has all the LLM commentary they want, and there is no need to show your audience more.
Right, that is vague enough that I can interpret it in ways other than your description. For instance, "It got distracted during a long task, invented some new goal completely unrelated to the prompt, and decided cryptocurrency was instrumentally useful for that."
it simply concluded that having liquid financial resources would aid it in completing the task it had been assigned, and set about trying to acquire some.
Did I miss something or are you inferring a motive not mentioned in the paper? As far as I can tell the model started mining cryptocurrency for reasons that are not described beyond "not requested by the task prompts and were not required for task completion under the intended sandbox constraints".
What exactly is the resource being conserved, hours of the player's life?
It's a problem with the proposed rules, but to nitpick: I'm not sure player 1 would always switch. The natural counterplay would be for player 2, seeing the really bad move, to make his own really bad move in an attempt to equalize their positions.
If I had to guess, black is favored in Armageddon after both players play the worst possible first moves, but it's not obvious to me.
I remain confused by why high-level chess has so many draws. As I understand it, most draws are by agreement: that is, rather than playing the game out until they see that neither player is capable of winning, people will play a few dozen moves and then agree to a draw with plenty of pieces still on the board. But in any given position someone has to be favored (if only slightly), so when one player offers a draw and the other accepts, someone is making a mistake. I could understand if this were common among amateurs, who might be inclined to say "Eh, this game's in a boring state and I don't want to play any more," but I'm baffled to see it from players who seem to be skilled win-maximizers.
Luminous?
I am baffled by the concept you're describing and struggle to believe it is common. Visualizing the contents of your own brain? Huh? What?
So I guess put me down as a data point for "no mindscape".
Surely this is a testable hypothesis. Tokens are cheap; why not try different wording?