LESSWRONG

rsaarelm


Comments

[Meta] New moderation tools and moderation guidelines
rsaarelm15d40

There's maybe a stronger definition of "vibes" than Rafael's "how it makes the reader feel", that's something like "the mental model of the kind of person who would post a comment with this content, in this context, worded like this". A reader might be violently allergic to eggplants and would then feel nauseous when reading a comment about cooking with eggplants, but it feels obvious it wouldn't then make sense to say the eggplant cooking comment had "bad vibes".

Meanwhile if a poster keeps trying to use esoteric Marxist analysis to show how dolphin telepathy explains UFO phenomena, you might start subconsciously putting the clues together and thinking "isn't this exactly what a crypto-Posadist would be saying". Now we've got vibes. Generally, you build a model, consciously or unconsciously, of what the person is like and why they're writing the things they do, and then "vibes" are the valence of what the model-person feels like to you. "Bad vibes" can then be things like "my model of this person has hidden intentions I don't like", "my model of this person has a style of engagement I find consistently unpleasant" or "my model is that this person is mentally unstable and possibly dangerous to be around".

This is still somewhat subjective, but feels less so than "how the comment makes the reader feel". Building the model of the person based on the text is inexact, but it isn't arbitrary. There generally needs to be something in the text or the overall situation to support model-building, and there's a sense that the models are tracking some kind of reality, even though inferences can go wrong and different people can pay attention to very different things. There's still another complication: different people also disagree on goals or styles of engagement, so they might build the same model and still disagree on its "vibes". Even this isn't completely arbitrary, though; most people tend to agree that the "mentally unstable and possibly dangerous to be around" model has bad vibes.

Said Achmiz's Shortform
rsaarelm21d54

I feel like it's a thing where you should use human moderator judgment once the account isn't new. Figure out how the person is being counterproductive, warn them about it, and if they keep doing the thing, ban them. Ongoing mechanisms like this make sense for something like Reddit where there is basically zero community at this point, but on LW if someone is sufficiently detached from the forum and community that it actually makes sense to apply a mechanical paper cut like the rate limit on them after years of them being on site and accumulating positive karma, they probably shouldn't be here to begin with.

The basic problem is that it's not treating the person as a person, like a human moderator actually talking to them and going "hey, we think you're not helping here, here's why ... in the future could you ..." (and then proceeding to a ban if there's no improvement) would be. People occasionally respond well to moderator feedback, but being hit by the rate limiter robot is pretty likely to piss off just about any invested and competent person, and might also make them go "cool, then I'll treat your thing as less of a community made of people and more like a video game to beat on my end as well", which makes it less likely for things to improve in the future.

Said Achmiz's Shortform
rsaarelm22d105

How does it make sense to just run the rate limiter robot equally on everyone, no matter how old their account is or how much total karma they have? It might make sense for new users, as a crude mechanism to make them learn the ropes or find out the forum isn't a good fit for them. But presumably you want long-term commenters with large net-positive karma to stay around and not be annoyed by the site UI by default.

A long-term commenter suddenly spewing actual nonsense, the one case where rate-limiting them does make sense, sounds more like an ongoing psychotic break, in which case a human moderator should probably intervene. Alternatively, if they're getting downvoted a lot, it might mean they're engaged in some actually interesting discussion with lots of disagreement going around, and you should just let them tough it out, treating the votes as a signal of sentiment instead of a signal of quality like you do for the new accounts.

sunwillrise's Shortform
rsaarelm24d150

Out of curiosity, what evidence would change your mind?

This one seems pretty easy. If multiple notable past contributors speak out themselves and say that they stopped contributing to LW because of individual persistently annoying commenters, naming Said as one of them, that would be pretty clear evidence. Also socially awkward of course. But the general mindset of old-school internet forum discourse is that stuff people say publicly under their own accounts exists and claimed backchannel communications are shit someone made up to win an argument.

Said Achmiz's Shortform
rsaarelm1mo5-8

Just gonna chime in that I agree with Said here about this not just being a two-way thing but a question of what the audience gets to see as well. I think his comments on your posts are valuable, and banning him makes things worse as far as I'm concerned.

How Epistemic Collapse Looks from Inside
rsaarelm1mo10

It might be that they were one novel thing he could both discern as a specific thing and pretty much completely understand the purpose of once he started paying attention to them. Just about everything in a modern city is an unfamiliar thing tied to a large context of other unfamiliar things, so you'll just zone out when you're missing the context, but stairs and carpets are pretty much just stairs and carpets.

October The First Is Too Late
rsaarelm2mo22

“Clarity didn’t work, trying mysterianism” is the title of a short story by Scott Alexander

Was it the title? I always thought Scott used the phrase as commentary on why he was posting the story, same as gwern is doing here. As in, he tried to clearly say "an omnipresent personal AI agent that observes your life and directly tells you the best way to act in every situation you encounter would be a bad thing because building up your own mind into being able to overcome challenging situations is necessary for a meaningful life", people didn't buy it, and then he went "okay, let's try this untitled short story to illustrate the idea".

For this gwern thing though, I've no idea what the failed preceding non-mysterian attempt was.

Eukryt Wrts Blg
rsaarelm2mo21

I haven’t looked into this, but I’m guessing the IQ results are from some form of language barrier?

Many people have tried very hard to find explanations for the IQ results that are something other than "low intelligence" for decades. If a replicating result that provides such an explanation had been established, it would have been broadly publicized in popular media and even laymen would know about it. Instead, we're being told we are not supposed to look into this topic at all.

Kodo and Din
rsaarelm2mo30

It seems like the neologism is mostly capturing the meaning of signal from Shannon's information theory (which "signal and noise" points towards anyway), where you frame things by having yes/no questions you want to have answered and observations that answer your questions are signals and observations that do not are noise. So if you need to disambiguate, "signal (in the information-theoretic sense)" could be a way to say it.
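As an aside, the yes/no-question framing above can be made concrete: an observation is "signal" for your question exactly when it carries mutual information about the answer, and "noise" when it carries none. This is my own sketch, not part of the original comment; the function and variable names (`mutual_information`, `obs_a`, `obs_b`) are made up for illustration.

```python
# Sketch: signal vs. noise as mutual information with a yes/no question.
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Estimate I(X; Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in joint.items()
    )

# Question: "is the coin biased towards heads?" (answer: yes/no).
answers = ["yes", "yes", "no", "no"] * 25
obs_a = answers[:]            # tracks the answer perfectly: 1 bit -> signal
obs_b = ["up", "down"] * 50   # independent of the answer: 0 bits -> noise

print(mutual_information(list(zip(obs_a, answers))))  # 1.0
print(mutual_information(list(zip(obs_b, answers))))  # 0.0
```

An observation with zero mutual information about every question you care about is pure noise in this framing, no matter how elaborate it is.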

Programming Language Early Funding?
rsaarelm5mo21

The dark age might have gotten darker recently. Everyone's scrabbling around trying to figure out what AI will mean for programming as a profession going forward, and AI mostly only boosts established languages it has large corpora of working code for.

I've been following the Rust project for the last decade and have been impressed at just how much peripheral scutwork contributes to making the language and ecosystem feel solid. This stuff is a huge undertaking. I'm not terribly excited any more about incremental improvement languages. They seem to be mostly a question of not making crippling foundational design errors (hello C++) and expending enormous engineering effort on tooling and libraries. What might be more interesting is something that targets a specific niche and does something very cool for exactly that niche (Inform 7, dhall), or languages that go for a shoot-for-the-moon paradigm shift like Unison or the "we need to make large programs writable in 100x smaller codebases" from Alan Kay's VPRI research team.
