There are people I would like to interact less with online: perhaps they post inane things, perhaps they pull comment threads off in bad directions, perhaps they make terrible arguments for things I agree with. The standard tools social networks offer for this sort of situation are:

  • Blocking: you can't see their stuff, they can't see your stuff, you can't interact.

  • Hiding: you don't see their stuff, or you see less of it. You can still interact.

I'd like another tool:

  • Lite blocking: as if they'd hidden you.

Social networks generally have far more things they could show you than you'll be able to look at. To prioritize they use inscrutable algorithms that boil down to "we show you the things we predict you're going to like". You can think of hiding as "dramatically lower the prediction that I would like seeing their stuff", and lite blocking as "dramatically lower the prediction that they would like seeing my stuff".

Lite blocking could be symmetrical or not, but the important thing to me is that the network would stop encouraging people to interact with me if I don't want those interactions.
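To make that concrete, here's a minimal sketch of how a feed ranker might apply both adjustments. Everything in it (predicted_affinity, the penalty constants, the hidden and lite_blocked sets) is hypothetical, not any real network's API:

```python
# Hypothetical sketch: hiding and lite blocking as multipliers on the
# model's per-item prediction. Nothing here is a real network's API.

HIDE_PENALTY = 0.05        # viewer hid this author: show far less of them
LITE_BLOCK_PENALTY = 0.05  # author lite-blocked this viewer: as if the
                           # viewer had hidden the author

def rank_feed(viewer, candidates, predicted_affinity):
    """Sort candidate items by adjusted predicted affinity.

    predicted_affinity(viewer, item) -> float in [0, 1]: the model's
    guess at how much the viewer will like the item.
    """
    def adjusted(item):
        score = predicted_affinity(viewer, item)
        if item.author in viewer.hidden:
            score *= HIDE_PENALTY
        if viewer in item.author.lite_blocked:
            score *= LITE_BLOCK_PENALTY
        return score

    return sorted(candidates, key=adjusted, reverse=True)
```

In this asymmetric version only the lite-blocked person's feed changes; a symmetric version would also penalize items from anyone the viewer has lite-blocked, the same way hiding does.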

Perhaps I should just block people? I'm glad blocking exists, and there are times when it's the right tool. But other times it's much too powerful:

  • Blocking is obvious, but given the level of "who knows why the feed decided to show me what it did," lite blocking maintains plausible deniability. This is important for people you want to avoid interacting with but need to stay on good terms with for other reasons.

  • Blocking is too thorough. Maybe I don't like the way you tend to come into threads I host and derail them, but I'd still like you to be able to find our past discussions and reference them if you intentionally seek them out.

  • Groups often have high thresholds for kicking people out, but an in-between level where someone would just see fewer group posts in their feed would be helpful in cases where the moderators think someone's posts are generally making the group worse.

Since lite blocking explicitly overrides the network's prediction about how much someone will like things, you could imagine a change to add it being difficult to get past launch review. It would probably look bad on the core metrics, with a decrease in estimated user satisfaction and engagement. But the metrics don't capture the ways "so and so isn't showing up in my discussions any more" would make others happier and improve their experience using the network.

4 comments

There is no economic reason to optimize for your happiness if you can't easily switch to a competing platform. Maybe unhappy people click more ads, who knows... But for the sake of the thought experiment, let's assume that our corporate overlords are benevolent.

Could this feature be somehow abused? The first idea that comes to my mind: imagine that I hate you and decide to spread some nasty rumors about you. Let's assume that we are already connected as "friends". So I use this feature to lite-block you. Because you wanted plausible deniability, our mutual friends would still see that we are friends, and they will see what I write about you. But you won't see it, so you won't be able to react. And they will assume that you saw it, and could interpret your silence as consent.

In less personal situations, this could be used to make someone look bad by association. Imagine a politician or a celebrity who "friends" tons of people without thinking twice. So I make a hundred fake accounts, use them all to "friend" my target, have all of them lite-block the target, and start posting some sort of bad stuff. Now when anyone else looks at the target, they will think "uh, this guy has a lot of Nazi friends". Only the target will not see anything bad during their everyday use of the platform.

Maybe these are not the most convincing examples, but generally it seems bad to me that the functionality you want messes with another person's view of the network without them being aware of it. That feels like something that can be abused, so we have to consider how a bad actor would abuse it.

I think your two examples of abusing the feature can be more easily and subtly done today:

  • If you hate me and want to spread rumors, you could just share with "friends except jkaufman". To your other friends it looks like a normal post with restricted visibility. You can do this today on FB, either directly or by adding me to your "restricted" list (which is automatically carved out from posts shared to "friends"). You can also just block me and reasonably expect that no one will notice.

  • If you want to make me look bad by association and you can convince me to accept friend requests from your fake accounts, all you have to do is post boring content I won't interact with. The network will quickly decide that these new friends are not interesting to me, and then when they later start posting hateful things I won't notice.

> Social networks generally have far more things they could show you than you'll be able to look at. To prioritize they use inscrutable algorithms that boil down to "we show you the things we predict you're going to like".

Presumably, social networks tend to optimize for metrics like time spent and user retention. (There might even be a causal relationship between this optimization and threads getting derailed.)

Also, this seems like a stable/likely state, because if any single social network unilaterally switched to optimizing for 'showing users things they like' (or any other metric different from the above), competing social networks would plausibly "steal" its users.

Users liking / interacting with things is a strong leading indicator of engagement and time spent, and you get it on a per-item basis. So you use those predictions heavily in deciding what to show people, but tune your model based on larger-scale metrics like time spent.
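As a made-up illustration of that split: per-item predictions drive the ranking directly, while the large-scale metrics only enter offline, when choosing between candidate ways of combining the predictions. All names and numbers below are invented for the sketch.

```python
# Illustrative sketch only; all names and numbers are made up.

def score(item, weights):
    # Per-item leading indicators, available before anyone sees the item.
    return (weights["like"] * item["p_like"]
            + weights["comment"] * item["p_comment"])

def rank(items, weights):
    # Show the highest-scoring items first.
    return sorted(items, key=lambda it: score(it, weights), reverse=True)

# Offline, you'd run experiments over candidate weightings and keep
# whichever wins on the large-scale metrics (time spent, retention);
# that measurement comes from live traffic, not from anything here.
CANDIDATE_WEIGHTS = [
    {"like": 1.0, "comment": 0.5},
    {"like": 0.7, "comment": 1.5},
]
```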