The silence is deafening – Devon Zuegel

by Ben Pace · 1 min read · 4th Jul 2020 · 12 comments


Site Meta · World Modeling · Frontpage

Imagine you're at a dinner party, and you're getting into a heated argument. As you start yelling, the other people quickly hush their voices and start glaring at you. None of the onlookers have to take further action—it's clear from their facial expressions that you're being a jerk.

In digital conversations, giving feedback requires more conscious effort. Silence is the default. Participants only get feedback from people who join the fray. They receive no signal about how the silent onlookers perceive their dialogue. In fact, they don't receive much signal that onlookers observed the conversation at all.

As a result, the feedback you do receive in digital conversations is more polarized, because the only people who will engage are those who are willing to take that extra step and bear that cost of wading into a messy conversation.

 

It's a great post, and has a really solid UI idea in the footnotes.

One idea I'd really like to see platforms like Twitter or Reddit try is to provide a mechanism for low-friction, private, negative feedback. For example, you could imagine offering a button where you can downvote or thumbs-down content (i.e. the opposite of a Like), but the count is only visible to the OP and not to anyone else.

The LW team has been thinking about building private responses like this for a while, but in comment form. Buttons that give more constrained private info are very interesting...
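To make the footnote's idea concrete, here is a minimal sketch of how a private downvote tally might be stored and revealed only to the original poster. This is purely illustrative: the class, method, and field names are invented for this sketch and are not any platform's actual API.

```python
from collections import defaultdict


class PrivateDownvotes:
    """Tracks downvotes per post; the tally is revealed only to the post's author."""

    def __init__(self):
        self._downvoters = defaultdict(set)  # post_id -> set of user_ids who downvoted
        self._authors = {}                   # post_id -> author user_id

    def register_post(self, post_id, author_id):
        self._authors[post_id] = author_id

    def downvote(self, post_id, voter_id):
        # A set makes the action idempotent: one private downvote per user per post.
        self._downvoters[post_id].add(voter_id)

    def count_for(self, post_id, viewer_id):
        # Only the original poster ever sees a number; everyone else sees nothing.
        if viewer_id == self._authors.get(post_id):
            return len(self._downvoters[post_id])
        return None
```

The key design choice is that the count simply doesn't exist for non-authors, rather than being hidden client-side, so the signal stays low-friction for voters and private for everyone else.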

12 comments

What about trolls? What about pile-ons?

Trolls: some people are not upset by negative feedback, or even actively seek it out. I think this could be structured so that the negative feedback would not be rewarding to such people, but it merits consideration, because backfire is at least in principle possible.

Pile-ons: There are documented cases of organized downvote brigades on various platforms, who effectively suppress speech simply because they disagree with it. Now, I wouldn't object to a brigade of mathematicians on a proof wiki downvoting any pages they disagreed with and thereby censoring those pages or driving away their authors; but in most other cases, I think such brigades would be a problem. Again, you might be able to design a version that successfully discouraged such brigades (for instance: have "number of downvotes", "correlation with average downvoter", and "correlation with most-similar downvoter" all visible in someone's profile?), but it merits thought.
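A rough sketch of how those three profile metrics might be computed, assuming each user's voting history is available as a vector of -1/0/+1 votes over a shared set of items; the function name and data layout are hypothetical, chosen just to make the suggestion concrete.

```python
import numpy as np


def downvote_profile_stats(votes, user_id):
    """votes: dict mapping user_id -> np.array of votes (-1, 0, +1) over the same items.

    Returns the three numbers the comment suggests showing on a profile:
    total downvotes, correlation with the average downvoter, and
    correlation with the single most-similar other downvoter.
    Assumes at least two users have voting histories.
    """
    mine = votes[user_id]
    others = [v for uid, v in votes.items() if uid != user_id]
    avg_other = np.mean(others, axis=0)

    def corr(a, b):
        # Guard against constant vectors, where correlation is undefined.
        if np.std(a) == 0 or np.std(b) == 0:
            return 0.0
        return float(np.corrcoef(a, b)[0, 1])

    n_downvotes = int(np.sum(mine == -1))
    corr_with_avg = corr(mine, avg_other)
    corr_with_closest = max(corr(mine, v) for v in others)
    return n_downvotes, corr_with_avg, corr_with_closest
```

A brigade member would show up as someone with many downvotes whose votes correlate strongly with a specific other account, which is exactly the pattern the profile display is meant to surface.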

I'm sure you could think of a dozen solutions to fill this out into a well-defined system if you spent 5 minutes thinking about it.

Zuegel's point is that you want some people to be able to express implicit or tacit disapproval in a less legible way than leaving a public criticism. To continue the dinner party analogy: you don't go to a dinner party with 10 people chosen at random from billions of people; they are your friends, relatives, coworkers, people you look up to, famous people, etc. A look of disapproval or a conspicuous silence from them is very different from context collapse causing a bunch of Twitter bluechecks to swarm your replies and crush dissent. So the question is who to choose.

You could, for example, just disable these implicit downvotes for anyone you do not 'follow', or anyone you have not 'liked' frequently. You could have explicit opt-in, where you whitelist specific accounts to enable feedback. You could borrow from earlier schemes for soft-voting or weighting of votes, like Advogato's: votes are weighted by the social graph, and the more disconnected someone is from you, the less their anonymous downvote counts (falling off rapidly with distance).
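A toy version of that distance-weighted scheme, assuming an undirected follow graph and an exponential falloff; both the graph representation and the decay constant are illustrative choices for this sketch, not anything an existing platform actually implements.

```python
from collections import deque


def graph_distance(follows, src, dst, max_hops=6):
    """Breadth-first search over the follow graph; returns hop count or None if unreachable."""
    if src == dst:
        return 0
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nbr in follows.get(node, ()):
            if nbr == dst:
                return d + 1
            if nbr not in seen and d + 1 < max_hops:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return None


def downvote_weight(follows, author, voter, decay=0.5):
    """Weight falls off rapidly with social distance; strangers count for almost nothing."""
    d = graph_distance(follows, author, voter)
    return 0.0 if d is None else decay ** d
```

With decay = 0.5, a downvote from someone you follow directly counts half as much as a self-adjacent vote, a friend-of-a-friend a quarter, and anyone outside your extended network effectively zero, which is one way to keep the signal coming from people whose silence-or-glare would matter at the dinner party.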

My first thought for LW was “post author plus anyone in the comments plus anyone over 1k karma” as the default.

It seems to me that LessWrong already has a downvote button, and that downvote button is effectively used to drive out content the community doesn't want to see, in the way that's described.

So far I have found the LW voting behavior instructive and reasonable. It seems like LWers do vote on your epistemology rather than on the content of your post (as happens on Reddit). It's very cool.

I don't think the post was about LessWrong specifically (at all); think Twitter or Facebook or random blog comments.

Here on this site, yes both downvotes and the absence of upvotes are strong mostly-legible signals.

The post is written by a person from the team that builds LW.com and is tagged "site meta".

People only receive feedback from people that are engaged enough to give it. Unsurprisingly, a mouse click is typically a very low-effort expression of caring. It isn't quality feedback.

Internet comment voting is a Skinner box, and giving people clicky buttons that literally dispense dopamine isn't going to do anything but turn them into button-clicking addicts. Showing them click counters just makes that worse.

If you are going to reward behaviour to encourage it, then ignore negative feedback entirely. Give people a limited number of medals that they can award to a few comments a day. Force people to think about it and slow down. If you can't be anything but positive (because you don't have the option of anything else), then you're going to be forced to make a positive act by default. You will ignore the low-value and negative comments because you're looking for positive ones to reward.

If you must have a 'negative' button, then that button could be a personal block button. You can tally blocks on a given comment and, if it hits a threshold, move it to the bottom and collapse it (see the sketch below). A huge problem with downvotes is that, regardless of what they're said to be for, they always turn into an agree/disagree binary rather than a metric of quality.
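Sketching the block-tally idea; the threshold values and the rendering states are made up for illustration and would need tuning in practice.

```python
# Illustrative thresholds; real values would need tuning against actual usage.
COLLAPSE_THRESHOLD = 5    # blocks before a comment is collapsed
BOTTOM_THRESHOLD = 10     # blocks before it is also sorted to the bottom


def display_state(block_count):
    """Map a comment's block tally to how it should be rendered."""
    if block_count >= BOTTOM_THRESHOLD:
        return "collapsed_at_bottom"
    if block_count >= COLLAPSE_THRESHOLD:
        return "collapsed"
    return "normal"
```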

Whatever you reward is what you end up getting more of. Reward design is far from trivial, and people routinely hand out rewards that make the behaviour they're trying to manage even worse. That's before we get to dark patterns like you see on social media that exploit discord and the lack of IRL feedback to amplify engagement. Gossipy and angry people welded to their phones are a common sight these days because that's what makes social media companies money. It isn't of net benefit to the users thereof, but it's digital crack so the majority of people don't care even if they're aware of it.

People only receive feedback from people that are engaged enough to give it.

On the Internet, that's generally true. But it's not so true IRL, face-to-face. And the point of the post is that we could engineer feedback-by-default, like the reactions people mostly can't help having when they're visible (or audible) in small groups.

I don't think that's so easy. First, how are you going to capture involuntary responses, and second, how are you going to get people to sign up for that? Lots of people have had to do Zoom meetings thanks to the plague, and lots of them have done as I have on occasion: simply turned the video off, because I don't want to be giving out involuntary data.

If there's any default on the internet (and anyone with backend statistics can back this up), it is indifference. The vast majority of people never write anything, never participate; they just consume. There's nothing wrong with that, but I think there could be a trap where we assume that the system must be re-engineered to accommodate a small number of super-engaged individuals. I'm not against the experiment, but I do think it is more complicated than it first appears.

As I've stated in examples involving Facebook, discord is often a feature and not a bug. It could be the case here that commentary that is purported to be a problem is actually something that people are seeking (whether or not they're aware of that). Human social dynamics seem straightforward but they aren't, and that can be a trap when it comes to designing systems to accommodate them.

There's also the issue that the internet is basically a written medium. The limitations aren't necessarily a bad thing here, people have been writing for thousands of years and doing just fine. I would posit that the issue is speed and immediacy - you basically have to respond, right here, right now (and then have it preserved for all eternity whilst every prick out there combs over it for the slightest angle to criticise). That doesn't leave a lot of room for consideration. Perhaps there's something to be said for creating some social conventions around communication in a forum rather than trying to engineer it programmatically. If you have accepted rules for policing yourself and others then transgressions can be managed socially. If it is socially acceptable to say "Knock your bullshit off" then someone will and 9 times out of 10 that will be enough to fix the problem.

Sorry I wasn't clearer – "engineer" was intended to encompass things like social conventions, not just software.

Interesting - I had previously been thinking about the problems that arise from there being so few approving looks on the Internet (upvotes, "likes" etc. are a step in that direction, but still not the same). It hadn't occurred to me to consider the reverse as well.