Or, A Nicer Way to Commodify Attention
 

Observation 1: I read/listen to a lot of public intellectuals--podcasters, authors, bloggers, and so on--and I frequently find myself thinking:

Ugh, I hate it when this guy spouts ignorant nonsense about [some domain]. And he's been doing it for years. I wish he would just read [relevant book]. Then he’d at least be aware of the strongest counterarguments.

Observation 2: Sometimes podcasters/bloggers will poll their followers with questions like, “Hey guys, who should I interview next?” or “What book should I review next?”

 

Having observed these things, I wonder whether both could be improved by monetary transactions.

As a specific example, it would be cool if David Deutsch let the highest bidder choose a book for him to read and review. If we're lucky (or spendthrift), we'd get to see him finally give a considered response to the particular claims in Human Compatible or Superintelligence or Life 3.0.

Less specifically, there are a lot of cognoscenti who command substantial influence while holding themselves to disappointingly low epistemic standards. For example, Sean Carroll is a science communicator who dabbles in a little bit of everything on the side, and although I consider his epistemic standards to be above average, I can tell he has not read the best of Slate Star Codex. I think if he did read a post such as "Asymmetrical Weapons", there’s a decent chance he would feel compelled to raise the bar for himself.

I have some close friends who sometimes spew (what I perceive to be) ignorant drivel. For some of them, I might be willing to pay a surprisingly high price to see them write a review of a book that cogently challenges their stupid priors. I would pay the highest price for friends who I already know can update on sound arguments; I would pay a lower price to find out whether a friend has that ability; and for the attention of the hopelessly obstinate I would not pay anything.

Why This Wouldn’t Work

  • It's easy to underestimate how pervasive and sticky signaling and tribal motivations are. The gains from trade available here might not be enough to overcome the pressure to protect narratives and maintain appearances.
  • There might be perverse incentives. Maybe the cognoscenti want to charge more, so they reduce supply (that is, they read less). Maybe they spout more ignorant nonsense in order to increase the bids.

Those are just off the top of my head. If you can think of more or better reasons, please share.

7 comments

I do this for movies, and formerly did it for books and TV shows, but people mainly try to just pay me to watch anime.

I predict that this will not become popular, mostly because of the ick factor most people have around monetary transactions between individuals.

However, the inverse strategy seems just as interesting (and more likely to work) to me.

That's a pretty good link, thanks. And yeah, the inverse had occurred to me, but I forgot to mention it except kind of in the title.

I'd guess that reciprocal exchanges might work better for friends:
I'll read any m books you pick, so long as you read the n books I pick.

Less likely to trigger the financial ick factor, and it's always possible that you'll gain from reading the books they recommend.

Perhaps this could scale to public intellectuals where there's either a feeling of trust or some verification mechanism (e.g. if the intellectual wants more people to read [some neglected X], and would willingly trade their time reading Y for a world where X were more widely appreciated).

Whether or not money is involved, I'm sceptical of the likely results for public intellectuals, or in general for people strongly attached to some viewpoint. The usual result seems to be a failure to engage with the relevant points. (Perhaps not attacking head-on is the best approach: e.g. the asymmetrical weapons post might be a good place to start for Deutsch/Pinker.)

Specifically, I'm thinking of David Deutsch speaking about AGI risk with Sam Harris: he just ends up telling a story where things go ok (or no worse than with humans), and the implicit argument is something like "I can imagine things going ok, and people have been incorrectly worried about things before, so this will probably be fine too". Certainly Sam's not the greatest technical advocate on the AGI risk side, but "I can imagine things going ok..." is a pretty general strategy.

The same goes for Steven Pinker, who spends nearly two hours with Stuart Russell on the FLI podcast, and seems to avoid actually thinking in favour of simply repeating the things he already believes. There's quite a bit of [I can imagine things going ok...], [People have been wrong about downsides in the past...], and [here's an argument against your trivial example], but no engagement with the more general points behind the trivial example.

Steven Pinker has more than enough intelligence to engage properly and re-think things, but he just pattern-matches any AI risk argument to [some scary argument that the future will be worse] and short-circuits to enlightenment-now cached thoughts. (to be fair to Steven, I imagine doing a book tour will tend to set related cached thoughts in stone, so this is a particularly hard case... but you'd hope someone who focuses on the way the brain works would realise this danger and adjust)

When you're up against this kind of pattern-matching, I don't think even the ideal book is likely to do much good. If two hours with Stuart Russell doesn't work, it's hard to see what would.

I think the advantage of reading a book over having a conversation is that you're less concerned with saving face or "winning", so can focus more on the actual argument.

That's a good point, though I do still think you need the right motivation. Where you're convinced you're right, it's very easy to skim past passages that are 'obviously' incorrect, and fail to question assumptions.
(More generally, I do wonder what a good heuristic for this would be: clearly it's not practical to constantly go back to first principles on everything, and I'm not sure how to distinguish [this person is applying a poor heuristic] from [this person is applying a good heuristic to very different initial beliefs].)

Perhaps the best would be a combination: a conversation which hopefully leaves you with the thought that you might be wrong, followed by the book to allow you to go into things on your own time without so much worry over losing face or winning.

Another point on the cause-for-optimism side is that being earnestly interested in knowing the truth is a big first step, and I think that description fits everyone mentioned so far.

This is a really interesting post. I wonder how implementable this is; it touches the edges of collective action. Imagine a change.org petition for someone to read something and review it: despite public interest, it misses the incentive structure for the person who actually carries out the task.

Going further, some people are tokenizing the hours of their day and selling them on the blockchain (this is too broad, but imagine a particular action being tokenized, where people can fund it through sheer interest and then someone like David Deutsch could claim it). This does not seem so far-fetched to me.
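
To make that bounty idea a bit more concrete, here is a minimal sketch in Python of how a crowdfunded read-and-review bounty might be structured, leaving the blockchain part out entirely. Everything here (the ReviewBounty class and its pledge/claim/refund methods) is invented for illustration and is not any existing platform's API; in particular, verifying that a claimed review really exists and engages with the book is hand-waved.

```python
# Illustrative sketch of a crowdfunded "read-and-review" bounty.
# All names here (ReviewBounty, pledge, claim, refund) are hypothetical.

from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class ReviewBounty:
    """A pool of pledges that a named reviewer can claim by delivering a review."""
    reviewer: str                                   # e.g. "David Deutsch"
    book: str                                       # e.g. "Human Compatible"
    pledges: Dict[str, float] = field(default_factory=dict)  # backer -> amount
    review_url: Optional[str] = None                # set when the reviewer claims

    def pledge(self, backer: str, amount: float) -> None:
        # Anyone interested can add to the pot.
        self.pledges[backer] = self.pledges.get(backer, 0.0) + amount

    def total(self) -> float:
        return sum(self.pledges.values())

    def claim(self, review_url: str) -> float:
        # The reviewer claims the whole pot by pointing at a published review.
        # (Verifying that the review is real and substantive is the hard part
        # and is hand-waved here.)
        self.review_url = review_url
        return self.total()

    def refund(self) -> Dict[str, float]:
        # If the bounty is never claimed, backers get their pledges back.
        return dict(self.pledges)


if __name__ == "__main__":
    bounty = ReviewBounty(reviewer="David Deutsch", book="Human Compatible")
    bounty.pledge("alice", 50.0)
    bounty.pledge("bob", 120.0)
    print(f"Pot: {bounty.total():.2f}")                   # Pot: 170.00
    payout = bounty.claim("https://example.com/review")   # placeholder URL
    print(f"Paid out {payout:.2f} to {bounty.reviewer}")
```

The interesting design question is the claim step: someone has to judge that the review is genuine engagement rather than a quick dismissal, and decide whether unclaimed pots are refunded or roll over, which connects back to the perverse-incentive worries in the post itself.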