Illustration by Michael Haddad for Wired. It was originally commissioned for an article about biohackers, but I find that it captures a spirit of agency and self-improvement that aligns well with some rationalist values.
I may attempt a more comprehensive analysis to suggest some tests later (though I'm not sure it would be very successful - my rationality skills feel nascent at best), but from a superficial read, it seems to me that points A13 and B10 are essentially the same - both deal with stupidity becoming more widespread as a matter of consumerist/capitalist politics and market forces. One could object that B10 deals with genuinely smart individuals who pretend to be stupid to reap the benefits, while A13 deals with genuinely stupid individuals - which in turn can be countered by the classic Ben Kenobi argument. But at the very least, if A13 describes the actions of level-0 actors, B10 would then be the level-1 response to them.
Of course, you have to modulate that by the possibility that allowing people to live off their UBI or blow it on frivolous spending will cancel out those good effects. That question is beyond my pay grade, and I suspect nobody really knows.
This makes me think of something. Can't we look at what people who experienced windfall gains spent their newfound money on? Lottery winners seem like an easy enough sample to obtain, although not an unproblematic one - it takes a certain kind of person to participate in a lottery in the first place. But if we can break that sample down by demographic factors - cultural background, education level, or something like that - maybe a pattern will emerge that tells us which of those people, and how many, will go "fish", use the money to propel themselves out of the poverty trap, or blow it on frivolous spending.
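For what it's worth, the breakdown I have in mind could be sketched like this - a minimal Python example over an invented sample (the records, field names, and outcome categories below are hypothetical placeholders, not real lottery data):

```python
from collections import Counter, defaultdict

# Hypothetical windfall-recipient records -- fields and values are
# invented for illustration, not drawn from any real survey.
winners = [
    {"education": "secondary", "outcome": "frivolous"},
    {"education": "secondary", "outcome": "quit_work"},
    {"education": "tertiary",  "outcome": "invested"},
    {"education": "tertiary",  "outcome": "frivolous"},
    {"education": "tertiary",  "outcome": "invested"},
    {"education": "secondary", "outcome": "frivolous"},
]

def outcome_shares(records, factor):
    """Break the sample down by one demographic factor and return,
    for each group, the share of each spending outcome."""
    groups = defaultdict(Counter)
    for r in records:
        groups[r[factor]][r["outcome"]] += 1
    return {
        group: {o: n / sum(counts.values()) for o, n in counts.items()}
        for group, counts in groups.items()
    }

shares = outcome_shares(winners, "education")
```

The same `outcome_shares` call could then be repeated with other factors ("culture", "age bracket", etc.) to see which one, if any, actually separates the "fish" people from the frivolous spenders.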
This is interesting. I wonder how this would apply to the literature on investing mistakes (esp. amateur investors like people who get burned on Robinhood).
If your idea is correct, then IMO people must be thinking about investing money using priors, concepts and feelings drawn from their own spending experience. Maybe some patterns of systematic bias or mistakes - attempts to apply the "feeling for what N dollars buys and what spending it feels like" to investing - could be teased out.
Eventually - sure. But for that eventuality to take place, the "electrical shock tyranny" would have to be more resilient than any political faction we've known of and persist for thousands of years. I doubt that this would be possible.
Sorry if I wasn't clear enough. My critique refers to your point that scenarios where humans evolve to like a dystopia are not applicable, because if they were, suffering should be a rare occurrence - if I understand you correctly, you're stating that if we could evolve to like dystopias, by this point in time we would have evolved to either avoid or like any source of suffering. My counterpoint is that there is a massive subset of sources of suffering that do not affect evolution in any way, because they are too transient to exert any serious selection pressure.
You could perhaps engineer scenarios where humans will genuinely evolve to like a dystopia
I think this somewhat misrepresents the scale on which evolution happens - it's not one generation, or two, it's hundreds and thousands. And evolution has taken relatively good care of the sources of suffering that are fundamental enough to persist and keep the selection pressure on across that time frame - we're pretty good at not eating things that are toxic, at breeding, at avoiding predators, and so on. The problem is that a significant number of sources of suffering are persistent enough to have a detrimental impact on an individual's life, but too transient to affect selection across generations.
This post reminds me of an insight from one of my uni professors.
Early on at university, I was very frustrated that the skills we were taught did not seem to be immediately applicable to the real world. That frustration was strong enough to snuff out most of the interest I had in studying genuinely (that is, truly understanding and internalizing the concepts taught to us). Still, studying was expensive, dropping out was not an option, and I had to pass exams. So very early on, in what seemed to me a classic instance of Goodhart, I started to game the system: test banks were widely circulated among students, and for the classes with no test banks, there were past exams, which you could go through, trace out some kind of pattern for which topics and kinds of problems the prof puts on exams, and focus only on studying those. I didn't know it was called "Goodhart" back then, but the significance of this was not lost on me - I felt that by pivoting away from learning subjects and towards learning to pass exams in subjects, I was intellectually cheating. Sure, I was not hiding crib sheets in my sleeves or going to the restroom to look something up on my phone, but it was still gaming the system.
Later on, when I got rather friendly with one of my profs, and extremely worn down by pressures from my probability calculus course, I admitted to him that this was what I was doing, that I felt guilty, and didn't feel able to pass any other way and felt like a fake. He said something to the effect of "Do you think we don't know this? Most students study this way, and that's fine. The characteristic of a well-structured exam isn't that it does not allow cheating, it's that it only allows cheating that is intelligent enough that a successful cheater would have been able to pass fairly."
What he said was essentially a refutation of Goodhart's Law by means of a sufficiently high-quality proxy: if gaming the proxy requires the very competence the proxy was meant to measure, optimizing for the proxy does little harm. I think this might be relevant to the case you're dealing with here as well. Your "true" global optimum probably is a proxy, but if it's well chosen, it need not be vulnerable to Goodhart.
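The mechanism can be shown in a toy simulation (entirely made-up numbers, just to illustrate the point, not any real exam model): when the proxy tracks the true objective tightly, selecting on the proxy still selects genuinely good candidates; when the proxy leaves lots of exploitable slack, selection decouples from the true objective.

```python
import random

def true_skill(x):
    # Hypothetical "real" competence of candidate x; peaks at x = 7.
    return -(x - 7) ** 2

def pick_by_proxy(noise, rng):
    # Exam score = true skill + exploitable slack of size `noise`.
    # We select on the proxy, then report the winner's TRUE skill.
    best = max(range(15),
               key=lambda x: true_skill(x) + rng.uniform(-noise, noise))
    return true_skill(best)

rng = random.Random(0)
trials = 500
# Average true skill of the proxy-selected winner, for a tight proxy
# (little room to cheat) vs a loose one (cheating dominates).
mean_tight = sum(pick_by_proxy(1, rng) for _ in range(trials)) / trials
mean_loose = sum(pick_by_proxy(50, rng) for _ in range(trials)) / trials
```

In the professor's terms, `noise` is how much of the exam score can be earned without real understanding; with `noise=1`, only candidates whose true skill is already near the top can win at all.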
I understand that this may be well outside the scope of your writing, but still - any chance you could actually post some epistemic defense decks for Anki? Or are there any good ones already available?
(Apologies if the question is stupid, I'm somewhat new to LW)