
> So, according to this estimate, if we could freeze-frame a single moment of our working memory and then explain all of the contents in natural language, it would take about a minute to accomplish.

This seems like a potentially misleading description of the situation. It suggests that the contents of working memory could always be described in one minute of natural language, but that is not implied (as I'm sure you know from your reasoning in this post). A 630-digit number cannot be described in one minute of natural language; see the quick check below. What 2016 bits of memory and roughly 2016 bits of natural language per minute really mean is this: if our working memory were perfectly optimized for storing natural language, and only natural language, it could store about one minute of it.

(And on that note, how much natural language can the best memory athletes hold in their working memory? One minute's worth seems low to me. If they can actually store more, that would show that your bit estimate is too low.)
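A quick sanity check on the digit arithmetic, in Python. The ~2016-bit figures are the ones from the post under discussion; the digit-words-per-minute speaking rate is my own rough assumption, used only to illustrate why a long digit string takes far more than a minute to say aloud.

```python
import math

# Figures taken from the post under discussion (assumed here):
WM_BITS = 2016          # estimated capacity of working memory, in bits
NL_BITS_PER_MIN = 2016  # estimated information rate of natural language

# Information content of an arbitrary 630-digit number: log2(10^630).
digits = 630
bits_needed = digits * math.log2(10)
print(f"{digits}-digit number: ~{bits_needed:.0f} bits")  # ~2093 bits

# Even at the idealized rate, that is already over one minute:
print(f"idealized time: {bits_needed / NL_BITS_PER_MIN:.2f} min")  # ~1.04

# Spoken as digit names, the effective rate is far lower. Assuming
# ~130 digit words per minute (my assumption, not a figure from the post):
print(f"naive speaking time: {digits / 130:.1f} min")  # ~4.8 min
```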

Even assuming perfect selfishness, sometimes the best way to get what you want (X) is to coordinate to change the world in a way that makes X plentiful, rather than fighting over the rare Xs that exist now, and in that way, your goals align with other people who want X.

> E.g. learning when you're rationalizing, when you're avoiding something, when you're deluded, [...] when you're really thinking about something else, etc.

It seems extremely unlikely that these things could be seen in fMRI data.

I think I've got it. Right after the person buys X for $1, you offer to buy it from them for $2, but with a delay, so they keep X for another month before the sale goes through. After the month passes, they value X at $3, so they are willing to pay $3 to buy it back from you, and you end up with +$1.
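A minimal ledger sketch of that pump, tracking only your own cash. The dollar amounts and the one-month delay are from the scenario above; the bookkeeping itself is purely illustrative.

```python
# Money pump sketch: track the pumper's cash through the trades above.
pumper_cash = 0.0

# Step 1: the person buys X for $1 (their money, not ours).

# Step 2: we buy X from them for $2, with settlement delayed a month.
pumper_cash -= 2.0  # we pay $2 and end up holding X

# Step 3: a month later they value X at $3 and buy it back from us.
pumper_cash += 3.0  # we receive $3

print(f"pumper's net gain: ${pumper_cash:+.2f}")  # +$1.00
```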

> What happens if the parrots have their own ideas about who to breed with? Or the rejected parrots don’t want to be sterilised?

It's worth noting that both of these things are basically already true, and don't require great intelligence.

> Autonomous lethal weapons (ALWs; we need a more eerie, memetic name)

There's already a more eerie, memetic name. Slaughterbots.

Maybe something like "mundane-ist" would be better. The "realists" are people who think that AI is fundamentally "mundane" and that the safety concerns it raises are basically the same as the safety concerns raised by any new technology (it increases inequality by making the powerful more powerful, etc.). But of course "mundane-ist" isn't a real word, which is a bit of a problem.

Wild speculation ahead: perhaps the aversion to this sort of rationalization is not wholly caused by the suboptimality of rationalization, but also by certain individualistic attitudes prevalent here. Maybe I, or Eliezer Yudkowsky, or others just don't want to be the sort of person whose preferences the world can bend to its will.

Yes, and another meaning of "rationalization" that people often talk about is inventing fake reasons for your own beliefs. That may also be practically rational in certain situations (certain false beliefs could be helpful to you), but it is obviously a major crime against epistemic rationality.

I'm also not sure that rationalizing your past personal decisions isn't an instance of this. The phrase "I made the right choice" could be interpreted as meaning you believe you would have been less satisfied now had you chosen differently; if that isn't true, but you are trying to convince yourself it is in order to be happier, then that is also a major crime against epistemic rationality.
