I operate by Crocker's rules. All LLM output is explicitly designated as such. I have made no self-hiding agreements.
That doesn’t mean you are better off than a Khan. Even if you don’t care about status and the ability to boss people around, or the other ways in which it is ‘good to be the king,’ and we focus only on material wealth, you are still not better off in the most important respect.
Also, in terms of reproductive success the Khan certainly had more going for him (from Marco Polo's "The Travels of Marco Polo"):
You must know that there is a tribe of Tartars called Kungurat, who are noted for their beauty. The great Khan sends his commissioners to the province to select four or five hundred, or whatever number may be ordered, of the most beautiful young women, according to the scale of beauty enjoined upon them.... The commissioners on arriving assemble all the girls of the province, in the presence of appraisers appointed for the purpose. These carefully survey the points of each girl in succession, as for example her hair, her complexion, eyebrows, mouth, lips, and the proportion of all her limbs... And whatever standard the great Khan may have fixed for those that are to be brought to him, ... the commissioners select the required number from those who have attained that standard, and bring them to him. And when they reach his presence he has them appraised anew by other parties, and has a selection made of thirty or forty of those, who then get the highest valuation. Now every year a hundred of the most beautiful maidens of this tribe are sent to the great Khan, who commits them to the charge of certain elderly ladies dwelling in his palace. And these old ladies make the girls sleep with them, in order to ascertain if they have sweet breath and do not snore, and are sound in all their limbs. Then such of them as are of approved beauty, and are good and sound in all respects, are appointed to attend on the emperor by turns. Thus six of these damsels take their turn for three days and nights, and wait on him when he is in his chamber and when he is in his bed, to serve him in any way, and to be entirely at his orders. At the end of the three days and nights they are relieved by another six. And so throughout the year, there are reliefs of maidens by six and six, changing every three days and nights.
Marco Polo goes on to describe how the Khan had twenty sons by his main wives.
Hi, does anyone from the US want to donation-swap with me to a German tax-deductible organization? I want to donate $2410 to the Berkeley Genomics Project via Manifund.
I found Yarrow Bouchard's quick take on the EA Forum regarding LessWrong's performance in the COVID-19 pandemic quite good.
I don't trust her to do such an analysis in an unbiased way[[1]], but the quick take was full of empirical investigation that made me change my mind about how well LessWrong in particular did.
There's much more historiography to be done here (who believed what and when, what the long-term effects of COVID-19 are, which interventions did what), but this seems like the state of the art on "how well did LessWrong actually perform in the early pandemic". Shout-out to this review by @DirectedEvolution.
I wish some historians would sit down and write a history of the COVID-19 pandemic, since it affected ~2× the number of person-years that the Soviet Union did:
Soviet Union (1922-1991): ~220M average population × 69 years ≈ 15 billion person-years
COVID-19 (2020-2023): ~8B global population × 4 years ≈ 32 billion person-years
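A quick sanity check of that arithmetic in Python (the population figures are the rough averages assumed above, not precise demographic data):

```python
# Back-of-the-envelope person-year comparison, using the rough averages above.
soviet_person_years = 220e6 * 69   # ~220M average population, 1922-1991
covid_person_years = 8e9 * 4       # ~8B global population, 2020-2023

print(f"Soviet Union: {soviet_person_years / 1e9:.1f} billion person-years")
print(f"COVID-19:     {covid_person_years / 1e9:.1f} billion person-years")
print(f"Ratio: {covid_person_years / soviet_person_years:.1f}x")
```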
She hates LessWrong for somewhat vibesy reasons. ↩︎
It's too bad the BALROG benchmark isn't being updated with the newest models. NetHack is really hard, gives a floating-point score, and is text-based, so if a model is vision-impaired (like the Claudes), there's less contamination from "the model just can't see where it is".
Will reveal 2030-01-01.
Hash function used: SHA-256
303de030331f8e546d015ee69ab9fa91e6339b0560c51ab978f1ef6d8b6906bc
8b21114d4e46bf6871a1e4e9812c53e81a946f04b650e94615d6132855e247e8
To be revealed: 2024-12-31
Revealed content (@Zach Stein-Perlman didn't shame me into revealing this):
Manifold dating will fail:
Most men want to date cis women, and there are too few cis women on Manifold. Additionally, the people participating are low-status nerds, and the scheme is not 100x better than swipe-matchmaking (which is the factor I'd put on how much better it needs to be to sway good-looking cis women to participate).
Also, having 𝒪(n²) markets for matchmaking requires too many participants, and these markets can't get deep & liquid; traders get annoyed by repetitive questions fast. Sure, you can just bet among your friend circle, but then the value-add is small: one-gender-heavy friend circles can't solve the scarcity problem that swipe-based online dating attempts to solve. So you either get way too many markets that nobody bets on, or sparse markets that are limited by social connections.
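For anyone who wants to check the commitment, here is a minimal verification sketch in Python. It assumes the committed plaintext is the revealed text above saved byte-for-byte to a file; the filename `revealed.txt` is hypothetical, and the exact whitespace and line breaks of the original plaintext have to match for the hash to verify.

```python
import hashlib

# Hashes from the commitment above.
COMMITTED_HASHES = {
    "303de030331f8e546d015ee69ab9fa91e6339b0560c51ab978f1ef6d8b6906bc",
    "8b21114d4e46bf6871a1e4e9812c53e81a946f04b650e94615d6132855e247e8",
}

def sha256_of_file(path: str) -> str:
    """Return the hex SHA-256 digest of the file at `path`."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# `revealed.txt` is a hypothetical file containing the committed plaintext,
# byte-for-byte (whitespace and line breaks must match the original exactly).
digest = sha256_of_file("revealed.txt")
print(digest, "matches" if digest in COMMITTED_HASHES else "does not match")
```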
Fortunately, I know of TAPs :-) (I don't feel much apocalypse panic, so I don't need this post.)
I was hoping there'd be some more teaching from on high about this agent-foundations problem that's been bugging me for so long, but I guess I'll have to think for myself. Fine.
I got Claude to read this text and explain the proposed solution to me[[1]], but it doesn't actually sound like a clean technical solution to issues regarding self-prediction. Did Claude misexplain, or is this an idiosyncratic mental technique & not a technical solution to that agent-foundations problem?
Cf. Steam (Abram Demski, 2022), Proper scoring rules don’t guarantee predicting fixed points (Caspar Oesterheld/Johannes Treutlein/Rubi J. Hudson, 2022) and its follow-up paper, Fixed-Point Solutions to the Regress Problem in Normative Uncertainty (Philip Trammell, 2018), and active inference, which simply bundles the prediction and the utility goal together (I find this ugly; I didn't read these two comments before writing this one, so the distaste for active inference was developed independently).
I guess this was also discussed in Embedded Agency (Abram Demski/Scott Garrabrant, 2020) under the terms "action counterfactuals" and "observation counterfactuals"?
Your brain has a system that generates things that feel like predictions but actually function as action plans/motor output. These pseudo-predictions are a muddled type in the brain's type system.
You can directly edit them without lying to yourself because they're not epistemic beliefs — they're controllers. Looking at the place in your mind where your action plan is stored and loading a new image there feels like predicting/expecting, but treating it as a plan you're altering (not a belief you're adopting) lets you bypass the self-prediction problem entirely.
So: "I will stay sane" isn't an epistemic prediction that would create a self-fulfilling prophecy loop or violate the belief-action firewall. It's writing a different script into the pseudo-model that connects to motor output — recognizing that the thing-that-feels-like-a-prediction is actually the controller, and you get to edit controllers.
I didn't want to read a bunch of unrelated text from Yudkowsky about a problem I don't really have. ↩︎
In which ways?