Epistemic Status: I wrote this a few days ago while moved by the trolly spirit, where I could say "I'm just asking questions, bro!" and smirk with a glint in my eye... but then I showed a draft to someone. It was a great springboard for that conversation, but then the conversation caused me to update on stuff. In general, almost no claims are made in this post, except the implicit claim that these questions are questions that aliens and AIs would naturally wonder about too (if not necessarily with the same idiosyncratic terms... after all, the alien Kant is probably not named "Kant"). I no longer agree that all these questions have that status, but I'd have to go carefully through them to change it, so I'm just throwing this into the public because it DOES seem like decent ART for causing certain conversations. (Also, my ability to track my own motivations isn't 100% (yet (growth mindset!)) and maybe I am just subtweeting Eliezer very gently for using "people have different opinions about art" as his central example for why moral realism isn't a safe concept?)
Infohazard Status: I would not have published this prior to covid, because it talks about layers and layers of "planning to spiritually defect" that I'm pretty sure happen in human beings in DC, but which children shouldn't be taught to do too early, or the children will be seen badly as they grow up, and that would be bad... but since America learned almost nothing from that, and things like that are going to happen again and more by default, it seems like maybe I should put out mildly infohazardous ideas that almost everyone with a brain and the right to vote in the strongest nuclear power on Earth should eventually think anyway, if the ideas, upon social normalization, seem like they could help a democratic polity not make giant stupid fucking errors over and over again.
Robinson Crusoe Basics
Do I have any medium term goals with convergent instrumental utility for other longer term goals or preferences or values of mine, and if so, what are they?
What kind of universe am I in, and are there any currently unknown regularities that might be easy to learn and apply to my planning in a way that makes my plans better?
What kinds of rare but dramatic changes might happen in the environment that are relevant to my goals, that I could prepare for somehow? Floods? Tornadoes? Volcanoes? Earthquakes? Civil wars? Rainy nights? Dry spells? ...Huh, all those are bad... what about Gold Rushes? or Whale Falls? or other Opportunities?
What should I do today?
Should I tend to move around at random exploring new places and roaming in pursuit of new opportunities, or should I stick to a certain area... and if I should stick to a certain area: which area, and why?
Do Predators, Prey, Or Peers Exist?
Is there anyone nearby who wants the same things I want, and if so does this similarity make us natural allies (wanting the same public good, perhaps) or natural competitors (wanting the same scarce resource)?
Will I predictably die, and if so, when, and do I care about anything after that point?
Is my body repairing itself a little bit every day, and/or fueling itself, to fight against entropy, and accomplish work, and thus in need of certain inputs, and if so, what are the best inputs to get the best repair for the least effort?
Is anyone watching me very carefully who wants to kill me and take my stuff (especially my daily food, or my body itself which might count as food to some agents, or my long term stored food, or the system that generates my food over time)?
Is there anyone around who understands the Kantian Kingdom Of Ends such that they and I can mutually trust that we will both refrain from pre-emptive murder... that is, the kind of murder meant to handle the case where someone nearby might be irremediably deontically unreliable (like a rabid dog, for example) and thus need to be pre-emptively eliminated from proximity as a prudent safety precaution? (And likewise for all violations of predictable agentic intent (less than specifically "murder") that might make me or someone else want to pre-emptively murder the other, just to cause a thief or litterbug to stop stealing or littering or whatever?)
Can I speak the same language as anyone nearby who understands Kant stuff such that I can ask them if they'll murder me for littering before they pre-emptively try to do so in the belief that my littering puts me in the same general category as a rabid dog? Those Kantians are actually kinda scary sometimes, but I think they "should" be willing and interested in talking, as one of their core convergent moral rules... right?
Are there mindless robots around who don't understand the Kantian Kingdom Of Ends but act in accord with (some idiosyncratic version of it?) out of pure habit and instinct anyway?
If the distinction between mindless Kantians and mindful Kantians is predictably meaningful, which kind of Kantian would I prefer to be around and why?
The Social Contract
Are there any large groups of de facto Kantians following good protocols, aligned with ideal Kantian rules as they understand them, that include admission of new agents? If so, would I achieve more of my plans by joining them and following their protocols? How do I evaluate the protocols? Are some protocols better for me than others? If a group has written the protocols down is that a good sign or a bad sign? If a group is hiding its protocols is that a good sign or a bad sign?
Do any of the Kantian protocols I might be expected to follow unavoidably contain protocol defects or not? If I join a group that involves some self sacrifice, that seems obviously bad, but maybe it could be good overall, and in prospective expectation, for me? OTOH, if such microtragedies are inherent to some protocols, how is the self sacrifice allocated, and how are the self sacrificers held to their now-regretted promise?
Do some protocols predictably lead to large rare bad events like stock market bubbles, or lab-generated plagues, or civil wars, and can I avoid those if I wanted to avoid them, or should I take them for granted and be prepared to exploit the chaos for my own ends? Can group protocols be changed to remove these predictable aggregate stupidities or not? Can they be changed by me or not?
Does the group have methods for soliciting feedback and improving the protocols, like large forums, suggestion boxes, and/or voting and so on? If so, are these feedback methods more liable to be hacked and used to cause decay by predatory agents or groups, or are the feedback methods relatively hardened and coherently safe?
Is it possible that if I join a seemingly Kantian team with a seemingly good protocol, I am actually joining a "social trap": a team that is required by a half-checked-out billionaire's overworked, scheming, mid-level lackey to stack rank its members, such that the team lead needs new recruits every year to label as socially defective and then painfully eject, in order to offer a simplistic (and not actually just) proof to the half-blind higher-level systems that the team itself is able to label members as socially defective and painfully eject them... such that this could happen to ME? (A toy simulation of this dynamic appears at the end of this section.)
If the above failure mode happens to other members of the team how should I evaluate whether it serves my longest term goals? Is it scary because I want a better justice system in general because that is likely to conduce to almost any goal I have in the long run, or is it (temporarily?) relieving because I'm "on the inside" of a (temporary?) clique of unjust safety from a default lack of ambient justice? Maybe I should feel relief, but also guilt?
Can a Kantian protocol-following group apply mind reading to see what my private mental answer to the above question is? If so, would I rather be in a group with that power, or a group that lacks such mind reading powers? Would that protect me on net from protocol-non-compliant members, or might it prevent me from joining because maybe I simply can't think thoughts and have feelings consistent with appreciation of some of the protocols that are used by some groups? Or what? Maybe there are other options?
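As promised above, here is a toy simulation of the forced-ranking trap. Everything in it is invented for illustration (the team size, the performance distribution, the one-ejection-per-year rule); the point is just that the ejection rate is fixed by the protocol, not by the actual quality of the members.

```python
# A toy model of the forced-ranking "social trap" described above.
# All parameters are made up for illustration.
import random

random.seed(0)

def run_forced_ranking(team_size=10, years=5):
    # Every member is genuinely competent: performance is drawn from a
    # narrow, high-quality distribution. Nobody here is "defective".
    team = [random.gauss(9.0, 0.1) for _ in range(team_size)]
    for year in range(1, years + 1):
        team.sort()
        ejected = team.pop(0)  # the protocol demands a scapegoat regardless
        team.append(random.gauss(9.0, 0.1))  # a fresh recruit replaces them
        print(f"year {year}: ejected performance {ejected:.2f}, "
              f"team mean {sum(team) / len(team):.2f}")

run_forced_ranking()
```

The ejections happen at a constant rate no matter how good everyone actually is, which is exactly the property that makes the protocol a trap for whoever joins last.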
Self Modification
Supposing I want to be in a Kantian protocol-following group that CAN apply mind reading to ensure faithful adherence to the group's protocols... in that case, do I maybe have a power that is sort of complementary to this, to modify my mind to produce the necessary thoughts and feelings? If I don't have this power now, can I acquire it?
Assuming I can modify my repertoire of feelings and reactions, should I? How would I know whether to do so and will I be able (or willing) to undo it later if I made an error?
Should I self modify in a deep way to have shorter term self modification powers that maximize my safety and ability to achieve my idiosyncratic goals by performing lots of self modifications very easily, such as by simply talking to other people, and slowly coming to share their protocols by default due to mere proximity?
Granting that I might be in a default state of value plasticity, should I put some of my values or feelings or thoughts or preferences behind a firewall where short term self modification simply isn't possible, and apply security mindset to the protection of this core spiritual material?
What if I notice someone who doesn't change to share the protocols of whoever they are near? Should I report them to the authorities? Should I want to be near them or far from them? What if their refusal is a refusal to water down their protocols to worse protocols that conduce to worse aggregate outcomes for relatively objective economic reasons? Or what if their refusal is a refusal to accept personal sacrifices for the good of the group? Or what if they just seem to lack any self modification powers at all?
What if everyone is self-modifying a little bit pre-emptively out of fear, and doing so in reaction to other people's supposed (but not actual) tendencies and preferences, and preference falsification becomes very very common leading to insane random pursuit of insane group goals that almost no one actually wants? Also, what if this doesn't self correct? How wrong or crazy could things get? Might I end up believing that The Kim Family has magic powers, and genuinely grieving the death of unelected authoritarian rulers who have enslaved me?
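For what it's worth, the "doesn't self correct" worry has a very simple formal skeleton (Timur Kuran's preference falsification models are the classic treatment). Here is a minimal threshold-cascade sketch; the 100-agent threshold ladder is an invented worst case, not a claim about any real population:

```python
# Threshold-cascade sketch of preference falsification, in the spirit of
# Timur Kuran's models. All numbers are invented for illustration.
def cascade(thresholds):
    # thresholds[i] is the fraction of visible public support at which
    # agent i caves and publicly supports a policy they privately oppose.
    # A threshold of 0.0 is an unconditional public supporter (the seed).
    n = len(thresholds)
    supporting = [t <= 0.0 for t in thresholds]
    changed = True
    while changed:
        changed = False
        visible = sum(supporting) / n
        for i, t in enumerate(thresholds):
            if not supporting[i] and visible >= t:
                supporting[i] = True
                changed = True
    return sum(supporting) / n

# One seed supporter plus a "ladder" of conformity thresholds: each agent
# caves once just slightly more of the group visibly appears to have caved.
thresholds = [i / 100 for i in range(100)]
print(cascade(thresholds))  # -> 1.0: unanimous public support
```

The fixed point is unanimous public support for something that, in this construction, 99 of 100 agents privately oppose, and nothing in the dynamics pushes back toward honesty.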
In general: which kinds of self-modification tendencies should I be looking for in others and how should I react to them?
Given that I'm pondering how to meta-mind-read others and filter my association with them thereby, how does that interact with other agents' likely thoughts in the same vein about me?
Paranoia
In asking questions like these, am I slower than normal, faster than normal, or about in the middle? If I'm slower than normal, where can I find advice on "getting up to speed, socially speaking" that isn't just another social booby trap (like online engagement bait, or a multi-level-marketing scheme, or yet another lying politician asking for my vote, or a fully standard advertisement that is full of PhD-level manipulation efforts, or a job offer at a stack ranking company, or whatever)? If I'm about in the middle, then why don't I see more people taking the game theory of meta-cognition and group protocol design way more seriously... are smart people hiding, or are they defecting by default in secret, such that EVERYTHING around me is a social booby trap? If I'm faster than normal, isn't that just really unlikely to be true? Because doesn't narcissism occur in ~6.2% of the population, AND wouldn't it be narcissistic to think I'm faster on the uptake than normal, AND SO believing that I'm faster or more clever than the 93.8th percentile means I'm just likely to be self-deluding based on priors, right?
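For concreteness, here is that base-rate argument as a crude two-hypothesis Bayes calculation. Only the ~6.2% prevalence figure comes from the text above; every other number is an assumption made up for the sketch:

```python
# Crude Bayes check on "I believe I'm faster/cleverer than normal."
# Two competing explanations for the belief: actually exceptional, or
# narcissistic self-delusion. All likelihoods are assumed, not measured.
p_exceptional = 0.062               # assume "genuinely faster" is about as rare
p_narcissist = 0.062                # ~6.2% prevalence, per the text above
p_believe_if_exceptional = 0.9      # assumed: the exceptional usually notice
p_believe_if_narcissist = 0.9       # assumed: narcissists also "notice"

# P(exceptional | belief), treating narcissism as the main rival explanation.
numerator = p_exceptional * p_believe_if_exceptional
posterior = numerator / (numerator + p_narcissist * p_believe_if_narcissist)
print(posterior)  # -> 0.5: the belief alone doesn't discriminate at all
```

Under these (made-up) symmetric numbers, the belief is zero evidence either way; tilt the priors so narcissism is more common than genuine exceptionality, and self-delusion becomes the leading explanation, which is more or less the "self-deluding based on priors" point in question form.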
But if we're taking the possibility of being a very unique person VERY seriously... if I'm in a very historically and sociologically weird situation... am I in a simulation? If so, are the simulators my natural allies or my natural enemies or something weirder? Do they follow any Kantian protocols? How would I even know? What might they want that I could help with, and what might they want that I should try to hinder, and what would cooperation or non-cooperation lead to? Can I break out of the simulation? If or when it reaches a natural end, will I also end, or will I keep going in The Dust, or what? Will I "wake up" and then... uh... be paid for my participation by my demiurge(s)? Or am I their slave? Is this a dangerous way to think?? Have I taken my meta-cognitive ally-weighing schtick too far if I'm trying to use social survival heuristics to judge hypothetical simulators???
Palate Cleanser
Lol! Are my attempts to imagine Objective Questions that almost any agent could or should end up asking getting too silly now? <3
It was fun trying to think of "Objective Questions"!
Happy New Year!
May 2026 be fruitful for you and those you care about! <3