I apologize. I think the topic is very large, and inferential distances would best be bridged either by the fortuitous coincidence of us having studied similar things (like two multidisciplinary researchers with similar interests accidentally meeting at a conference), or else I'd have to create a non-trivially structured class full of pre-tests and post-tests and micro-lessons, to get someone from "the hodge-podge of high school math and history and biology and econ and civics and cognitive science and theology and computer science that might be in any random literate person's head... through various claims widely considered true in various fields, up to the active interdisciplinary research area where I know that I am confused as I try to figure out if X or not-X (or variations on X that are better formulated) is actually true". Sprawl of words like this is close to the best I can do with my limited public writing budget :-(

Public Choice Theory is a big field with lots and lots of nooks and crannies and in my surveys so far I have not found a good clean proof that benevolent government is impossible.

If you know of a good clean argument that benevolent government is mathematically impossible, it would alleviate a giant hole in my current knowledge, and help me resolve quite a few planning loops that are currently open. I would appreciate knowing the truth here for really real.

Broadly speaking, I'm pretty sure most governments over the last 10,000 years have been basically net-Evil slave empires, but the question here is sorta like: maybe this is because that's mathematically necessarily how any "government shaped economic arrangement" must be, or maybe it is because of some contingent fact that just happened to hold in general in the past...

...like most people over the last 10,000 years were illiterate savages who didn't know any better, and that might explain the relatively "homogeneously evil" character of historical governments, and the way that government variation seems to be restricted to the small range from "slightly more evil" to "slightly less evil".

Or perhaps the problem is that all of human history has been human history, and there has never been an AI dictator nor AI general nor AI pope nor AI mega celebrity nor AI CEO. Not once. Not ever. And so maybe if that changed then we could "buck the trend line of generalized evil" in the future? A single inhumanly saintlike immortal leader might be all that it takes!

My hope is: despite the empirical truth that governments are evil in general, perhaps this evil has been for contingent reasons (maybe many contingent reasons (like there might be 20 independent causes of a government being non-benevolent, and you have to fix every single one of them to get the benevolent result)).

So long as it is logically possible to get a win condition, I think grit is the right virtue to emphasize in the pursuit of a win condition.

It would just be nice to even have an upper bound on how much optimization pressure would be required to generate a fully benevolent government, and I currently don't even have this :-(

I grant, from my current subjective position, that it could be that it requires infinite optimization pressure... that is to say: it could be that "a benevolent government" is like "a perpetual motion machine"?

Applying grit, as a meta-programming choice applied to my own character structures, I remain forcefully hopeful that "a win condition is possible at all" despite the apparent empirical truth of some broadly Catharist summary of the evils of nearly all governments, and Darwinian evolution, and so on.

The only exceptions I'm quite certain about are the "net goodness" of sub-Dunbar social groupings among animals.

For example, a lion pride keeps a male lion around as a policy, despite the occasional mass killing of babies when a new male takes over. The cost in murdered babies is probably "worth it on net" compared to alternative policies where males are systematically driven out of a pride when they commit crimes, or females don't even congregate into social groups.

Each pride is like a little country, and evolution would probably eliminate prides from the lion behavioral repertoire if it wasn't net useful, so this is a sort of an existence proof of a limited and tiny government that is "clearly imperfect, but probably net good".

((

In that case, of course, the utility function evolution has built these "emergent lion governments" to optimize for is simply "procreation". Maybe that must be the utility function? Maybe you can't add art or happiness or the-self-actualization-of-novel-persons-in-a-vibrant-community to that utility function and still have it work?? If someone proved it for real and got an "only one possible utility function"-result, it would fulfill some quite bleak lower level sorts of Wattsian predictions. And I can't currently rigorously rule out this concern. So... yeah. Hopefully there can be benevolent governments AND these governments will have some budgetary discretion around preserving "politically useless but humanistically nice things"?

))

But in general, from beginnings like this small argument in favor of "lion government being net positive", I think that it might be possible to generate a sort of "inductive proof".

1. "Simple governments can be worth even non-trivial costs (like ~5% of babies murdered on average, in waves of murderous purges (or whatever the net-tolerable taxation process of the government looks like))" and also...

If N, then N+1: "When adding some social complexity to a 'net worth it government' (longer time rollout before deciding?) (more members in larger groups?) (deeper plies of tactical reasoning at each juncture by each agent?) the WORTH-KEEPING-IT-property itself can be reliably preserved, arbitrarily, forever, using only scale-free organizing principles".
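The schema being gestured at here is ordinary mathematical induction over social complexity. Writing $W(n)$ for "a government at social-complexity scale $n$ is net worth keeping" (my own notation, just to make the shape of the hoped-for proof explicit):

```latex
% W(n) := "a government at social-complexity scale n is net worth keeping"
W(1) \qquad \text{(base case, e.g.\ the lion pride)}
\forall n \in \mathbb{N},\ W(n) \rightarrow W(n{+}1) \qquad \text{(scale-free inductive step)}
\therefore\ \forall n \in \mathbb{N},\ W(n)
```

The entire difficulty, of course, is in establishing the inductive step: that the WORTH-KEEPING-IT property survives every kind of added complexity, not just some kinds.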

So I would say that's close to my current best argument for hope.

If we can start with something minimally net positive, and scale it up forever, getting better and better at including more and more concerns in fair ways, then... huzzah!

And that's why grit seems like "not an insane thing to apply" to the pursuit of a win condition where a benevolent government could exist for all of Earth.

I just don't have the details of that proof, nor the anthropological nor ethological nor historical data at hand :-(

The strong contrasting claim would be: maybe there is an upper bound. Maybe small packs of animals (or small groups of humans, or whatever) are the limit for some reason? Maybe there are strong constraints implying definite finitudes that limit the degree to which "things can be systematically Good"?

Maybe singletons can't exist indefinitely. Maybe there will always be civil wars, always be predation, always be fraud, always be abortion, always be infanticide, always be murder, always be misleading advertising, always be cannibalism, always be agents coherently and successfully pursuing unfair allocations outside of safely limited finite games... Maybe there will always be evil, woven into the very structure of governments and social processes, as has been the case since the beginning of human history.

Maybe it is like that because it MUST be like that. Maybe it's like that because of math. Maybe it is like that across the entire Tegmark IV multiverse: maybe "if persons in groups, then net evil prevails"?

I have two sketches for a proof that this might be true, because I think it is responsible and productive to keep sloshing back and forth between the "cognitive extremes" (best and worst planning cases, true and false hypotheses, etc) that are justified by the data and by the ongoing attempt to reconcile the data.

Procedure: Try to prove X, then try to prove not-X, and then maybe spend some time considering Goedel and Turing with respect to X. Eventually some X-related-conclusion will be produced! :-)

I think I'd prefer not to talk too much about the proof sketches for the universal inevitability of evil among men.

I might be wrong about them, but also it might convince some in the audience, and that seems like it could be an infohazard? Maybe? And this response is already too large <3

But if anyone already has a proof of the inevitability of evil government, then I'd really appreciate them letting me know that they have one (possibly in private) because I'm non-trivially likely to find the proof eventually anyway, if such proofs exist to be found, and I promise to pay you at least $1000 for the proof, if proof you have. (Offer only good to the first such person. My budget is also finite.)

I wrote 1843 words in response, but it was a bad essay.

This is a from-scratch second draft focused on linking the specifics of the FDA to the thing I actually care about, which is the platonic form of the Good, and its manifestation in the actual world.

The problem is that I'm basically an Albigensian, or Cathar, or Manichaean, in that I believe that there is a logically coherent thing called Goodness and that it is mostly not physically realized in our world and our world's history.

Most governments are very far from a "Good shape", and one of the ways that they are far from this shape is that they actively resist being put into a Good shape.

The US in 1820 was very unusually good compared to most historically available comparison objects, but that's not saying very much, since most governments, in general, are conspiracies of powerful evil men collaborating to fight with each other marginally less than they otherwise would in the absence of their traditional conflict-minimization procedures, thus forming a localized cartel that runs a regional protection racket.

The FDA is thus a locally insoluble instance of a much much larger problem.

From December 2019 to February 2022 the nearly universal failure of most governments to adequately handle the covid crisis made the "generalized evil-or-incompetent state" of nearly all worldly governments salient to the common person.

In that period, by explaining in detail how the FDA (and NIH and OSHA and CDC and so on) contributed to the catastrophe, there was a teachable moment regarding the general tragedy facing the general world.

The general problem can be explained in several ways, but one way to explain it is that neither Putin nor Hamas is that different from most governments.

They are different in magnitude and direction... they are different from other governments in who specifically they officially treat as an outgroup, and how strong they are. (All inner parties are inner parties, however.)

Since Putin and Hamas clearly would hurt you and me if they could do so profitably, but since they also obviously can't hurt you and me, it is reasonably safe for you and me to talk about "how Putin and Hamas would be overthrown and replaced with non-Bad governance for their respective communities, and how this would be Good".

From a distance, we can see that Putin is preying on the mothers and families and children of Russia, and we can see that Hamas is preying on the mothers and families and children of Palestine.

Basically, my argument is that every government is currently preying upon every group of people they rule, rather than serving those people, on net.

I'm opposed to death, I'm opposed to taxes, and I'm opposed to the FDA because the FDA is a sort of "tax" (regulations are a behavioral tax) that produces "death" (the lack of medical innovation unto a cure for death).

These are all similar and linked to me. They are vast nearly insoluble tragedies that almost no one is even willing to look at clearly and say "I cannot personally solve this right now, but if I could solve it then it would be worth solving."

Not that there aren't solutions! Logically, we haven't ruled out solutions in full generality in public discussions yet!

I'm pretty sure (though not 100%) that "science doesn't know for sure" that "benevolent government" is literally mathematically impossible. So I want to work on that! <3

However... in Palestine they don't talk much in public about how to fix the problem that "Hamas exists in the way that it does" and in Russia they don't talk much in public about how to fix that "Putin exists in the way that he does" and in China they don't talk much in public about how to fix that "the CCP exists in the way that it does", and so on...

The US, luckily, still has a modicum of "free speech" and so I'm allowed to say "All of our presidents are and have been basically evil" and I'm allowed to say "FDA delenda est" and I'm allowed to say "The Constitution legally enshrines legalized slavery for some, and that is bad, and until it changes we in the US should admit that the US is pretty darn evil. Our median voter functionally endorses slavery, and so our median voter is functionally a moral monster, and if we have any moral leaders then they are the kind of moral leader who will serve evil voters IN SPITE of the obvious evils."

I don't usually bring up "that the FDA is evil" very much anymore.

Covid is old news. The common man is forgetting and the zeitgeist has moved on.

Lately I've been falling back to the much broader and simpler idea that the US Constitution should be amended to simply remove the part of the 13th amendment that literally legalizes literal slavery.

This seems like a cleaner thing, that could easily fit within the five word limit.

And perhaps, after decades of legalistic struggle, the US could change this one bad law to finally make slavery fully illegal?

But there are millions of bad laws.

Personally, I think the entire concept of government should be rederived from first principles from scratch and rebooted, as a sort of "backup fallback government" for the entire planet, with AI and blockshit, so that all the old governments would still exist, like the way there are still torture machines in museums of torture, but we just wouldn't use any of the old governments anymore.

There's a logically possible objection from the other direction, saying that government is necessarily evil and there just shouldn't be one. I disagree with this because good institutions are incredibly important to good outcomes, empirically, and also the consent of the governed seems like a valid formula. I'm an archist and not an anarchist.

But I'd aim for a state of affairs where instead of using the old governments, we would use things like a Justice API, and Local Barter Points, and a Council of DACs, and a Polyhive Senate Of Self Defense, and Open Source Parliamentarians (AIs built to represent humans within an Open Source Governance framework like in the backstory of Lady Of Mazes), and other weird new things?

Then at some point I'd expect that if most people on Earth looked at their local violence monopoly and had the thought "hey, I'm just not using this anymore" it would lead to waves, in various places, and due to various crises, of whole regions of Earth upgrading their subscriptions to the new system (maybe taking some oaths of mutual defense and signing up for a few new DACs) and then... we'd have something much much better without the drawbacks of the old stuff.

If such "fallback governance systems" had been designed and built in 2019, then I think covid would have caused such a natural phase transition for many countries, when previous systems had visibly and clearly lost the global mandate of heaven.

And if or when such phase transitions occur, there would still be a question of whether the old system will continue to try to prey on the people voluntarily switching over to a new and better system...

And I think it is clear to me and most of my readers that no such reform plan is within any Overton Window in sight...

...and maybe you therefore don't think THIS could be a realistic way to make the FDA not exist in 2026 or 2028 or 2033 (or any other near term date)... 

...but a cautious first principles reboot of the global order to address the numerous and obvious failures of the old order is currently the best I can come up with on BOTH the (1) realism and (2) goodness axes.

And while possible replacement system(s) for the government are still being designed, the only people I think it would be worth working with on this project are people who can independently notice that the FDA is evil, and independently notice that slavery is bad and also legal in the US (and also hopefully they can do math and have security mindset).

So, I still endorse "FDA delenda est" but I don't think there's a lot of point to beating that dead horse, or talking about the precise logistics of how to move deck chairs on the titanic around such that the FDA could be doing slightly less evil things while the ship sinks.

The ship is sinking. The water is rising. Be Noah. Build new ships. And don't bother adding "an FDA" to your new ship. That part is surplus to requirements.

The video you linked to was really interesting! I got TWO big lessons from it!

First, I learned something about ambiguity of design intent in designed environments from going "from my subjective framing to the objective claims about the scene" (where I misunderstood the prompt and got a large list of wrong things and didn't notice a single change, and later realized that almost all the changes preserved the feature of misdesign that had been salient for me).

Second, I learned a lot from "trying to use the video's frame to create a subjectivity that could represent what really happened in a subjectively coherent trace" by watching over and over while doing gestalt awareness meditation... and failing at the meditation's aims... until I stopped to reverse engineer a "theory of what happened" into a "method of observation".

I shall unpack both of these a bit more.

Initially, the instructions were

...spot the items in the room that are a little "out of place".

On my very first watch through I was proud of having noticed all the things not in parentheses: (1) the desk in the left corner (where the ball disappears, it turns out) is horribly designed and had a bent leg, (2) the ugly ceiling tiles (where two tiles entirely disappear) violate symmetry because one of the four lights has a broken cover with the reflectors showing, (3) the couch is untidy with cloth lying over the edge (what was hanging over changed), (4) the desk is messy (but the mess lost a wine bottle), (5) the coffee table has objects VERY CLOSE to the edge, where they will be very easy to bump off and cause a tragedy if someone bumps them while moving with normal lack of caution (though the cup changed from black to white and the candle changed into a bowl).

As a proud autist, I'm happy to report that these are all flaws. I followed the instructions reasonably and collected a set of things that I could have been instructed to have collected! <3

All the flaws I found persisted from the beginning to the end, and they basically count as "things out of place" in the normal reading of that concept (like to an ergonomic engineer, or a housekeeper, or whatever).

It would be interesting to design another stimulus like this video, and have the room be absolutely tidy, with flawless design and a recent cleaning and proper maintenance of the ceiling, and see if the effect replicates "as much" despite there being no "latent conceptual distraction" of a reasonable set of "room flaws" to find that had been paired with ambiguity about "what counts as a flaw" in the instructions.

On my second and third watches, I knew what changes to look for but I had not yet read the video title to understand that gradual change blindness was the key concept.

So I just queued up the set of things to be "sensitive to motion about" in my subjective attentiveness filters and waited for "the feeling of something in jerky motion, for me to resist doing an eye saccade towards" to hit my gestalt scene sense... and I got a couple of those!

However, the place they triggered was in the frame-to-frame jumps in the dithering of the "greyscale" of boring parts of the scene that weren't even "officially changing"!

Like dithering is, in some sense, a cryptographic hash of a scene, and so my treating "something jumping" as "something worthy of salience" was only detecting jumps in places that were not carefully controlled by the stimuli designers!

Ultimately, the second thing I learned was how to apply a top-down expectation of change into my observing loop.

The thing that finally got me to this place was starting with a list of things that I knew had changed, and then running a rough branch-and-bound algorithm: mousing over along the timeline, and looking at the thumbnail, seeking ANY of the changes showing up as a "jerky pop" as they changed from one thing to the next thing.

This is what proved visually to me no such pops existed. Logically then: the changes were nearly continuous.

The only "pop(!) that looks like a change" that I could then find came from scrubbing very fast, so that the sped-up video finally gave me something that looked like a fade.

What I realized is that to get a subjective sense of what was really happening in real time, I had to buy into the idea that "motion detection will fail me" and I had to make an explicit list of features of "where the scene started" and "what the designers of the scene's shift planned to happen over the course of the shift" and keep both concepts in mind actively during all perceptual acts.

Then, moment to moment, I could flick my attention around to extract, with each saccade of my eyes, a momentary impression like:

  1. "the dithering flickered and the cup on the edge of coffee table is 10% of the way from white to black (which is part of the plan)"...
  2. "the dithering flicked and the exercise ball is 20% disappeared (which is part of the plan)"...
  3. "more flickering and now the candle/bowl on the coffee table is 30% shapeshifted (which is part of the plan)"...
  4. "the portraits on the shelves are 40% moved from low to high (which is part of the plan)"... and so on.

Like here's "the untidy couch object at a fade of ~60% white, ~40% blue" which can be seen and fitted into the expectation of the overall shift that is being consciously perpetrated against your perceptual systems by the stimuli designers:

In the frames before and after it is slightly more or less faded, and your visual motion detectors will never see it POP(!) with a feeling of "it's like a frog jumped, or a cat's tail writhed, or a bird flew by".

It will always just seem like a locally invalid way for things to be, because it isn't something your inner mental physics simulator could ever generate as a thing that physics does... but also over time the video effect will have one plausible thing slowly be more and more ghostly until it is gone. From valid, to invalid but seemingly static, to valid again.

I think it was critical for this effect that the whole video was 53 seconds long. Auditory working memory is often about 4 seconds long, and I bet video working memory is similar.

The critical thing to make these kinds of "change-blindness mechanism proving stimuli" is probably to make the change "feel invisible" by maintaining a simple and reasonable "invariant over time".

You would want no frame-to-frame visual deltas that are both (1) easily perceptible in a side-by-side comparison (due to the low-level logarithmic sensitivity processes that science has known about since ~1860) and (2) closer together than ~5 seconds, which is roughly how long the brain can keep detail about two distinct images (a before and an after); with many intervening images, the visual change buffer overflows before any detector-of-change classifier actually fires and triggers a new "temporary subjective consensus block" in the brain's overall gestalt consensus summary of "the scene".
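A toy calculation (my own sketch, not anything from the video's makers; the frame rate and 8-bit gray levels are assumptions) shows why a ~53-second linear fade defeats motion detectors: the per-frame delta is a tiny fraction of one gray level, while a hard cut would present the whole difference in a single frame.

```python
import numpy as np

FPS = 30                      # assumed frame rate (not stated in the video)
DURATION_S = 53               # length of the video discussed above
N_FRAMES = FPS * DURATION_S   # 1590 frames total

# One changing region, e.g. the cup going from white (255) to black (0).
start_val, end_val = 255.0, 0.0

# Linear cross-fade: the region's gray level at each frame.
levels = np.linspace(start_val, end_val, N_FRAMES)

# The largest frame-to-frame delta a motion detector could latch onto.
per_frame_delta = float(np.abs(np.diff(levels)).max())

# A hard cut would present the entire difference in one frame.
hard_cut_delta = abs(end_val - start_val)

print(round(per_frame_delta, 3))  # ~0.16 gray levels per frame
print(hard_cut_delta)             # 255.0 gray levels at once
```

So each individual frame transition sits three orders of magnitude below the full change, comfortably under any plausible perceptual threshold, which matches the subjective experience of never seeing a "pop".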

...

So that's really interesting! I can instantly imagine ways to transpose this tactic into PR, and management, and politics, and finance, and other domains where the goal is explicitly to gain benefits from hurting people who might have naively and implicitly trusted you to not hurt them through deception.

I bet it will also help with the design of wildly more effective slow missiles.

...

Humans are so fucked. The future is probably going to feel like Blindsight unless our AI overlords love us and want our subjective reality to make sense despite our limitations. "Daily experience as an empathically designed UI for the disabled"?

...

Defensively speaking, (like if there even is any possible defense and we're not just totally doomed) maybe the key principle for the design of systems of defense against the likely attacks would involve archiving obsessively and running offline change detectors on exponentially larger timescales?

It reminds me a bit of Dune "shield fighting": slow on the offense, fast on the defense... but for sense-making?

This bit might be somewhat true but I think that it actually radically understates the catastrophic harms that the FDA caused.

Every week the Covid-19 vaccines were delayed, for example, cost at least four thousand lives. Pfizer sent their final Phase 3 data to the FDA on November 20th but was not approved until 3 weeks later on December 11th. There were successful Phase I/II human trials and successful primate-challenge trials 5 months earlier in July. Billions of doses of the vaccine were ordered by September. Every week, thousands of people died while the FDA waited for more information even after we were confident that the vaccine would not hurt anybody and was likely to prevent death. The extra information that the FDA waited months to get was not worth the tens of thousands of lives it cost. Scaling back the FDA’s mandatory authority to safety and ingredient testing would correct for this deadly bias.
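Taking the quoted passage's own figures at face value (this is just a sketch of its arithmetic, not independent epidemiology; every number below is an assumption lifted from that passage), the implied totals are:

```python
# Back-of-envelope check of the quoted passage's own figures.
deaths_per_week = 4_000            # "at least four thousand lives" per week

review_weeks = 3                   # Nov 20 submission -> Dec 11 authorization
review_cost = deaths_per_week * review_weeks

months_earlier = 5                 # Phase I/II + primate-challenge results in July
early_weeks = months_earlier * 52 / 12
early_cost = deaths_per_week * early_weeks

print(review_cost)        # 12000 deaths during the 3-week final review alone
print(round(early_cost))  # ~86667 if counted from the July results
```

Which is consistent with the passage's "tens of thousands of lives" claim for the months-long wait, on its own assumptions.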

Something else that the FDA regulated was covid testing. In December of 2019 there were many tests for covid in many countries. I could have made one myself, and by February of 2020 I was pricing PCR machines and considering setting up "drive through covid testing" without any regulatory oversight.

Part of my "go / nogo" calculus was that I expected to get personally financially destroyed by the FDA for totally ignoring their oversight processes, but I was imagining that either (1) being destroyed by evil would be worth the good it does or (2) people would begin to realize how evil the FDA is in general and I'd be saved by some equivalent of jury nullification.

If the FAA and CDC and other authorities relevant to ports of entry had had millions of covid tests in US airports in January of 2020 then there is a possibility that nearly all covid deaths in general would have been prevented by preventing community spread by preventing covid from even getting into the US.

One of several reasons nothing like this was even conceivably possible is that the FDA made all covid tests (except maybe 50 per day done by hand by a couple of scientists in Atlanta, Georgia) illegal all the way up to March or April of 2020 or so (they started authorizing things irregularly after the panic started, when community spread was undeniable, but not before).

The US was proven to basically entirely lack the CONCEPT of "actual public health", where "actual public health" unpacks into a centralized and strategically coherent system for preventing the entry and spread of communicable diseases in the US.

The FDA is a critical part of the prevention of actual public health for every novel disease that has come along since 1962, and everything that will come along unless they "do correct policy by hand by turning off their stupid policies every time their stupid policies become OBVIOUSLY stupid in a new emergency".

If Ebola had gotten into the US in the past, the FDA would have prevented large volumes of new tests for that too. This is a fully general problem. Until we fix it structurally, we will be at the mercy of either (1) the natural evolution of new diseases or (2) the creation of new diseases by madmen in virology labs.

The US government is catastrophically stupid-to-the-point-of-evil here. It has not banned gain of function research outside of BSL5s. It has not set up a real public health system. It systematically misregulates medicine with the goal of suppressing new medicine.

Right now the US has a godawful mix of public/private "collaboration", so that we have all the charity and kindness of capitalism, mixed with all the flexibility and efficiency of the soviet empire.

We literally don't even have a private medical industry OR a public medical system and BOTH are critical for life and health.

This "worst half of each" combo we have right now should be lit on fire and two better systems should be built on their ashes.

The existing FDA is THE KEYSTONE of this vast edifice of corrupt government-based evil. Any presidential candidate will get my vote if they promise to completely reboot the entire US medical system in the direction of (1) freedom in privatized medicine and (2) huge increases in state capacity to detect and prevent terrible new diseases so that we also have good public medicine.

The CDC should go back to being part of the military. OSHA should stop regulating medical workplaces. The NIH and the residual parts of the FDA that aren't stupid-unto-evil (and I grant that the FDA isn't literally 100% evil because nothing is 100% except in math) should be put under the CDC. The efficacy mandate of the FDA should be removed. The safety mandate of the FDA should ALSO be removed. The right way to manage safety concerns for brand new drugs is tort reform for medical malpractice. Grownups OWN THEIR OWN RISK.

There should be a real right to try for people with terrible illnesses with no known reliably safe cures, who want to roll the dice and try something new that has never been tried before. Doctors in clinical practice should be able to get a signature on a risk acceptance contract, and then do crazy new medicine, and be protected in that from lawsuits.

The time to do "FDA-like oversight of the first 20 people to try a new therapy" is not PROSPECTIVELY for literally EVERY medicine. It should be done in retrospect, when it failed, and the result was sad, and the patient thinks that the sadness was not the sort of sadness they were warned about in the contract they signed when they accepted the risks of trying something new.

The existing medical system has SO MANY bad ideas and so little coherent planning about how to do actual good that a reboot with new people in a new organizational shape is strongly indicated.

The existing FDA is THE KEYSTONE of this vast edifice of corrupt government-based evil.

FDA delenda est.


I do NOT know that "the subjective feeling of being right" is an adequate approach to purge all error.

Also, I think that hypotheses are often wrong, but they motivate new careful systematic observation, and that this "useful wrongness" is often a core part of a larger OODA loop of guessing and checking ideas in the course of learning and discovery.

My claim is that "the subjective feeling of being right" is a tool whose absence works to disqualify at least some wrongnesses as "maybe true, maybe false, but not confidently and clearly known to be true in that way that feels very very hard to get wrong".

Prime numbers fall out of simple definitions, and I know in my bones that five is prime.

There are very few things that I know with as much certainty as this, but I'm pretty sure that being vividly and reliably shown to be wrong about this would require me to rebuild my metaphysics and epistemics in radical ways. I've been wrong a lot, but the things I was wrong about were not like my mental state(s) around "5 is prime".

And in science, seeking reliable generalities about the physical world, there's another sort of qualitative difference that is similar. For example, I grew up in northern California, and I've seen so many Sequoia sempervirens that I can often "just look" and "simply know" that that is the kind of tree I'm seeing.

If I visit other biomes, the feeling of "looking at a forest and NOT knowing the names of >80% of the plants I can see" is kind of pleasantly disorienting... there is so much to learn in other biomes!

(I've only ever seen one Metasequoia glyptostroboides that was planted as a specimen at the entrance to a park, and probably can't recognize them, but my understanding is that they just don't look like a coastal redwood or even grow very well where coastal redwoods naturally grow. My confidence for Sequoiadendron giganteum is in between. There could hypothetically be a fourth kind of redwood that is rare. Or it might be that half the coastal redwoods I "very confidently recognize" are male and half are female in some weird way (or maybe 10% have even weirder polyploid status than you'd naively expect?) and I just can't see the subtle distinctions (yet)? With science and the material world, in my experience, I simply can't achieve the kind of subjective feeling of confident correctness that exists in math.)

In general, subjectively, for me, "random ass guesses" (even the ones that turn out right (but by random chance you'd expect them to mostly be wrong)) feel very very different from coherently-justified, well-understood, broadly-empirically-supported, central, contextualized, confident, "correct" conclusions because they lack a subjective feeling of "confidence".

And within domains where I (and presumably other people?) are basically confident, I claim that there's a distinct feeling which shows up in one's aversions to observation or contemplation about things at the edge of awareness. This is less reliable, and attaching the feelings to Bayesian credence levels is challenging and I don't know how to teach it, and I do it imperfectly myself...

...but (1) without subjective awareness of confidence and (2) the ability to notice aversion (or lack thereof) to tangential and potentially relevant evidence...

...I wouldn't say that epistemic progress is impossible. Helicopters, peregrine falcons, F-16s, and bees show that there are many ways to fly.

But I am saying that if I had these subjective senses of confidence and confusion lesioned from my brain, I'd expect to be, mentally, a bit like a "bee with only one wing" and not expect to be able to make very much intellectual progress. I think I'd have a lot of difficulty learning math, much less being able to tutor the parts of math I'm confident about.

(I'm not sure if I'd be able to notice the lesion or not. It is an interesting question whether or how such things are neurologically organized, and whether modular parts of the brain are "relevant to declarative/verbal/measurable epistemic performance" in coherent or redundant or complementary ways. I don't know how to lesion brains in the way I propose, and maybe it isn't even possible, except as a low resolution thought experiment?)

In summary, I don't think "feeling the subjective difference between believing something true and believing something false" is necessary or sufficient for flawless epistemology, just that it is damn useful, and not something I'd want to do without.

This bit irked me because it is inconsistent with a foundational way of checking and improving my brain that might be enough by itself to recover the whole of the art:

Being wrong feels exactly like being right.

This might be true in some specific situation where a sort of Epistemic Potemkin Village is being constructed for you with the goal of making it true... but otherwise, with high reliability, I think it is wrong.

Being confident feels very similar in both cases, but being confidently right enables you to predict things at the edge of your perceptions and keep "guessing right" and you kinda just get bored, whereas being confidently wrong feels different at the edges of your perceptions, with blindness there, or an aversion to looking, or a lack of curiosity, or a certainty that it is neither interesting nor important nor good.

If you go confidently forth in an area where you are wrong, you feel surprise over and over and over (unless something is watching your mind and creating what you expect in each place you look). If you're wrong about something, you either go there and get surprised, or "just feel" like not going there, or something is generating the thing you're exploring.

I think this is part of how it is possible to be genre-savvy. In fiction, there IS an optimization process that IS laying out a world, with surprises all queued up "as if you had been wrong about an objective world that existed by accident, with all correlations caused by accident and physics iterated over time". Once you're genre-savvy, you've learned to "see past the so-called surprises to the creative optimizing author of those surprises".

There are probably theorems lurking here (not that I've seen in wikipedia and checked for myself, but it makes sense), that sort of invert Aumann, and show that if the Author ever makes non-trivial choices, then an ideal bayesian reasoner will eventually catch on.
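To gesture at the sort of theorem I have in mind, here is a minimal Python sketch (the 70/30 "authored" bias, the coin-flip framing, and all parameter values are numbers I invented purely for illustration): a patient Bayesian reasoner watching a long enough sequence becomes confident about whether an Author was making non-trivial (biased) choices or whether blind chance was generating the data.

```python
import random

random.seed(0)

def posterior_authored(flips, p_authored_heads=0.7, prior=0.5):
    """Sequentially update the odds that a binary sequence came from
    a biased 'Author' rather than a fair (accidental) coin."""
    odds = prior / (1 - prior)
    for heads in flips:
        like_author = p_authored_heads if heads else 1 - p_authored_heads
        like_chance = 0.5
        odds *= like_author / like_chance
    return odds / (1 + odds)

# An Author making non-trivial choices (heads 70% of the time)
authored = [random.random() < 0.7 for _ in range(500)]
# Blind chance (a fair coin)
accidental = [random.random() < 0.5 for _ in range(500)]

print(posterior_authored(authored))    # near 1: the Author gets caught
print(posterior_authored(accidental))  # near 0: looks like mere physics
```

The inverse-Aumann flavor is just that any persistent deviation from "chance" accumulates log-odds roughly linearly with observations, so an Author who keeps making choices can't stay hidden from a patient Bayesian forever.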

If creationism was true, and our demiurge had done a big complicated thing, then eventually "doing physics" and "becoming theologically genre-savvy" would be the SAME thing.

This not working (and hypotheses that suppose "blind mechanism" working very well) is either evidence that (1) naive creationism is false, (2) we haven't studied physics long enough, or (3) we have a demiurge and it is a half-evil fuckhead who aims to subvert the efforts of "genre-savvy scientists" by exploiting the imperfections of our ability to update on evidence.

(A fourth hypothesis is: the "real" god (OntoGod?) is something like "math itself". Then "math" conceives of literally every universe as a logically possible data structure, including our entire spacetime and so on, often times almost by accident, like how our universe is accidentally simulated as a side effect every time anyone anywhere in the multi-verse runs Solomonoff Induction on a big enough computer. Sadly, this is basically just a new way of talking that is maybe a bit more rigorous than older ways of talking, at the cost of being unintelligible to most people. It doesn't help you predict coin flips or know the melting point of water any more precisely, so like: what's the point?)

But anyway... it all starts with "being confidently wrong feels different (out at the edges, where aversion and confusion can lurk) than being confidently right". If that were false, then we couldn't do math... but we can do math, so yay for that! <3

I've written many essays I never published, and one of the reasons for not publishing them is that they get hung up on "proving a side lemma", and one of the side lemmas I ran into was almost exactly this distinction, except I used different terminology.

"Believing that X" is a verbal construction that, in English, can (mostly) only take a sentence in place of X, and sentences (unlike noun phrases and tribes and other such entities) can always be analyzed according to a correspondence theory of truth.

So what you are referring to as "(unmarked) believing in" is what I called "believing that".

((This links naturally into philosophy of language stuff across multiple western languages...
English: I believe that he's tall.
Spanish: Creo que es alto.
German: Ich glaube, dass er groß ist.
Russian: Я верю, что он высокий.
))

In English, "Believing in Y" is a verbal construction with much much more linguistic flexibility, which lets it do what you are referring to as "(quoted) 'believing in'", I think?

With my version, I can say, in conversation, without having to invoke air quotes, or anything complicated: "I think it might be true that you believe in Thor, but I don't think you believe that Thor casts shadows when he stands in the light of the sun."

There is a subtlety of English here, because "I believe that Sherlock Holmes casts shadows when he stands in the light of the sun" is basically true for anyone who has (1) heard of Sherlock, (2) understands how sunlight works, and (3) is "believing" in a hypothetical/fictional mode of belief similar to the mode of belief we invoke when we do math, where we are still applying a correspondence theory of truth, but we are checking correspondence between ideas (rather than between an idea and our observationally grounded best guess about the operation and contents of the material world).

The way English marks "dropping out of (implicit) fictional mode" is with the word "actual".

So you say "I don't believe that Sherlock Holmes actually casts shadows when he stands in the light of the sun because I don't believe that Sherlock Holmes actually exists in the material world."

Sometimes, sloppily, this could be rendered "I don't believe that Sherlock Holmes actually casts shadows when he stands in the light of the sun because I don't actually believe in Sherlock Holmes."

(This last sentence would go best with lowbrow vocal intonation, and maybe a swear word, depending on the audience, because it's trying to say, on a protocol level: please be real with me right now, and yet also please don't fall into powertalk. (There's a whole other way of talking Venkat missed out on, which is how philosophers (and drunk commissioned officers) talk to each other.))

That is all quite reasonable!

I. Regarding the CDC

I tried to write about the CDC taking hyperpathogenic evolution due to imperfect vaccines seriously at an object level (where the CDC was the object level thing being looked at).

It kept veering into selectorate theory, first past the post voting, Solzhenitsyn, and so on. Best not to talk much about that when the OP is about dancing and voluntary association :-)

Treating imperfect diseases as the object level, and "going doubly meta", I'd point out that (1) argument screens off authority, and also (2) the best way for a group of umpires to get the right answer most reliably is for all of them to look ONLY at the object level: collecting the maximally feasible de-correlated observations using all the available eyes and then use good aggregation procedures to reach Bayesian Agreement over the totality of the observations.

Ideal umpires only give correlated answers through the intermediary of describing the same thing in the world (the actual ball landing in some actual place, and so on). This is why each additional umpire's voice means something extra, on an epistemic (rather than military/strategic) level.
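A toy simulation of the umpire point (every number here is invented by me purely for illustration): five umpires who each look independently with 80% accuracy beat five umpires who all copy one shared observation, because a majority vote only adds epistemic value when the errors are de-correlated.

```python
import random

random.seed(1)

def majority_accuracy(n_umpires, p_correct, shared_look, trials=20000):
    """Fraction of trials where a majority vote of umpires is right.
    `shared_look` = probability that all umpires defer to one single
    observation instead of each looking at the ball themselves."""
    right = 0
    for _ in range(trials):
        if random.random() < shared_look:
            # correlated: everyone repeats a single shared observation
            votes = [random.random() < p_correct] * n_umpires
        else:
            # de-correlated: each umpire looks for themselves
            votes = [random.random() < p_correct for _ in range(n_umpires)]
        right += sum(votes) > n_umpires / 2
    return right / trials

print(majority_accuracy(5, 0.8, shared_look=0.0))  # ~0.94: independent looks
print(majority_accuracy(5, 0.8, shared_look=1.0))  # ~0.80: everyone copies one look
```

Each extra independent umpire buys real accuracy; each extra deferring umpire buys nothing, which is the sense in which an additional voice only "means something extra" epistemically when it routes through the world rather than through the other umpires.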

If you want to talk politics, we can, but I think I'd rather talk "umpire to umpire", about "the thing in front of us".

(And also separately, if we get into politics, I don't think the CDC is anything like an ideal umpire, hence why I'd prefer to treat "politics" as a semantic stopsign for now. Why does the CDC say what it says? Politics. Does this answer help predict anything else about the CDC? Mostly not. Does it help keep other arguments clean and safe? Hopefully yes.)

II. Regarding Imperfect Vaccines And Imperfect Immune Operation

I think your "A" and "B" are roughly right, and a sign that I've communicated effectively and you've understood what I'm saying :-)

I think imperfect "endogenous immune responses" in one population would/should/could breed diseases that are unusually pathogenic in other populations.

The moral/deontic universalization argument against imperfect "exogenous immune responses" is just (1) it probably works the same way because biology is biology and evolution is evolution... and (2) we actually have a choice here because we can DO() a vaccine in a way that we cannot easily DO() an innate immune response programmed by our genome to happen in our bodies.

I think the logic I'm talking about is similar to the logic that explains why diseases tend to be especially virulent right after jumping from one species to the next.

It also might partly explain why a handful of endemic East Hemisphere diseases were so harmful to West Hemisphere populations during the genocides from ~1492 to ~1880.

A "maybe exceptional thing" here is that the natural immune system actually sometimes gives quite broad protection (equivalent to a perfect vaccine), as when a mild cowpox infection protects against cowpox and smallpox basically for life.

So "broad, perfect, endogenous, immune responses" exist.

If we had "broad, perfect, exogenous, immune responses", many existing pathogens might be eradicated!

It would push more pathogens into "counterfactual worlds" where they can be imagined, as what "would have happened if the infectious disease defense had not been adequate"... but they wouldn't be directly empirically observable. People would see this medical system, and they would see no diseases, and they might be confused.

There's already a bunch of diseases we don't have... like supermeasles and hyperrabies and sneeze-AIDS-herpes (which covid is kinda close to, but not as bad as, so far as I can tell), and so on... that we could hypothetically have if someone created them in a lab on purpose.

These are hard to count as "bayesian evidence" of "diseases that are only counterfactual and have, in some sense, been kept out of material reality due to no one performing the sequence of actions that would create and/or spread and/or not eradicate them".

Compared to all the hypothetically possible diseases, we've "successfully avoided" most of them! <3

If we "ban Gain-of-Function Outside BSL5s" then we could probably avoid nearly all of them forever.

We have a handful of cases of diseases at the edge of counterfactuality, like smallpox and polio and measles, which were diseases that basically didn't happen in the US back before US institutions fell into serious decline.

So those used to be "diseases that we could more easily 'count' because we used to be able to see them". Very long ago (before the germ theory of disease) they were quite common and very tragic, so we know they can exist. Then science and adequate medicine caused them to not ambiently exist to be counted. So their "absence now" is glaring when they are absent (and their return is (for measles) or would be (for worse ones) even more glaring).

In terms of why the immune system might sometimes internally do imperfect immune response already: it might just be that when it happens the species it happens to evolves to extinction, and this might be a way to use Gain-of-Function to kill all humans, if someone (like a hostile AI) wanted to do that. The modeling is very tricky. There are some known evolutionary systems (like hyperparasites) that can probably grow to a certain point and then catastrophically collapse to total extinction if there is a single well-mixed evolutionary compartment.

Also, arguably, it is "genocidally/evolutionarily correct strategy" to maintain a few "pet" diseases within your stable of "imperfect immune response diseases"? (Like earlier I mentioned "sudden oak death" being harbored by bay trees.)

With a "pet hyperpathogen" when you meet other similar animals after periods of long separation you have a decent chance to kill them without even really trying (as with the Europeans in North America), and so maybe this is a "good evolutionary strategy" even if it is wildly immoral. I don't think anyone who was all three of (1) sane, (2) reasonable, and (3) emotionally intact has ever claimed that evolution is stepwise continuously moral. It is at best "long run moral" and maybe not even that.

If my fears about the evolution of worse pathogens due to systematic exposure to imperfect vaccines are valid...

...then I guess "distant people (like future generations and people in other countries)" are just lucky right now that such a small percentage of current Americans are taking the new imperfect covid vaccines.

If my fears are right, then if we took imperfect vaccines very reliably across nearly the whole population, that might hurt distant people by making them either have to take the vaccine as well, or else suffering greatly.

But contrariwise, if my fears about the evolution of more pathogenic strains due to imperfect vaccines are not how things actually would or do or are working (which could be scientifically true as far as I know) then the low level of "personally effective even if imperfect" vaccine uptake is a minor tragedy. We're leaving health on the table for no reason, if that's the world we live in.

All my arguments here boil down to "if it hurts we shouldn't do it, but if it helps then we should do it, and I'm not sure which situation we're actually in, but almost no one is even looking at it very hard".

Knowing which thing is actually true, and convincing lots of people to believe the actual truth, has high aggregate Value of Information (VoI).

Millions of lives and lots of ill health are at stake considering the breadth and depth of time and space.

Answering this question properly is the sort of thing that a competent benevolent philosopher with a decent budget for important empirical efforts "would be interested in being able to do".

The ethics of it would be a little weird. The highest quality evidence would probably involve doing "random assignment challenge trials" on whole human societies, where isolated societies that want to ban imperfect vaccines "just in case" are randomly forced to use them anyway, to satisfy a scientific curiosity about whether that random assignment reliably makes their ambient diseases more harmful to people who haven't taken the imperfect vaccine yet.

With Marek's Disease we can just do this for chickens, since chicken death and illness isn't nearly as morally important as human death and illness. Like: we already torture chickens to death for the sake of Chicken McNuggets, and scientific truth about important questions is much more important than Chicken McNuggets, so I tentatively think it would be ethically OK to do that kind of research in the current wildly-non-utopian situation?

But my understanding is that we've already done that research, and it says "yeah, imperfect vaccines promote the evolution of diseases that are more virulent in the non-vaccinated, in chickens, with this one disease".

Maybe we should kill a lot more chickens with another disease?

Or kill a lot of ferrets with another disease? Or something?

To "prove it more broadly, and more generally, with slightly more data"?

Except I think that most humans simply don't have the patience to think about this stuff, and they won't understand or care about "why one particular vaccine might be net good but some other particular vaccine might be net bad based on <complex evidence and arguments>".

My current working model is that it is just "reasonably inferrable to anyone with the patience and interest in looking at the data and thinking properly" that taking an imperfect covid vaccine is not something a good Kantian would do, because universalizing the behavior among all people able to follow moral maxims (which includes all humans, right?) would be a net negative overall...

But also my current working model says that almost no one cares or wants to think about it very much, especially since the existing levels of imperfect vaccine uptake are already pretty low (quite a bit less than 50%), and therefore less likely to cause the evolutionary effects at the sociologically observed levels of default behavior.

So maybe we can use imperfect vaccines to protect the 5% of people who are most vulnerable, and just watch out for pathogenicity levels in the non-vaccinated, and then ban the imperfect vaccine based on live data? Or something?

Performing medical self-experiments is kind of heroic <3

This is an idea that feels "really really important if true" but that I'm not actually certain about and often bounce off of. Pushing on it a little more, this paper on Marek's Disease from 2015 sketches a theory of "hotness".

Hotness is a hypothetical "conflation of transmissibility and pathogenicity" that might sometimes occur as a spandrel at first, which then is found to be useful by some evolutionary systems, which optimize the spandrel "on purpose".

You could imagine a disease which has one "hotness level" with no vaccines at all (H0?), and a different "hotness level" (H1) in patients with an imperfect vaccine.

With no background knowledge at all H0 > H1 could be true on average regarding viruses (and that is consistent with the idea that vaccines are DESIGNED to help the patient by reducing pathogenicity from a patient-centric perspective).

However, we expect some amount of "hotness" might contribute (from a virus-centric perspective) to "transmissibility" as well... if your nose became so runny you die of dehydration before transmitting that would be "too hot" from a virus centric perspective, but if your nose is not runny at all in any way then maybe the virus isn't causing the host to shed as many viral particles as would maximize the total number of downstream infections.

The thing I'd suggest is that maybe "we as a collective herd" are LUCKY when only 20% of the population is defecting on the strategy that would tame any given virus?

Here's a hypothetical bad path, that probably only kicks in if almost everyone takes these imperfect vaccines, sketched as a possible future:

On step ZERO the first imperfect vaccine is deployed against a naive pathogen, with 60% uptake. H1_0 is kinder to the patient at first (and a reason to buy and take the vaccine, selfishly, for each patient) but H0_0 is tolerable and not (yet) a strong downside reason to take the vaccine to avoid the symptoms...

But then on step ONE the disease, which already had an optimized hotness level (and since 60% are imperfectly vaccinated that's the central case to optimize for), performs some evolutionary cycles so that H1_1 goes up to a higher (closer to optimal) level of hotness... a higher level of BOTH pathogenicity AND transmissibility. What happens to H0_1 is harder to say. It happens more "by accident" than "due to viral evolution". 

On step TWO, humans react by deploying a new imperfect vaccine to lower (pathogenic) hotness in newly vaccinated humans to H1_2. Just as before.

On step THREE the virus reacts by evolving to put H1_3 back up, to the level of hotness it prefers, with uncertain effects on H0_3, but in the battle between humans and viruses it seems like maybe a red queen race between science and evolution, and there's one thing NOT racing here: the naive immune system of naive humans.

On all subsequent even steps "science learns", and lowers "H1" (leaving H0 unconsidered), and if this leads to H0 becoming a large burden that causes more humans (reacting to avoid serious pain) to buy the vaccine, that is actually a nice thing from the perspective of the profit-seeking scientists: their market penetration is getting bigger!

On all subsequent odd steps "the virus learns" and raises "H1" again (not worrying too much about keeping H0 also close to the ideal hotness if the unvaccinated are very very rare, and so in general this could end up almost anywhere because it isn't being optimized by anyone or anything)?
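The even/odd steps above can be sketched as a toy simulation (every parameter is a number I made up purely for illustration, not a calibrated epidemiological model): science repeatedly pulls H1 down, viral evolution pulls it back toward its preferred optimum, and H0 just drifts as an unoptimized side effect.

```python
import random

random.seed(2)

H_OPT = 1.0           # hotness level the virus "prefers" (assumed)
VACCINE_EFFECT = 0.5  # fraction of H1 removed by each new imperfect vaccine
EVOLVE_RATE = 0.6     # fraction of the gap to H_OPT closed per viral step

h1 = h0 = 0.8         # naive pathogen: same hotness in everyone at first
history = []
for step in range(20):
    if step % 2 == 0:
        h1 *= 1 - VACCINE_EFFECT          # even step: science lowers H1
    else:
        h1 += EVOLVE_RATE * (H_OPT - h1)  # odd step: virus raises H1 back
        h0 += random.uniform(-0.1, 0.2)   # H0 drifts, optimized by no one
    history.append((round(h1, 2), round(h0, 2)))

print(history)  # H1 settles into a stable oscillation; H0 just wanders
```

The point of the sketch is only that H1 gets caught in a stable tug-of-war while nothing at all is steering H0, which is the scary part for the unvaccinated.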

(((Note that this model might be a BAD prediction of the future. It might be mechanistically false! The reason to think it might be false is a sort of "tails come apart" or "Goodhart's law" issue, where we treat "hotness" as the only thing that exists (subsuming both pathogenicity and transmissibility), so that scientists vs evolution cause this one variable to go up and down over and over... but if the virus and the scientists could each ask more specifically for exactly what they want, then the virus could get very very high transmissibility and the scientists could get very very low pathogenicity and they'd both be naively happy. However... this ignores the third party... the patients who the for-profit medical scientists are trying to extract payments from.)))

So overall, over time perhaps we see:

The virus becomes unconcerned if the 0.5% of patients who lack an imperfect vaccine die from H0 being very very hot, and the for-profit private medical scientists become happy if H0 gets very very hot and kills anyone who doesn't buy their product. And the biology suggests that this might be a stable bioeconomic Red Queen Race... depending on how H0 fluctuates in (a loosely correlated?) response to the dynamic tensions to iteratively raise and lower H1.

A pattern similar to this sometimes "arises for some period of time" within normal evolution (without any intervention by scientists). For example, bay trees have unimportant symptoms when infected with sudden oak death, whereas oak trees are killed by the pathogen.

Bay trees thus have an evolutionary incentive to maintain their infections, which clear the area around them of competing trees, giving them access to the sunlight. Oak trees have an incentive to react to this combined attack, but if they don't learn to ALSO resist the sudden oak death pathogen very quickly they might simply be removed from the game board.

In this analogy, those who take imperfect vaccines would be like the bay trees, and the transition from "mostly oak forests" to "mostly bay forests" would be like what the vaccine-making for-profit scientists would want to cause, to maximize vaccine purchasing among the aggregate "herds of customer/victims" when they sell their products to individuals rather than selling to coordinated centralized (elected?) herd managers.

Something in my soul resonates with the idea of "doing what a benevolent herd manager would tell me to do" if any benevolent herd managers existed.

Since no benevolent-and-competent herd managers exist in the modern world, this is perhaps a silly yearning for me to have, yet I still think about it anyway, because I am a fool.

Separately, I'm not actually sure of the science here. Maybe "hotness" isn't a useful way to think about the relationship between pathogenicity and transmissibility and/or maybe H0 stays reasonably low no matter what, even when there's almost no optimization pressure on it?
