Does that scheme get around the contradiction? I guess it might if you somehow manage to get it into the utility function, but that seems a little fraught / you're weakening the connection to the base incorrigible agent. (The thing that's nice about 5, according to me, is that you do actually care about performing well as well as being corrigible; if you set your standard as being a corrigible agent and only making corrigible subagents, then you might worry that your best bet is being a rock.)
I'm not familiar enough with material science or any of that to make an intelligent call but does it seem like a logical progression or on inspection does it actually raise questions about recovered UFO technology?
I've studied this history moderately closely; I would describe it as a logical progression, and not like a jump from acquiring alien technology.
I hesitate to write this comment, because I feel like it should be predictable, but I guess I will anyway.
It sounds like you're maybe making the point that science, as a human endeavor, involves people sharing facts with each other about the natural world. That is sideways from the frame of the parable, which is about the scientific method from an epistemological perspective, so I think you're missing the point.
The truth, like the fountain of youth, is where it is, and can't be moved. If you start off thinking "gravity on Earth's surface accelerates weights downwards by 8 m/s/s", what doing science will do is point you towards instead thinking "gravity on Earth's surface accelerates weights downwards by 9.8 m/s/s". When you're 'there,' or believing true things, you get lots of benefits in terms of accurate predictions; as soon as you leave the mountain (i.e. swap out 9.8 for some other number), you can't take the ability to make accurate predictions with you--using the wrong inputs gives you the wrong outputs.
That said, there's a perhaps deeper philosophical disagreement here. I think that "learning" is much more real than "teaching". [Teachers construct learning environments for students, but as the saying goes, you can lead a horse to water, but you can't make him drink.]
And further, most "science" classrooms are really classrooms for teaching consensus facts about the natural world, not for doing "science." The facts of the periodic table are quite different from the methodology by which the periodic table was discovered. Someone who merely memorizes known facts, and doesn't touch the apparatus by which facts come to be known, is not partaking of the fountain of youth.
It strikes me as conspiratorial thinking that the intel community would disclose its own equipment to the world, with such a preposterous explanation, as a sort of "feint".
So, I think "conspiratorial thinking" is a weird thing to say here. The existence of a conspiracy is not in doubt, and their willingness to lie to the public shouldn't be either. If you're not willing to engage with conspiratorial thinking when considering a literal conspiracy, how are you going to track reality?
That said, is this a tactical or strategic error, and thus unlikely? Sure, that seems like a plausible position to hold, but then it's at "mistake" levels of plausibility instead of "impossible" levels of plausibility.
Sure, but suppose you have a flying saucer that you would like to be able to use for some missions. If you release a fragment of the flying saucer and say "it's aliens guys", this maybe means that when other people see a flying saucer later they don't know it's you.
[Or the part of your organization that found the flying saucer fragments, which isn't cleared to know about the flying saucer, releases it to the public with "WTF is this?" which the part which is cleared to know about them is barred from responding to, and part A didn't know about the existence of part B to clear it with them first.]
Is it the separation between body and mind?
I believe yes, tho I think Vervaeke upgrades it to something a bit more extreme: a separation between mind and reality. (Your only experience of reality, including the other people in it, is mediated by your body.)
PS. I think you missed a couple words here:
Fixed.
But that is not what this post said, which makes it look like you might think there's a deeper ex-ante problem with being vegan, and I think it's very unclear that there is.
I read the post not as claiming that veganism has deeper ex ante problems, but that responsible vegan advocacy should include the PSA you mention (or people should convince Elizabeth that the PSA is not actually necessary).
Science is, poetically speaking, about changing your inner state to reflect the outer state; the same thing as "going to the truth" instead of "bringing the truth to you."
I feel like this is understating things a bit.
In my view (Drexler probably disagrees?), there are two important parts of CAIS:
I think a 'foundation model' world probably wrecks both. I think they might be recoverable--and your post goes some of the way to making that visualizable to me--but it still doesn't seem like the default outcome.
[In particular, I like the point that models with broad world models can still have narrow responsibilities, and I think that likely makes them safer, at least in the medium term. Having one global moral/law-abiding foundational AI model that many people then slot into their organizations seems way better than everyone training whatever AI model they need for their use case.]