A lot of the nonprofit boards that I've seen use a "consent agenda" to manage the meeting. The way it works is: routine, noncontroversial items (approving minutes, accepting standard reports, and so on) are bundled into a single agenda item and approved in one vote without discussion, and any board member can pull an item out of the bundle for separate discussion and a separate vote.
It doesn't do much for governance directly, but fewer time-wasting consent votes can make room for more discussion of issues that matter.
In the US, parties still aren't recognized by the Constitution: formally, every election is a choice among all of the people who qualify for the ballot for that office. In practice, groups of like-minded politicians emerged soon after the founding, and over time these coalesced into our major parties.
It's not uncommon for an American candidate to run as an independent (i.e. not affiliated with a party), although they hardly ever win.
To the extent that I understand what you're saying, you seem to be arguing for curiosity as a means of developing a detailed, mechanistic ("gears-level," as you put it) model of reality. I totally support this, especially for the smart kids. I'm just trying to balance it out with some realism and humility. I've known too many people who know that their own area of expertise is incredibly complicated but assume that everything they don't understand is much simpler. In my experience, a lot of projects fail because a problem that was assumed to be simple turned out not to be.
I get your point, and I totally agree that answering a child's questions can help the kid connect the dots while maintaining the kid's curiosity. As a pedagogical tool, questions are great.
Having said that, most people's knowledge of most everything outside their specialties is shallow and brittle. The plastic in my toothbrush is probably the subject of more than 10 Ph.D. dissertations, and the forming processes of another 20. This computer I'm typing on is probably north of 10,000. I personally know a fair amount about how the silicon crystals are grown and refined, have a basic understanding of how the chips are fabricated (I've done some fabrication myself), know very little about the packaging, assembly, or software, and know how to use the end product at a decent level. I suspect that worldwide my overall knowledge of computers might be in the top 1% (of some hypothetical reasonable measure). I know very little about medicine, agriculture, nuclear physics, meteorology, or any of a thousand other fields.
Realistically, a very smart* person can learn anything but not everything (or even 1% of everything). They can learn anything given enough time, but literally nobody is given enough time. In practice, we have to take a lot of things on faith, and any reasonable education system will have to work within this limit. Ideally, it would also teach kids that experts in other fields are often right even when it would take them several years to learn why.
*There are also average people who can learn anything that isn't too complicated and below-average people who can't learn all that much. Don't blame me; I didn't do it.
Being honest, for nearly all people, nearly all of the time, questioning firmly established ideas is a waste of time at best. If you show a child, say, the periodic table (common versions of which have hundreds of facts), the probability that the child's questioning will lead to a significant new discovery is less than 1 in a billion* and the probability that it will lead to a useless distraction approaches 100%. There are large bodies of highly reliable knowledge in the world, and it takes intelligent people many years to understand them well enough to ask the questions that might actually drive progress. And when people who are less intelligent, less knowledgeable, and/or more prone to motivated reasoning are asking the questions, you can get flat earthers, Qanon, etc.
*Based on the guess that we've taught the periodic table to at least a billion kids and it's never happened yet.
I think a better way to look at it is that frequentist reasoning is appropriate in certain situations and Bayesian reasoning is appropriate in other situations. Very roughly, frequentist reasoning works well for descriptive statistics and Bayesian reasoning works well for inferential statistics. I believe that Bayesian reasoning is appropriate to use in certain kinds of cases with a probability of (1-delta), where 1 represents the probability of something that has been rationally proven to my satisfaction and delta represents the (hopefully small) probability that I am deluded.
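To make the descriptive-vs-inferential distinction concrete, here's a minimal sketch using a coin-flip example of my own (the numbers are made up for illustration, not from the comment above): the frequentist summary just reports the observed frequency, while the Bayesian update conditions a prior on the same data.

```python
# Made-up data: 7 heads in 10 coin flips.
heads, flips = 7, 10

# Frequentist (descriptive) summary: the observed frequency is the estimate.
freq_estimate = heads / flips  # 0.7

# Bayesian (inferential) update: start from a uniform Beta(1, 1) prior over
# the coin's bias and condition on the data. The posterior is
# Beta(1 + heads, 1 + tails), whose mean pulls the raw frequency slightly
# back toward the prior mean of 0.5.
alpha = 1 + heads
beta = 1 + (flips - heads)
bayes_estimate = alpha / (alpha + beta)  # 8/12, about 0.667

print(freq_estimate, bayes_estimate)
```

With only 10 flips the two answers differ visibly; with thousands of flips the posterior mean converges toward the observed frequency, which is one way of seeing why the choice of framework matters most in small-data, inferential situations.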
Wars are an especially nasty type of crisis because there's an enemy. That enemy will probably attempt to use your software for its own ends. In the case of your refugee heatmap idea, given that the Russians are already massacring civilians, that might look like a Russian artillery commander using it to deliberately target refugees. Alternately, they might target incoming buses to prevent the refugees from getting out of the Ukrainian military's way and make the Ukrainians spend essential resources on feeding and protecting them.
Does the Russian military even have the tech dependencies that would make them vulnerable to cyber attacks? I think they're pretty analog.
I spent about 20 years in academic and industrial research, and my firm belief is that almost nobody spends nearly enough time in the library. There have been hundreds of thousands of scientists before you; it is overwhelmingly likely that your hot new idea has been tried before. The hard part is finding it; science is made up of thousands of tiny communities that rarely talk to each other and use divergent terminology. But if you do the digging, you may find a paper from Egypt in 1983 that describes exactly why your project isn't working (real example). Finding that paper two weeks into the project is much better than finding it five years later.
The US has at least 16 intelligence agencies, but we still went into Iraq.
Oddly, it's probably easier for Putin to get credible information about Ukraine's military than about his own. Fewer people have an interest in lying to him about Ukraine.