The next post is Secular interpretations of core perennialist claims. Zhukeepa should edit the main text to explicitly link to it rather than just mentioning that it exists. (Or people could upvote this comment so it's at the top. I don't object to more good karma.)
I think you're missing a few parts. The Autofac (as specified) cannot reproduce the chips and circuit boards required for the AI, the cameras' lenses and sensors, or the robot's sensors and motor controllers. I don't think this is an insurmountable hurdle: a low-tech (not cutting-edge) set of chips and discrete components would serve well enough for a stationary computer. Similarly, high-resolution sensors are not required: take it slow and trade physical resolution for temporal resolution and repeated samples, as in the sketch below.
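To make the resolution-for-time trade concrete, here is a minimal sketch (entirely my own illustration with made-up numbers, not anything from the Autofac spec): a single noisy, coarse detector is stepped across a scene, and repeated readings at each position are averaged, so spatial detail comes from the scanning step size and the noise is beaten down by the sample count.

```python
# Illustrative sketch only: trade sensor resolution for time by scanning one
# crude, noisy detector across a scene and averaging repeated readings.
import numpy as np

rng = np.random.default_rng(0)

# A "true" 1-D scene the cheap hardware is trying to image.
scene = np.sin(np.linspace(0, 8 * np.pi, 400)) + 1.0

def read_detector(position, aperture=4, noise_sigma=0.3):
    """One noisy reading: average intensity over a small aperture at `position`."""
    window = scene[position:position + aperture]
    return window.mean() + rng.normal(0.0, noise_sigma)

def scan(step=1, samples_per_position=25):
    """Build an image by stepping the detector and averaging repeated samples.
    Spatial resolution comes from the step size; noise shrinks roughly as
    1/sqrt(samples_per_position)."""
    positions = range(0, len(scene) - 4, step)
    return np.array([
        np.mean([read_detector(p) for _ in range(samples_per_position)])
        for p in positions
    ])

image = scan()  # slow, but fine detail from a crude, noisy sensor
print(image.shape, image[:5].round(2))
```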
Second, the reproduced Autofacs should be built on movable platforms so different groups can get their own. (Someone comes with a truck and a few forklifts, lifts the platform onto the truck, and drives the Autofac to the new location.)
In large enough cases, changing the legal system is a way to make the debtor/lender "disappear." Ownership and debt are both based on society-level agreement.
The "current leader is also the founder" is a reasonable characteristic common in cults. Many cult-like religious organizations exist to create power or wealth for the founder or the founder's associates.
However, I suspect that the underlying scoring function is a simple additive model (widespread in psychology) in which each answer contributes a fixed weight toward one of the outcomes. Since this characteristic is most valuable in combination (it intensifies other factors that indicate cultishness), it doesn't serve very well in that framework.
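To make the distinction concrete, here is a toy sketch (my guess at the style of scoring, not the survey's actual code; all question names and weights are invented) contrasting a purely additive score, where "founder leads" adds a flat amount, with an interaction-style score where it amplifies the other indicators.

```python
# Illustrative sketch only: additive scoring vs. an interaction term.

def additive_score(answers, weights):
    """answers: question -> 0/1; weights: question -> points. Each answer
    contributes a fixed amount regardless of the other answers."""
    return sum(weights[q] * a for q, a in answers.items())

def score_with_interaction(answers, weights, amplifier=1.5):
    """Same questions, but 'founder_leads' multiplies the other indicators
    instead of adding a flat point on its own."""
    base = sum(weights[q] * a for q, a in answers.items() if q != "founder_leads")
    return base * (amplifier if answers.get("founder_leads") else 1.0)

weights = {"isolates_members": 3, "financial_pressure": 2, "founder_leads": 1}
answers = {"isolates_members": 1, "financial_pressure": 1, "founder_leads": 1}

print(additive_score(answers, weights))         # 6: founder adds a flat point
print(score_with_interaction(answers, weights)) # 7.5: founder amplifies the rest
```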
You may want to mention in the first question, the one asking for an initial cultishness estimate, that respondents will get to revise that estimate after seeing the rest of the questions. I discarded and restarted the survey halfway through because I realized your definition was far removed from my initial one. If I'd known I could re-estimate at the end, you'd have one more data point. (For reference, my initial number was 25%, which I dropped to 4% on the re-run; the final score ended up at 3%.)
I shy away from fuzzy logic because I used it as a formalism to justify my religious beliefs. (In particular, "Possibilistic Logic" allowed me to appear honest to myself—and I'm not sure how much of it was self-deception and how much was just being wrong.)
The critical moment in my deconversion came when I realized that if I was looking for truth, I should reason according to the probabilities of the statements I was evaluating. Thirty minutes later, I had gone from a convinced Christian speaking to others, leading in my local church, and basing my life and career on my beliefs to an atheist who was primarily uncertain about atheism because of self-distrust.
Grounding my beliefs in falsifiable statements and probabilistic-ish models has been a beneficial discipline that forces me to recognize my limits and helps predict the outcomes of my actions. I don't know if I could do the same with fuzzy logic and "reasoning by model."