TLDR: First you go with your gut, then you get a logical model, then you improve that model. Trusting your logical model over your gut before it gets good enough is a very common way to believe wrong things.
[epistemic status: probably approximately true, with possible pathological cases around the edges]
The process of getting better at describing and predicting things seems to usually go something like this:
First, you start out with an intuitive model which nature gives you automatically without any effort on your part. This model uses the language of system 1, and is a black box whose contents are unknown to you.
Then, you develop a weak analytical model in the language of your system 2. Your first try at making an analytical model is usually worse at describing and predicting things than your intuitive model, which is why I'm calling it "weak".
Finally, after incrementally improving your analytical model over some period of time, you end up with a strong analytical model, a system that surpasses your intuition.
Analytical models are good because they are easier to improve than intuitive ones. For example, it is hard to convince your system 1 that getting a vaccine shot is a good idea, but your system 2 can improve its understanding of the world to the point where it understands that getting the shot is worth it.
Analytical models are also nice because you can see how their parts work, which makes it easier to apply lessons learned in one area to another problem.
A model will do better in some situations than others, so whether you should use your intuitive one or your analytical one depends on the situation. The process of figuring out where a given model does well or poorly is beyond the scope of this post.
All analytical models are ultimately composed of intuitive models. Maybe you start with an intuitive understanding of what bleggs and rubes are, but then quickly come up with the analytical model that says bleggs are "objects that are round and blue", while rubes are "objects that are cubical and red". This model doesn't analytically define what "round", "cubical", "red", and "blue" mean yet! Those are defined intuitively to start. But when you go back and define, say, what "blue" means in terms of light and human eyes, you have to define light and eyes intuitively. And so on and so on.
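To make this concrete, here's a minimal sketch in Python (all names and the dict representation are invented for illustration): the classification rule is the analytical layer, explicit and inspectable, while the predicates it calls stand in for intuitive primitives that have no analytical definition yet.

```python
# Intuitive primitives: black boxes as far as the analytical model is
# concerned. Here they are stubbed out with trivial lookups, but the
# analytical layer doesn't depend on how they work inside.

def looks_round(obj):
    return obj.get("shape") == "round"

def looks_blue(obj):
    return obj.get("color") == "blue"

def looks_cubical(obj):
    return obj.get("shape") == "cube"

def looks_red(obj):
    return obj.get("color") == "red"

def classify(obj):
    """Analytical layer: explicit rules composed of intuitive primitives."""
    if looks_round(obj) and looks_blue(obj):
        return "blegg"
    if looks_cubical(obj) and looks_red(obj):
        return "rube"
    return "unknown"

print(classify({"shape": "round", "color": "blue"}))  # blegg
```

Improving the model means replacing one of the stubbed primitives with another explicit rule, pushing the intuitive layer one level deeper, exactly as in the "blue" example above.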
In general, the more you improve your model, the deeper that model becomes. This is because the universe happens to be really complicated and you need detail to cover all the nuances.
Some people tend to trust their intuitions more, while others trust logic more. In the short run, intuitive people are better modelers, because competing with nature-given models is hard. In the long run, analytical people are better modelers, because they can keep improving over time while intuitive people mostly can't.
The Weak Model Trap
The big trap that people who are inclined to be analytical are likelier to fall into is trusting their analytical models before those models have become mature enough.
For example, I've seen a physics student, upon learning that "an object in motion remains in motion unless acted upon by an outside force", predict that a ball rolled around the inside of a pie tin with a quarter of the rim cut away would float along a curved path through the gap, continuing in a circle. The student rejected their gut feeling that the ball would fall out of the gap because they favored their mistaken model of physics. (The gut wins here: with no rim to push it inward, the ball exits the gap in a straight line along the tangent.)
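As a toy illustration (a made-up setup, not real measurements), here's what the correct physics predicts: circular motion requires a continuous inward push from the rim, so once the rim ends no force acts and the ball's velocity stops changing, carrying it straight along the tangent.

```python
import math

def trajectory(radius=1.0, speed=1.0, gap_start=1.5 * math.pi,
               dt=0.01, steps=600):
    """Toy 2D trajectory of a ball rolling inside a rim with a gap."""
    points = []
    theta = 0.0
    # Phase 1: rim present. The rim supplies the inward force, so the
    # ball is constrained to the circle.
    while theta < gap_start:
        points.append((radius * math.cos(theta), radius * math.sin(theta)))
        theta += speed / radius * dt
    # Phase 2: rim gone. No force acts, so velocity is constant and the
    # ball travels in a straight line at the tangent direction.
    x = radius * math.cos(gap_start)
    y = radius * math.sin(gap_start)
    vx = -speed * math.sin(gap_start)
    vy = speed * math.cos(gap_start)
    for _ in range(steps):
        points.append((x, y))
        x += vx * dt
        y += vy * dt
    return points

path = trajectory()
# The final point lies well outside the circle: the ball left in a
# straight line rather than floating around the curve.
```

The student's mistaken model corresponds to running Phase 1 forever; the weak analytical model kept the constraint after the thing enforcing it was gone.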
This "weak model trap" seems especially common when trying to understand human values. Adherents of naive utilitarianism seem to be victims of the trap. Likewise, the argument that death isn't bad because you aren't around to experience it is very clever, but it fundamentally misses the point in a way that your gut instinct saying "death is bad" does not.
I can't recommend indiscriminately that you listen to your gut more often. But I do think people would do better to be more aware of what they're doing when they go around biting bullets for their analytical models. I hope this model of models makes you pause and reconsider before going against your instincts, so you might be less likely to trust a bad model.
(This is a heavily revised version of this Tumblr post of mine: paradigm-adrift.tumblr.com/post/163145257740/paradigm-adrift-it-seems-like-theres-this)