Does it still work if the user makes a natural-language request that convinces the AI to make an API call that results in its own shutdown?
There is a multitude of hypotheses that have been wrong until now. Is there a different anti-Occamian prior that favors each one?
Your reason for picking this particular day has never been tested as right or wrong, since the morning hasn't arrived yet.
Also, why should the sun be simply absent tomorrow, rather than purple, or duplicated, or square? None of those has ever happened either.
I have also wondered this. Saying "The sun probably won't rise tomorrow because it's risen every day until now" can't just rest on "my anti-Occam prior has been wrong every day until now, so it's bound to be right today". It also has to explain why this particular day is the one where the pattern breaks.
It's probably cheaper to pay larger families to have a marginal child than to pay smaller ones. I expect the effect would be especially strong at the 0-to-1-child threshold.
You can have as many as 11 in a row. Buffalo[-residing] buffalo [animals who] Buffalo[-residing] buffalo [animals] buffalo [do themselves] buffalo Buffalo[-residing] buffalo [animals who] Buffalo[-residing] buffalo [animals] buffalo.
2:25 with hyperventilation and resting.