Interested in math, Game Theory, etc.
Instead of using an equation, a simulation using multiple agents could be employed.
But speedrunning is about sacrificing other measures to optimize for time, so what did I give up?
The other thing you can do is invest time in figuring out how to do things faster. For instance, what can be done to the same high-quality standard, but faster through better form (even without putting your body in speed/stress mode)?
Can you learn to braid your hair faster (but in a way that doesn't cost you time later*)?
Do you have to rush getting dressed to pull this off:
So, I first went to the kitchen and turned on the kettle for coffee. Normally, I do this after I get dressed, but this way I could do both at once. Time saved: 5 minutes.
*Optimizing for time in a way that doesn't sacrifice time elsewhere: this shifts the focus away from raw speed and towards total time, which might undo some of speedrunning's negative effects. (I've already made a trade-off by not learning to do my hair well but faster. Still, I think it can be done, and maybe it's worth it.)
Michael Huemer also tries to affect society without a precise and detailed understanding?
3. I created a search engine for probabilities.
Thank you, this is amazing!
Yes. Though even if something is solved, it can be solved again in a different way.
Sometimes people have responsibilities, and can be charged for failing to carry them out, say with 'gross negligence', i.e. a 'lack of slight diligence or care'.
To lay my cards on the table, I’m basically a utilitarian. I think we should maximize happiness and minimize suffering, and frankly am shocked that anyone takes Kant seriously.
Kant's categorical imperative is weird, and seems distantly related to some of the reasoning you employ:
And, perhaps, some of these folks grow up to be bioethicists who advocate against COVID vaccine challenge trials, despite their enormous positive expected utility.
Overall I'd say...
Goals*, Knowledge -> Intentions*/Plans -> Actions
*I'm using "goals" here instead of the OP's "intentions" to refer to the desire for puppies not to suffer that exists prior to seeing the puppy suffering.
Why do consequences matter? Because our models don't always work. What do we do about it? Fix our models, probably.
In this framework, consequences 'should' 'backpropagate' back into intentions or ethics. If they don't, then maybe something isn't working right.
It's the dream of covering every topic with a minimal web of posts that only grows, without overlap? (The dream of non-redundant scholarship.)
Or SA doesn't think 'the problem of the criterion' is motivated by a purpose (or SA's purpose)?
What if it is impossible to believe the truth?
How could believing in coherentism cease to be coherent?