Just like with renaming NIPS to NeurIPS, this is wokeness gone wild.
As Dagon said, learning empathy and humility is always a good idea. You don't have to believe your teacher or condone their views or practices, but that's a different issue.
Notice that your teachers are actually rational, if you define rationality as success in life. Believing or at least declaring to believe something you disagree with did not hinder their ability to get the job they want and teach the classes they want. They might not do as well in a science or engineering department, but that is not where they work.
You are stuck considering two options, missing many others. You think that they are wrong AND irresponsible AND harmful, period, and that you can either try to fix it or ignore it. Ironically, that is where your own failure lies: you can't even consider that their views may actually work for them, and for other students. Art is not science, life is not logic, and rationality is not the pursuit of the one truth.
Should I just shut up and focus on graduating? Or would it be unethical of me to just stand by while hundreds are taught to shut off their reasoning skills?
Consider learning empathy (understanding where others come from, why and how). Consider learning humility (accepting that your view might not be the only one worth holding). Consider learning other approaches to life, not necessarily just those based on pure logic. If you manage, you might be surprised by your own personal growth as a human.
Was trying to explain, but it looks like I screwed something up in the reformulation :)
But coin needs to depend on your prediction instead of being always biased a particular way.
I don't see why, where would the isomorphism break?
I am confused about the iterated Parfit's hitchhiker setup. In the one-shot PH the agent not only never gets to safety, they die in the desert. So you must have something else in mind. Presumably they can survive the desert in some way (how?) at the cost of lower utility. Realizing that precommitting to not paying results in suboptimal outcomes, the agent would, well, "explore" other options, including precommitting to paying.
If my understanding is correct, then this game is isomorphic to betting on a coin toss:
You are told that you win $1000 if you bet on heads and the coin lands heads, and $1 if you bet on tails and it lands tails (nothing otherwise). What you do not know is that the coin is 100% biased and always lands tails.
In that less esoteric setup you would initially bet on heads, but after a few losses you would realize that the coin is biased and adjust your bet accordingly.
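As a toy simulation of that adjustment (the function names, payoffs, and the three-loss switching threshold are my own illustrative choices, not anything from the original setup): an agent starts by betting on the high-payoff side and, after a run of losses, concludes the coin is biased and switches.

```python
import random

def play(bet_heads, coin_heads_prob=0.0):
    # The coin is secretly 100% biased toward tails (heads probability 0).
    heads = random.random() < coin_heads_prob
    if heads:
        return 1000 if bet_heads else 0   # win big only if you bet heads and it lands heads
    return 1 if not bet_heads else 0      # win $1 only if you bet tails and it lands tails

def agent(rounds=20, patience=3):
    bet_heads = True   # start by betting on the $1000 outcome
    total = 0
    losses = 0
    for _ in range(rounds):
        payoff = play(bet_heads)
        total += payoff
        losses = losses + 1 if payoff == 0 else 0
        if losses >= patience:            # after a few losses, infer the bias and switch
            bet_heads = False
    return total
```

With a coin that always lands tails, the agent loses the first few rounds betting heads, then collects $1 per round for the rest, mirroring the hitchhiker who "explores" their way to precommitting to paying.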
Universities are the Easter Island statues.
making sandwiches is a task relatively similar to tasks we had to deal with in the ancestral environment, and in particular there is not a lot of serial depth to the know-how of making sandwiches. If we pluck a person from virtually any culture in any period of history, then it won't be difficult to explain em how to make a sandwich. On the other hand, in the case of AI risk, just understanding the question requires a lot of background knowledge that was built over generations and requires years of study to properly grasp.
If I understand your argument correctly, it implies that dealing with agents that evolve from simpler than you to smarter than you within a few lifetimes ("foom") is not a task that was ever present, or at least not one successfully accomplished by your evolutionary ancestors, and hence not incorporated into the intuitive part of the brain. Unlike, say, the task of throwing a rock with the aim of hitting something, which has been internalized and eventually resulted in the NBA, with all the required nonlinear differential equations solved by the brain in real time accurately enough, for some of us more so than for others.
Similarly, approximate basic counting is something humans and other animals have done for millions of years, while, say, accurate long division was never evolutionarily important and so requires engaging the conscious parts of the brain just to understand the question ("why do we need all these extra digits and what do they mean?"), even though it is technically much much simpler than calculating the way one's hand must move in order to throw a ball on just the right trajectory.
If this is your argument, then I agree with it (and made a similar one here before numerous times).
Being stuck in a local minimum or in a long shallow valley happens in optimization problems all the time. Isn't this what simulated annealing and similar techniques are designed to correct? I've seen this a lot in maximum-likelihood Markov chain discovery problems.
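For concreteness, a minimal sketch of simulated annealing (my own illustrative parameters and cooling schedule, not taken from any particular library): worse moves are sometimes accepted with probability exp(-Δf/t), which is what lets the search climb out of a local minimum, and the temperature t decays so the search settles down over time.

```python
import math
import random

def simulated_annealing(f, x0, steps=10000, t0=1.0, cooling=0.999):
    """Minimize f starting from x0, occasionally accepting uphill moves."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(steps):
        x_new = x + random.gauss(0, 0.5)          # propose a random neighbor
        f_new = f(x_new)
        # Always accept improvements; accept worse moves with
        # probability exp(-(f_new - fx) / t), which shrinks as t cools.
        if f_new < fx or random.random() < math.exp(-(f_new - fx) / t):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                              # cool down
    return best_x, best_f

# Example: a function with a shallow local minimum that greedy descent
# from the wrong side would get stuck in.
bumpy = lambda x: x**4 - 3 * x**2 + x
```

The same escape-a-bad-basin idea applies to maximum-likelihood Markov chain discovery: accepting an occasionally worse candidate chain keeps the search from freezing in the first plausible structure it finds.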