Comments

His view on AI alignment risk is infuriatingly simplistic. To flatly declare certain doomsday scenarios objectively "false" is a level of epistemic arrogance that borders on obscene.

I feel like he could at least acknowledge that such scenarios are possible and express a need to invest in avoiding them, instead of simply negating the entire argument.

I will review these. Thank you for your input! 

Really fantastic conflict mitigation advice. Many people do not stop to think about why they are communicating, and without that sense of direction it is easy to fall into more emotional patterns of communication. The desire to appeal to others takes over, and constructive discourse is no longer possible. Stopping Out Loud does a great job of recognizing these patterns and putting them into words in a way that halts what otherwise often becomes a runaway train.

One point I would add is that when it comes to online trolling or criticism, more often than not the other person is not interested in constructive dialog in the first place and is typically reacting to their own ideas being challenged. That kind of intent usually leads people to respond negatively to SOL: they will try to re-frame your attempt to de-escalate as you "running away" or refusing to voice your opinions in front of others because you realize how "weak" your arguments are. While I am not sure there is a way to avoid this, it is helpful to be prepared for this type of response when you do go into SOL.