I'm a senior software engineer with 20 years' experience who's concerned about AI safety. I bring practical engineering judgment, a security mindset, and an understanding of how real systems fail. I'm doing an MSc to formalize my ML knowledge and enter the field.
Hello, I've joined LessWrong today to make connections with good people in the field of AI Safety. My background is as a senior software engineer, but after reading Will MacAskill's What We Owe the Future, I concluded that we are probably not heading in the right direction, that there are many risks for which we don't yet have satisfactory solutions, and that I want to help as much as I can.
I am currently a full-time master's student studying Artificial Intelligence, and I recently completed the BlueDot Impact Governance course.
I'm looking forward to having many productive discussions with you.
The asteroid metaphor is usually stated with a specific number of years until impact. If we stretch the metaphor a little into a more abstract thought experiment: suppose we know the asteroid is definitely headed for Earth, but we don't know its velocity. We don't know when it will collide or how much damage it will cause, but we have made many predictions about the timing and severity of impact, with the most pessimistic estimates putting it two to three years away. What should we do in this situation?
We should not spend too much time arguing over whose estimates are better. Rather, we should accept that there are many unknowns, that we don't currently have enough data to make precise predictions with high confidence, and focus instead on what needs to be done to avoid getting hit.