LESSWRONG

Martin Leitgab

https://www.linkedin.com/in/martinleitgab/
After several years of dabbling in related reading, I started my full-time AI safety journey in early 2025. I bring 12+ years of technical and research leadership experience from academic nuclear physics, aerospace/government contracting, and the corporate medical device industry. I am here to contribute to the overall goal of reducing AI catastrophic and existential risk.

Comments
Technical Acceleration Methods for AI Safety: Summary from October 2025 Symposium
Martin Leitgab · 2d

Thank you for your comment. Yes, the speakers are working on posts of their own, and they are encouraged to link to this post for reference and connection.

Do Not Tile the Lightcone with Your Confused Ontology
Martin Leitgab · 4mo

Great post, thank you for sharing. I find this perspective helpful when approaching digital sentience questions, and it seems consistent with what others have written (e.g. see research from Eleos AI/NYU, Eleos' notes on their pre-release Claude 4 evaluations, and a related post by Eleos' Robert Long). 

I find myself naturally prone to over-attribute moral status rather than under-attribute it, but I appreciate the point that both directions carry risks. Treating LLMs for now as 'linguistic phenomena' while taking low-effort, precautionary measures for AI welfare seems valuable while we build the understanding needed for higher-stakes decisions about moral patienthood or legal personhood.

All AGI Safety questions welcome (especially basic ones) [July 2023]
Martin Leitgab · 4mo

Hi all, I am a new community member and it is a pleasure to be here. I am working on a draft post discussing possible opportunities for using safety R&D automation to find or create effective AI superintelligence safety interventions (which may not yet exist or be prioritized), in particular for short-timeline scenarios (e.g. ASI within 2-6 years from now, after a possible intelligence explosion enabled by AGI creation within 1-3 years from now).

I would be grateful for any pointers to existing discussions of this specific topic on LessWrong that I may not have found yet, so I can visit, learn from, and reference them. I do know about the 'superintelligence' tag and will go through those posts; I just wanted to see if anything springs to mind for experienced users. Thank you!

Posts

Technical Acceleration Methods for AI Safety: Summary from October 2025 Symposium · 7d
Accelerating AI Safety Progress via Technical Methods - Calling Researchers, Founders, and Funders · 25d