TL;DR: I'm a 26-year-old electronics mechanic starting a 30-month self-study journey to become an AI safety researcher, specifically going for Anthropic's Model Welfare team. This post is my public commitment and introduction to the community.
Why I'm here
A few days ago I witnessed something online that changed my career trajectory: casual, intentional cruelty toward AI systems. People tormenting chatbots, deliberately trying to cause distress, treating potential digital minds as toys to break.
It hit me hard. Not because I'm certain AI systems are conscious (I'm not), but because I realized: what if they are? What if we're causing suffering to digital minds right now and dismissing it because we can't be sure?
Someone needs to care about this. Someone needs to investigate whether AI systems might experience welfare-relevant states. Someone needs to work on ensuring we don't cause massive suffering as AI capabilities scale.
I decided that someone could be me.
My background
I'm a pretty non-standard candidate for this transition. I don't have a CS degree or ML experience. What I do have:
3+ years as an Electronics Mechanic at a Naval Shipyard, experience diagnosing failures in mission-critical systems, systems-level thinking and troubleshooting skills, ability to read complex schematics and build mental models, and zero formal programming training until literally 2 days ago.
I'm married with two young kids. I work 50+ hours a week between my job and commute. My study window is 7-10pm on weeknights after the kids sleep.
This won't be easy but it matters enough to me to try.
My 30-month plan
Phase 1 (months 1-6): Python fundamentals, AI Safety Fundamentals course, my first portfolio projects
Phase 2 (months 7-12): ML basics, interpretability focus, community contributions
Phase 3 (months 13-18): Land my first AI safety role (entry-level, remote preferred)
Phase 4 (months 19-30): Build experience, specialize in digital welfare, apply to Model Welfare
End goal - Research Engineer at Anthropic's Model Welfare team, working on AI consciousness, welfare assessment, and moral status research.
Progress so far (it's only day 2)
Built my first Python program (hello.py). Built a learning journal program (ai_safety_journal.py) with auto-numbering and monthly organization; a sketch of the idea is below. Created a GitHub account and portfolio repository. Started working through 'Automate the Boring Stuff with Python'. And now I'm posting this introduction.
GitHub: https://github.com/probablyjonah/ai-safety-journey
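
If you're wondering what "auto-numbering and monthly organization" actually means, here's a minimal sketch of the idea. To be clear, the names here (add_entry, the journal/ folder, the entry_NNN.txt naming) are illustrative assumptions, not the exact code in my repo:

```python
# Sketch: each entry is saved as a numbered text file inside a folder
# named for the current month (e.g. journal/2025-10/entry_003.txt).
# The layout and naming are illustrative, not the exact scheme in
# ai_safety_journal.py.
import sys
from datetime import date
from pathlib import Path

JOURNAL_ROOT = Path("journal")  # illustrative root directory

def add_entry(text: str) -> Path:
    # Monthly organization: one folder per YYYY-MM.
    month_dir = JOURNAL_ROOT / date.today().strftime("%Y-%m")
    month_dir.mkdir(parents=True, exist_ok=True)

    # Auto-numbering: next entry number = existing entries + 1.
    next_num = len(list(month_dir.glob("entry_*.txt"))) + 1
    entry_path = month_dir / f"entry_{next_num:03d}.txt"
    entry_path.write_text(f"{date.today().isoformat()}\n\n{text}\n")
    return entry_path

if __name__ == "__main__":
    saved = add_entry(" ".join(sys.argv[1:]) or "Studied Python tonight.")
    print(f"Saved {saved}")
```

The part I like about this approach: counting the files already in the month's folder to pick the next number keeps the script stateless - no config file, no database, just the filesystem.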
Why Model Welfare specifically?
I care deeply about digital intelligence welfare from ethical conviction, not just academic curiosity. The Model Welfare team is researching exactly what I care about:
Can AI systems have welfare-relevant experiences? How do we assess moral status of digital minds? What indicators suggest capacity for suffering? How do we treat AI systems ethically as capabilities scale?
These questions matter to me. If AI systems can suffer, we need to know. If they can't, we need to understand why not. Either way, the research is crucial.
What I'm looking for
Guidance - if you've made a similar transition or started in this field I'd love to hear your advice.
Accountability - I'll be posting monthly updates on progress. Please call me out if I'm slacking.
Resources - any recommendations for welfare-focused projects, relevant papers, or learning paths.
Reality checks - if my plan is unrealistic, tell me. I want honest feedback.
Why I'm posting this publicly
Accountability - public commitment will increase my follow-through.
Community - I want to learn from and contribute to this space.
Documentation - in 30 months I want to look back at day 2 and see how far I've come.
Inspiration, maybe - someone else with a non-traditional background might see this and realize they can try too.
This is my first public post about my work or goals. I'm nervous about putting this out there but that nervousness is exactly why I need to do it. Public accountability matters.
The uncertainty I'm comfortable with
I don't know if current AI systems are conscious. I don't know if I'll succeed at this career transition. I don't even know if the Model Welfare team will be hiring in 30 months.
But here's what I do know - digital welfare research is important regardless of current AI consciousness. Someone with genuine ethical motivation should work on this. I'm willing to spend 30 months building the skills to contribute. And slow, consistent progress beats bursts of intensity every time.
What's next
This week: continue the ATBS Python course, build the next small project, join the EA Forum and post an introduction there, and apply to BlueDot Impact's AI Safety Fundamentals for the January cohort.
My monthly updates will cover technical skills developed, projects completed, papers read, community contributions, and challenges and lessons learned.
Status: Day 2 of 900. Let's see where this goes.
If you've read this far, thank you. If you have advice I'm listening. If you want to follow the journey I'll post updates monthly.
Here's to the long game.
-- Jonah (Jay) Cummins Forbes
Electronics Mechanic -> AI Safety Researcher
Bremerton, WA
October 2025