I also wish there were no industry that would serve as an example for that employment model...
nah 🙈, the stupid companies will self-select out of the job market for not-burned-out good programmers, and the good companies will do something like "product engineering", where product managers and designers make their own PoCs to validate with stakeholders before/without endless specifications handed over to engineers in the first iteration. Then the programming roles can focus on building production-quality solutions, and maybe a QA renaissance will happen to write useful regression tests, once domain experts can use coding assistants to automate the boring stuff and focus on domain expertise/making decisions - instead of programmers trying to guess the intent behind a written specification twice (once for the code and once for the test.. or just once when the same person/LLM writes both, which is a recipe for useless tests IMHO)
(..not making a prediction here, more like a wish TBH)
spotted in an unrelated discord, looks like I'm not the only person who noticed the similarity 😅
Nice, I hope it lasts longer for you than my 2.5 years out of the corporate environment ... and now I'm observing the worse parts of the AI hype in startups too, due to investor pressure (as if "everyone" were adding "stupid chatbot wrappers" to whatever products they try to make .. I hope I'm exaggerating and will find some company that's not doing the "stupid" part, but I think I've lost hope of not seeing AI all around me .. and not literally every idea with an LLM in the middle is entirely useless).
In case this feedback might be useful - I was unable to read this essay because I don't remember the following concepts being introduced anywhere in the previous ~5 essays: "safe inputs" and "rogue behaviour".
Especially the word "input" is used in a way that is completely alien to me as a programmer:
Here “inputs” includes all of an AI’s environment/affordances/history, rather than just e.g. the text it is receiving.
(I will wait for a recording of a talk in front of a live audience for this one...)
Yup, learning on the job about microsatellite instabilities, that E2/E6/E7 are gene names and not color dye names, and being able to dive deeper was fun. The politics in a big pharma IT division, less so - I didn't feel my daily activities added up sufficiently to the big picture :( I talked with former colleagues recently and a new project sounded interesting, but they're in another round of a stupid hiring freeze at the moment, so I'm not re-joining them at this time.
I'll probably end up choosing a job that's more interesting than easy and settle for a couple of small-meaning items in my life instead of a futile search for the one big-meaning thing.
This meta-answer is actually sufficient for my meta-question, so not looking for additional answers at this time (unless you think you have any insight that hasn't been pointed to yet).
But if useful to expand on it:
(since you don't relate to the metaphor of slipping fingers, that suggests you don't see any obvious mistakes in your previous approach, that you still endorse it on reflection without major "bug fixes")
Not asking that question, so ignoring the first part.
Going all the way anti-zen is an option too, and I'm glad that approach worked for you. For me, having a goal / meaning / "fundamental want" are all in the same bag of things - I'm investigating how other people found their bags, not picking separate items one at a time 👉👈.
Sounds to me like we always have to calculate a social path integral to a level of approximation appropriate to the situation, even in ask culture... If a friend is lactose intolerant and they know I know that, then even in ask culture it would be weird for me to ask if they want some non-vegan ice cream (and they might assume that if I asked, I was either joking or offering vegan ice cream, not being actively stupid) - so I don't see the option of zero echoes tbh, just the option to agree that a coarse approximation of social consequences is totally fine in most situations and as a default, and that it's better to err on the side of oversimplification than to overthink it and not interact at all.
And some questions, like "May I cut your wrists?", seem almost never appropriate - perhaps as a joke between the right kind of people, or as meta-level sarcasm when judging how much someone is genuinely into the whole ask-culture thing.. the number of echoes can be a fraction sometimes..
So I would imagine that failing to see public comments as needing more social consideration than DMs (given the more diverse audience) is a mistake even in ask culture - one worth pointing out to people when the comment could have been formulated with a better escape hatch..