I read your title and thought "exactly!" I then read your post, and it was pretty much exactly what I expected after reading the title. So, ironically, it seems like you perfectly compressed the state of your mind into a few words. :) But to be fair, that's probably mostly because we've had very similar experiences, and it doesn't translate to human<->LLM communication.
When vibe-coding, many things work really fast, but I often end up in cases where the change I want is very nuanced, and I can see that just blurting it out would cause the LLM to do something different from what I have in mind. So I sometimes have to write five paragraphs to describe one relatively small change. Then the LLM comes up with a plan, which I have to read, which again takes time, and sometimes there are one or two more details to clear up, so it's a whole process - and all of this would happen naturally, without me even noticing, if I were writing the code myself.
A year ago I wrote a post in a somewhat similar direction, but the recent months of vibe coding with Opus 4.5 really gave me a new appreciation for all the different bottlenecks that remain. Once "writing code" is automated - which is basically the case now - programmers aren't instantly replaced (evidently); we just move on to the next bottleneck below. So the average programmer will maybe be sped up by some percentage, with only extreme outliers getting a multiple-fold increase in output, while for everyone else the work merely shifts to a different focus. It's still kind of mindblowing to me that this is how it is. Perhaps it gets "solved" once the entire stack, from CEO to PM to testers to programmers, consists of AIs - but then they would also have to communicate with each other (and sometimes with themselves, until continual learning is solved) via not-flawlessly-efficient means, and would still run into these coordination-overhead issues. I guess all that overhead is less notable, though, when the systems themselves run at 100x our speed and work 24 hours a day.
Even with a car, there are cases where traffic and/or finding a parking spot can cause huge variance. It really depends on the type of meeting / circumstances of the other people whether it's worth completely minimizing the risk of being late at the expense of potentially wasting a lot of your own time.
E.g., when I visit somebody at their home, then it will likely be bearable for them to welcome me 10 minutes later. Whereas if we meet at some public space, it may be very annoying for the person to stand around on their own (particularly if the person has social anxiety and gets serious disutility from the experience).
That all being said, probably the majority of minutes that people are late to things are self-inflicted, and I agree with OP that it makes sense in general to reduce that part (and, more generally, to strive to be a reliable person).
I can relate to a lot of this. But I think in my case the motivation for reinventing the wheel also comes down to fundamentally not enjoying activities like "reading documentation" or, more generally, "understanding what another person has done". But implementing my own library is usually fun. And I can often justify it to myself (and sometimes others) because the result will then match the given use case perfectly and will be exactly as big/complex as needed, rather than being some huge, highly general solution full of bells and whistles we won't even need. Which can be a real advantage - but it's also just one side of a trade-off, and I tend to weigh that side more heavily than other people do, for probably rather self-serving reasons.
I once heard from a developer friend that he sometimes just reads things like the Docker documentation for fun in his spare time. It gave me great appreciation for how different people can be and how difficult it really is to overcome the typical mind fallacy... :) I never would have thought people can enjoy that. And now I'm interested in somehow finding that same enjoyment in myself, because I think it would make many things much easier if I could overcome that aversion that keeps pushing me in the direction of reinventing all the wheels.
I'm not sure what you're hinting at, but in 99.9% of cases when I'm out of the house, I do carry a smartphone around. If you mean that it's annoying when the display gets confused by water, then I agree that's a real disadvantage (but I doubt people's attitude towards being exposed to rain changed that much between 2006 and today, so there certainly is some severe general dislike of rain independent of smartphones). If this is not what you mean, then please elaborate. :)
Agreed, that's one of the exceptions I was thinking of - if you're getting soaked and have no way to get into dry clothes anytime soon, there's little way around finding that rather unpleasant. But I'd say 95% of my rain encounters are way less severe than that, and in these cases, my (previous) attitude towards the rain really was the main issue about the whole situation.
People compare things that are close together in some way. You compare yourself to your neighbors or family, or to your colleagues at work, or to people that do similar work as you do in other companies.
Isn't one pervasive problem today that many people compare themselves to those they see on social media, often including influencers with a very different lifestyle? So it seems to me that non-local comparisons are in fact made quite often; it primarily depends on what you're exposed to - which to some degree is indeed the people around you, but nowadays more and more also the skewed images broadcast to the world by people on the internet who often don't even know you exist.
But maybe this is also partially your point. Maybe it would theoretically help to expose people a lot to "the reality of the 90s" or something, but I guess it's a bit of an anti-meme and hence hard to do.
I agree that telling people how well off they are on certain scales is probably not super effective, but I'm still sometimes glad these perspectives exist and I can take them into consideration during tough times.
Relatedly, at some point as a teenager I realized that being exposed to rain is actually usually not that terrible, and I had just kind of been accidentally conditioned to dislike it because it's a normal thing to dislike and I never met anyone who appeared to enjoy the experience. But turns out, once you stop actively maintaining that resistance and welcome the rain, it can be pretty nice to walk around in rain while everyone around you tries to escape it. (Some exceptions apply, of course)
Yeah, fair enough. My impression has been that some people feel guilty about caring about themselves more than about others, or that it's seen as not very virtuous. But maybe such views are less common (or less pronounced) than the vibes I've often picked up imply. :)
Egoism has a bad reputation, but I think that doesn't do it justice. Some degree of egoism is likely very helpful, as it's a form of ensuring available local knowledge is taken into account. If people were not at least mildly egoistic, a great deal of local knowledge would be ignored, leading to everyone supposedly helping others in not-actually-helpful ways.
What I think is much more harmful overall is the distinction between actors who give public goods[1] at least some weight in their decision-making and those who give them none at all. This is something I've seen in particular in some B2C companies I've been involved with: when things are going well for them, they're proud of the value they produce for the public (as in, typically, their paying and non-paying users). But when the market gets tough and their growth/existence is at risk, they often very quickly stop caring about public goods entirely and start making very one-sided trade-offs that are (supposedly) beneficial for them while often being incredibly annoying to many users, even when other, similarly-beneficial-to-them solutions might exist that don't come at the expense of users. Two examples:
So, the problem in such cases is not so much that the company cares about its own growth; the problem is when it completely disregards the (potential for positive) externalities[1], rather than mostly disregarding them while at least representing them in its model with a non-zero weight.
There are surely different reasons for why such absolute disregard for public goods can occur. Some speculation:
I don't pretend to have any solution for this. But my impression is that some of the decision-making, at least in the companies I've seen, tends to be highly path-dependent, and a good argument or suggestion made to the right people at the right point in time can make a huge difference. So I guess, even if this approach doesn't scale all that well, having well-meaning individuals within companies occasionally speak up and make productive proposals could move some needles.
I can imagine that I'm not using "public goods" and "externalities" in precisely the ways they're usually used. I hope the post makes some sense anyway. If you know of any simple ways to phrase things more precisely, please let me know.
I suspect this is why even many people who care about animals and dislike factory farming prefer not to think about the topic at all, rather than making decisions case by case and trading off their comfort against how much harm is caused. E.g., when you eat at a restaurant with a lot of vegetarian options, it would (for most people) be very easy to eat something without meat, whereas when friends invite you over and cook something with meat, it would be much more costly/unpleasant to refuse to eat it. Still, I know only a few people who are "vegetarian when it's easy", yet I know many people who dislike factory farming but give it practically zero weight in their decisions nonetheless.
It seems to me that narratives are skewed and highly simplified abstractions of (empirical) reality that are then subject to selection pressure, such that the most viral ones (within any subculture) dominate - and virality is often negatively correlated with accuracy. Yet when hearing narratives from people we like and trust, we humans seem to have a deeply ingrained urge to quickly believe them. This becomes most apparent when you hear the narratives other subcultures spread that affect you or your beliefs negatively. Hearing the narratives of AI skeptics and ethicists (say, about AI water usage, about AI not being "actually intelligent", or about all AI doomers secretly trying to inflate stock prices) really drove home a Gell-Mann-Amnesia-style realization for me: narratives tend to be deeply flawed, and this is very likely true for the narratives I'm affected by as well (often without me even realizing they are narratives!).
Narratives are usually a combination of an overly simplistic conclusion about some part of the world and radically filtered evidence. (And I guess this claim is itself a bit of a narrative about narratives.)
I agree with you, though, that narratives may be required to actually get things done in the world, and that pure empiricism will be insufficient.