Model to track: You get 80% of the current max value LLMs could provide you from standard-issue chat models and any decent out-of-the-box coding agent, both prompted the obvious way. Trying to get the remaining 20% that is locked behind figuring out agent swarms, optimizing your prompts, setting up ad-hoc continuous-memory setups, doing comparative analyses of different frontier models' performance on your tasks, inventing new galaxy-brained workflows, writing custom software, et cetera, would not be worth it: it would take too long for too little payoff.
There is an "LLMs for productivity!" memeplex that is trying to turn people into its hosts by fostering FOMO in those who are not investing tons of their time into tinkering with LLMs. You should ignore it. At best it would waste your time; at worst it would corrupt your priorities, convincing you that you should reorient your life around "optimizing your Claude Code setup" or writing productivity apps for yourself. LW regulars may be especially vulnerable to it: we know that AI is going to become absurdly powerful sooner or later, so it takes relatively little to sell to us the idea that it already is absurdly powerful – which may or may not be currently being exploited by analogues of crypto grifters.
(Not to say you mustn't be tinkering with LLMs and vibe-coding custom software, especially if you're having fun! But you should perhaps approach it in the spirit of a hobby, rather than as the thing you should be doing.)
Well, at least, that's my takeaway from watching the current ideatic ecosystem around LLMs and trying that stuff for myself (one, two, three). I do have tons of ideas about custom software that could perhaps 1.1x my productivity... but that software is too complex for the LLMs of today to vibe-code in a truly hands-off manner, and is not worth the time otherwise. Maybe in six more months.
Obviously "reverse any advice you hear" and "Thane has terminal skill issues and this post is sour grapes" may or may not