So this is a concept I noticed while reviewing a preventative maintenance manual for the product I support at work. For a little context, this manual goes out to 500+ power plants around the world, where operations and engineering teams use it to ensure their installed systems function properly and to minimize downtime. This is what I would call a “good macro” skill: being able to think about complex systems and create a document that applies to all of them despite their variations. Now for the irony in all of this, the “bad micro” skill: I don’t have a maintenance schedule for my car....
I was thinking about non-obvious incorrect uses for LLMs, where the output is usable but not aligned to the target audience. For example, using an LLM to design a return-to-office distribution schedule seems like a good idea because it takes advantage of an LLM’s ability to solve “math” problems. In reality, though, this approach masks the actually difficult part of the task: human preferences. It won’t accurately factor in niche cultural imprints (lunch time and duration, bathroom usage, tardiness), nor will it detect chaotic phenomena (traffic fluctuations, weather, sports game results).