In programming, some variables have local scope: they have meaning in the context of a particular function you're writing, or a loop, and so on; outside of that context, they may have no meaning, or they may have an entirely different local meaning somewhere else. Others have global scope: they mean the same thing throughout a program.
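To make the analogy concrete, here is a minimal Python sketch (the variable and function names are my own, purely illustrative):

```python
x = "global"  # global scope: this x means the same thing throughout the program


def report():
    x = "local"  # local scope: a new x that shadows the global one, inside this function only
    return x


inside = report()  # "local" -- within the function, x refers to the local binding
outside = x        # "global" -- outside it, x still refers to the original binding
```

The same name, `x`, means one thing inside `report` and another thing everywhere else; neither usage is a mistake, and neither interferes with the other.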
The first thing I want to get across is that we can make a similar distinction with words.
The second thing I want to get across is that the distinction is fuzzy. Everything is really "local" scope, but for different values of "local".
The third thing I want to get across is that this is an instance of a more general pattern. The temptation to insist that all words are globally defined comes from a way of thinking about disagreements which gives rise to many similar mistakes.
First things first.
"Global scope" obviously refers to dictionary definitions and to common usage. The most obvious cases for "local scope" of words are things like "the school" (which refers to whichever school is most relevant to the people talking), "the office", etc. However, I think a lot more words are subtly re-defined based on what is most useful for the context.
In Distinctions of the Moment, I argued that I want to reserve certain words for on-the-fly redefinition. For example, "thought" and "feeling" are vague words which are often used to point at a dichotomy. However, the space of possible dichotomies here is large. In one conversation, I might want to use "thought" for "linguistic mental representation" (inner monologue in the phonological loop), and "feeling" for "inarticulate mental representation" (something which you can't put into words). In another conversation, "thought" might refer to "mental content" (including all of working memory, not only the phonological loop), and "feeling" to sensations in the body including touch and kinesthetic sense (proprioception). And so on.
If someone steps into the conversation and objects "that isn't what thought vs feeling means to me!" I'll listen to their proposed definition, but I'll evaluate its usefulness in terms of the conversation I'm having. Often I'll end up saying something like "you might be right about common usage, but we're talking about _____, so I don't see how your distinction is relevant here".
This practice risks equivocation, so it is important to make sure everyone is on the same page to the extent that sharp definitions are necessary in the conversation. However, as has been discussed in the sequence on words, the meaning of words is not given by explicit definitions; words are similarity clusters. So, for locally defined words, we refine the clusters enough for them to be useful in the context of our conversation.
This phenomenon extends beyond pairs of words like "thought" and "feeling" which represent (vague) dichotomies. We might just as easily need to make a local definition of a single word. Legal documents do this all the time for clarity. It is often a good idea to do this for philosophical arguments, to get clear on what is being discussed; and, to some extent, any intellectual work. It usually does not make sense to use the dictionary definitions in detailed arguments; not because the dictionary definitions are insufficiently precise in general (though they may be), but because they lack precision along the dimensions which are important for what is being locally discussed. These definitions can gain wider use if the concepts prove useful, so that the line between local and global scope becomes blurred.
Words mean different things to different people at different places and times. I imagine that someone who disagrees with the view on words I'm arguing for (perhaps Said Achmiz) would object that I'm pointing out special cases at best, and that words should mostly follow common usage. I would argue that the more elegant view is the one which sees the "local-ness" of definitions extending all the way up to the language level -- the dividing lines between language family, language, local dialect, and slang/jargon are not so clean.
Finally, I think my real crux here has to do with much deeper issues of rationality. There's a particular kind of mistake where you think the business of science/intellectualism/philosophy/rationality is to sort out what is objectively true based on rigorous arguments which everyone has to agree with. I call this "arguing with an imaginary objective judge rather than the person in front of you". There is no argument so compelling it could convince a rock. Instead, I would suggest that intellectual progress looks more like finding cruxes and resolving their status. One way the mistake I'm pointing at can manifest is insistence on objective, static definitions. Another example would be to argue from scientific consensus when the listener does not take scientific consensus as definitive (or, similarly, to argue from the Bible or any other authority when the listener is not already a person who is going to be moved by that authority).