We think that words mean things. We see the dictionary as something like API documentation: You go to look up the term and, ah, there's the implementation.
This is not how language works. This is not how the experts – linguists and copywriters – think about language.
I keep thinking about this conversation between Ramit Sethi and Patrick McKenzie. The best bit, the bit that my mind keeps returning to, is this quote from Patrick:
I think we don’t pay nearly enough attention to the exact words people use. Maybe we would if we came from a communications background. Nothing motivates people like having their own words repeated right back to them, which is something that you should try to do more often. It’s just an easy conversational hack to sound more persuasive.
I read that the first time and I thought, oh, that's pretty smart.
I came back a few weeks later and read it again and thought, oh my god, this is what's wrong with my life.
Because Patrick goes on to say this:
Engineers take heed: your clients will often make mistakes about engineering reality when talking to you. You will feel the urge to correct their misconceptions about engineering reality, perhaps by rewording their requests such that they match the way the world actually works. Don’t do this. You will never delight a client by teaching them what a web service actually is.
I like to think I'm pretty good about this with non-engineers, relative to the baseline, but I am absolutely miserable about this with other engineers.
Once upon a time I really cared about the distinction between "testing" and "checking." I would find people talking about "automated tests," and tell them, "ah, but you see, automated testing is impossible, what you mean is 'automated checks.'" I would correct my colleagues, or interject a little disclaimer into our conversations about testing that, of course, when we talked about the tests we were writing we properly ought to call them "checks."
This was a waste of time.
Then, for a while, I corrected people about the precise technical definition of "observability." I would impress upon anyone who would listen that observability is not "logs, metrics, and events and/or traces." I would interrupt conversations my team was trying to have in order to explain this distinction, or snark on other teams' marketing in Slack.
Again, waste of time.
There are differences between "a human evaluating software as they interact with it" and "a computer program exercising another program, for the purpose of confirming a set of discrete facts about that second program." It's useful to know about these distinctions, to be able to make them when it's appropriate.
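That second thing, "a computer program exercising another program, for the purpose of confirming a set of discrete facts," can be sketched in a few lines. The function under test here (`slugify`) is a made-up example, not anything from the surrounding discussion:

```python
def slugify(title: str) -> str:
    """The program being exercised: turn a title into a URL slug."""
    return "-".join(title.lower().split())

def test_slugify():
    # Each assertion confirms one discrete fact about slugify.
    assert slugify("Hello World") == "hello-world"
    assert slugify("One") == "one"
    assert slugify("") == ""

test_slugify()
```

No human judgment happens when this runs; it only confirms the facts it was told to confirm, which is exactly the distinction the "checking" camp wants to draw.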
But, in practice, people often use that one word, "test," to mean both things. If I want people to understand what I mean when I'm talking about "a computer program exercising another program, for the purpose of confirming a set of discrete facts about that second program," I should probably use the word "test."
I should especially use the word "test" if I'm trying to convince people to change their behavior. They'll like me better, they'll listen to me more carefully, because they'll believe that I understand them and care about their point of view.
Likewise, it's a real shame that people use "observability" to mean "telemetry" when the word "telemetry" is right there. It's especially a shame that "observability" has evolved to mean something like, "all the garbage the system produces that doesn't help us understand it, but at one URL, and maybe a little linear regression run against it on the side."
But, so what? Does it help me to use words that confuse people? Does it help me to spend time telling people what words they should be using, time I could be spending listening to them instead? Does it make them more likely to use the tools I want them to use, in the way that I want them to use them?
It does not.
I will never delight a coworker by teaching them what "observability" actually is.