How is it possible to tell the truth?
I mean, sure, you can use your larynx to make sound waves in the air, or you can draw a sequence of symbols on paper, but sound waves and paper-markings can't be true, any more than a leaf or a rock can be "true". Why do you think you can tell the truth?
This is a pretty easy question. Words don't have intrinsic ontologically-basic meanings, but intelligent systems can learn associations between a symbol and things in the world. If I say "dog" and point to a dog a bunch of times, a child who didn't already know what the word "dog" meant would soon get the idea and learn that the sound "dog" meant this-and-such kind of furry four-legged animal.
As a formal model of how this AI trick works, we can study sender–receiver games. Two agents, a "sender" and a "receiver", play a simple game: the sender observes one of several possible states of the world, and sends one of several possible signals—something that the sender can vary (like sound waves or paper-markings) in a way that the receiver can detect. The receiver observes the signal, and makes a prediction about the state of the world. If the agents both get rewarded when the receiver's prediction matches the sender's observation, a convention evolves that assigns common-usage meanings to the previously and otherwise arbitrary signals. True information is communicated; the signals become a shared map that reflects the territory.
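To make this concrete, here is a minimal sketch of such a convention evolving (hypothetical code, not from any of the cited works): a two-state, two-signal game where both players are reinforced, Roth-Erev-style, whenever the receiver's prediction matches the state.

```python
import random

random.seed(0)
STATES, SIGNALS = range(2), range(2)

# Roth-Erev-style urns: accumulated reward for each choice in each situation.
sender = {s: [1.0, 1.0] for s in STATES}     # state -> weight per signal
receiver = {m: [1.0, 1.0] for m in SIGNALS}  # signal -> weight per prediction

def draw(weights):
    # Sample an index with probability proportional to its weight.
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def play():
    state = random.choice(STATES)   # Nature picks a state
    signal = draw(sender[state])    # sender picks a signal
    guess = draw(receiver[signal])  # receiver predicts the state
    if guess == state:              # common interest: reward both on success
        sender[state][signal] += 1.0
        receiver[signal][guess] += 1.0
    return guess == state

for _ in range(20000):
    play()

accuracy = sum(play() for _ in range(1000)) / 1000
print(f"accuracy after learning: {accuracy:.2f}")
```

Which signal comes to mean which state is arbitrary (it depends on the random seed), but some stable mapping reliably emerges: the previously meaningless signals end up carrying the state information.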
This works because the sender and receiver have a common interest in getting the same, correct answer—in coordinating for the signals to mean something. If the sender were instead rewarded when the receiver made bad predictions, then whenever the receiver could exploit some correlation between the state of the world and the sender's signals to make better predictions, the sender would have an incentive to change its signaling choices to destroy that correlation. No convention evolves; no information gets transferred. This case is not a matter of a map failing to reflect the territory. Rather, there just is no map.
How is it possible to lie?
This is ... a surprisingly less-easy question. The problem is that, in the formal framework of the sender–receiver game, the meaning of a signal is simply how it makes a receiver update its probabilities, which is determined by the conditions under which the signal is sent. If I say "dog" and four-fifths of the time I point to a dog, but one-fifth of the time I point to a tree, what should a child conclude? Does "dog" mean dog-with-probability-0.8-and-tree-with-probability-0.2, or does "dog" mean dog, and I'm just lying one time out of five? (Or does "dog" mean tree, and I'm lying four times out of five?!) Our sender–receiver game model would seem to favor the first interpretation.
Signals convey information. What could make a signal, information, deceptive?
Traditionally, deception has been regarded as intentionally causing someone to have a false belief. As Bayesians and reductionists, however, we endeavor to pry open anthropomorphic black boxes like "intent" and "belief." As a first attempt at making sense of deceptive signaling, let's generalize "causing someone to have a false belief" to "causing the receiver to update its probability distribution to be less accurate (operationalized as the logarithm of the probability it assigns to the true state)", and generalize "intentionally" to "benefiting the sender (operationalized by the rewards in the sender–receiver game)".
One might ask: why require the sender to benefit in order for a signal to count as deceptive? Why isn't "made the receiver update in the wrong direction" enough?
The answer is that we're seeking an account of communication that systematically makes receivers update in the wrong direction—signals that we can think of as having been optimized for making the receiver make wrong predictions, rather than accidentally happening to mislead on this particular occasion. The "rewards" in this model should be interpreted mechanistically, not necessarily mentalistically: it's just that things that get "rewarded" more, happen more often. That's all—and that's enough to shape the evolution of how the system processes information. There need not be any conscious mind that "feels happy" about getting rewarded (although that would do the trick).
Let's test out our proposed definition of deception on a concrete example. Consider a firefly of the fictional species P. rey exploring a new area in the forest. Suppose there are three possibilities for what this area could contain. With probability 1/3, the area contains another P. rey firefly of the opposite sex, available for mating. With probability 1/6, the area contains a firefly of a different species, P. redator, which eats P. rey fireflies. With probability 1/2, the area contains nothing of interest.
A potential mate in the area can flash the P. rey mating signal to let the approaching P. rey know it's there. Fireflies evolved their eponymous ability to emit light specifically for this kind of sexual communication—potential mates have a common interest in making their presence known to each other. Upon receiving the mating signal, the approaching P. rey can eliminate the predator-here and nothing-here states, and update its what's-in-this-area probability distribution from { mate, predator, nothing} to { mate}. True information is communicated.
Until "one day" (in evolutionary time), a mutant P. redator emits flashes that imitate the P. rey mating signal, thereby luring an approaching P. rey, who becomes an easy meal for the P. redator. This meets our criteria for deceptive signaling: the P. rey receiver updates in the wrong direction (revising its probability of a P. redator being present downwards from 1/6 to 0, even though a P. redator is in fact present), and the P. redator sender benefits (becoming more likely to survive and reproduce, thereby spreading the mutant alleles that predisposed it to emit P. rey-mating-signal-like flashes, thereby ensuring that this scenario will systematically recur in future generations, even if the first time was an accident because fireflies aren't that smart).
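The two criteria are easy to operationalize. Here's a minimal sketch (the `deceptive` helper is my own illustration, not a standard API; the numbers are from the story above):

```python
def deceptive(prior, posterior, true_state, sender_gain):
    # (1) The receiver's probability of the true state drops, so its log
    #     score gets worse (log is monotone, so we can compare directly).
    misleads = posterior[true_state] < prior[true_state]
    # (2) Sending the signal pays the sender more than not sending it.
    benefits = sender_gain > 0
    return misleads and benefits

# A mutant P. redator flashes the mating signal; the naive P. rey updates
# all the way to mate-for-sure, even though the true state is predator-here,
# and the P. redator gets a meal out of it.
prior = {"mate": 1/3, "predator": 1/6, "nothing": 1/2}
naive_posterior = {"mate": 1.0, "predator": 0.0, "nothing": 0.0}
print(deceptive(prior, naive_posterior, "predator", sender_gain=1.0))  # True
```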
Or rather, this meets our criteria for deceptive signaling at first. If the P. rey population counteradapts to make correct Bayesian updates in the new world containing deceptive P. redators, then in the new equilibrium, seeing the mating signal causes a P. rey to update its what's-in-this-area probability distribution from { mate, predator, nothing} to { mate, predator}. But now the counteradapted P. rey is not updating in the wrong direction. If both mates and predators send the same signal, then the likelihood ratio between them is one; the observation doesn't favor one hypothesis more than the other.
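Running the numbers on the counteradapted equilibrium (a minimal sketch using the priors from the story):

```python
# Prior over what's in the area, and P(flash | state) once P. redator
# always mimics the mating signal.
prior = {"mate": 1/3, "predator": 1/6, "nothing": 1/2}
likelihood = {"mate": 1.0, "predator": 1.0, "nothing": 0.0}

# Bayes: posterior proportional to prior times likelihood.
joint = {s: prior[s] * likelihood[s] for s in prior}
total = sum(joint.values())
posterior = {s: p / total for s, p in joint.items()}

# Mate and predator get the same likelihood, so the flash can't discriminate
# between them; it only rules out the nothing-of-interest state.
print(posterior)  # mate: 2/3, predator: 1/3, nothing: 0
```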
So ... is the P. redator's use of the mating signal no longer deceptive after it's been "priced in" to the new equilibrium? Should we stop calling the flashes the "P. rey mating signal" and start calling them the "P. rey mating and/or P. redator prey-luring signal"? Do we agree with the executive in Moral Mazes who said, "We lie all the time, but if everyone knows that we're lying, is a lie really a lie?"
Some authors are willing to bite this bullet in order to preserve our tidy formal definition of deception. (Don Fallis and Peter J. Lewis write: "Although we agree [...] that it seems deceptive, we contend that the mating signal sent by a [predator] is not actually misleading or deceptive [...] not all sneaky behavior (such as failing to reveal the whole truth) counts as deception".)
Personally, I don't care much about having tidy formal definitions of English words; I want to understand the general laws governing the construction and perversion of shared maps, even if a detailed understanding requires revising or splitting some of our intuitive concepts. (Cailin O'Connor writes: "In the case of deception, though, part of the issue seems to be that we generally ground judgments of what is deceptive in terms of human behavior. It may be that there is no neat, unitary concept underlying these judgments.")
Whether you choose to describe it with the signal/word "deceptive", "sneaky", Täuschung, הונאה, 欺瞞, or something else, something about P. redator's signal usage has the optimizing-for-the-inaccuracy-of-shared-maps property. There is a fundamental asymmetry underlying why we want to talk about a mating signal rather than a 2/3-mating-1/3-prey-luring signal, even if the latter is a better description of the information it conveys.
Brian Skyrms and Jeffrey A. Barrett have an explanation in light of the observation that our sender–receiver framework is a sequential game: first, the sender makes an observation (or equivalently, Nature chooses the type of sender—mate, predator, or null in the story about fireflies), then the sender chooses a signal, then the receiver chooses an action. We can separate out the propositional content of signals from their informational content by taking the propositional meaning to be defined in the subgame where the sender and receiver have a common interest—the branches of the game tree where the players are trying to communicate.
Thus, we see that deception is "ontologically parasitic" in the sense that holes are. You can't have a hole without some material for it to be a hole in; you can't have a lie without some shared map for it to be a lie in. And a sufficiently deceptive map, like a sufficiently holey material, collapses into noise and dust.
Bibliography
I changed the species names in the standard story about fireflies because I can never remember which of Photuris and Photinus is which.
Fallis, Don and Lewis, Peter J., "Toward a Formal Analysis of Deceptive Signaling"
O'Connor, Cailin, Games in the Philosophy of Biology, §5.5, "Deception"
Skyrms, Brian, Signals: Evolution, Learning, and Information, Ch. 6, "Deception"
Skyrms, Brian and Barrett, Jeffrey A., "Propositional Content in Signals"

I'm glad you're bringing sender-receiver lit into this discussion! It's been useful for me to ground parts of my thinking. What follows is almost-a-post's worth of, "Yes, and also..."
Stable "Deception" Equilibrium
The firefly example showed how an existing signalling equilibrium can be hijacked by a predator. What was once a reliable signal becomes unreliable. As things settle into the new equilibrium, seeing a light stops giving any information about whether the signal is coming from a mate or a predator (though it still rules out the nothing-here state).
Part of what ensures this result is the totally opposed payoffs of P. rey and P. redator. In any signalling game where the payouts are zero-sum, there isn't going to be an equilibrium where the signals convey information.
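One quick way to see this (a hypothetical sketch using simple urn-style Roth-Erev reinforcement; none of this code is from the post): reward the sender only when the receiver guesses wrong, and the receiver only when it guesses right. No signalling convention stabilizes, and the receiver's long-run accuracy stays near the 50% it could get by ignoring the signal entirely.

```python
import random

random.seed(0)
sender = {s: [1.0, 1.0] for s in range(2)}    # state -> weight per signal
receiver = {m: [1.0, 1.0] for m in range(2)}  # signal -> weight per guess

def draw(weights):
    # Sample an index with probability proportional to its weight.
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def play():
    state = random.randrange(2)
    signal = draw(sender[state])
    guess = draw(receiver[signal])
    if guess == state:
        receiver[signal][guess] += 1.0  # receiver is rewarded for being right...
    else:
        sender[state][signal] += 1.0    # ...sender is rewarded when it's wrong
    return guess == state

results = [play() for _ in range(20000)]
accuracy = sum(results) / len(results)
print(f"receiver accuracy with opposed payoffs: {accuracy:.2f}")
```

Instead of converging, the strategies chase each other in cycles: whenever a signal-state correlation emerges, the sender is reinforced for breaking it.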
More complex, varied payouts can have more interesting results:
Again, at the level of the sender-receiver game this is deception, but it still feels a good bit different from what I intuitively track as deception. This might be best stated as an example of "equilibrium of ambiguous communication as a result of semi-adversarial payouts."
Intention
I want to emphasize that the sender-receiver model and Skyrms' use of "informational content" are not meant to provide an explanation of intention. Information is meant to be more basic than intent, and present in cases (like bacteria) where there seems to be no intent. Skyrms seems to be responding to some scholars who want to say "intent is what defines communication!", and like Skyrms, I'm happy to say that communication and signals seem to cover a broad class of phenomena, of which intent would be a super-specialized subset.
For my two cents, I think that intent in human communication involves both goal-directedness and having a model of the signalling equilibrium that can be plugged into an abstract reasoning system.
In sender-receiver games, the learning of a signalling strategy often happens either through replicator dynamics or a very simple Roth-Erev reinforcement learning rule. These are simple mechanisms that act quite directly and don't afford any reflection on the mechanism itself. Humans can not only reliably send a signal in the presence of a certain stimulus, but can also do, "I'm bored, I know that if I shout 'FIRE!' Sarah is gonna jump out of her skin, and then I'll laugh at her being surprised." Another fun example that seems to rely on being able to reason about the signalling equilibrium itself is: "what would I have to text you to covertly convey that I've been kidnapped?"
I think human communication is always a mix of intentional and non-intentional communication, as I explore in another post. When it comes to deception, a lot of people seem to want to use intention to draw the boundary between "should punish" and "shouldn't punish", but I see it more as a question of "what sort of optimization system is working against me?" I'm tempted to say "intentional deception is more dangerous because it means the full force of their intellect is being used to deceive you, as opposed to just their unconscious," but that wouldn't be quite right. I'm still developing my thoughts on this.
Far from equilibrium
I expect it's most fruitful to think of human communication as an open system that's far from equilibrium, most of the time. Thinking of equilibrium helps me think of directions things might move, but I don't expect everyone's behavior to be "priced into" most environments.