(part 1)
The central point about the temperature example is that facts about which properties really exist and which are just combinations of others are mostly, if not entirely, epiphenomenal. For instance, we can store the momenta of the particles, or their masses and velocities. There are many invertible functions we could apply to phase space, some of which would keep the calculations simple and some of which would not, but it's very unclear, and for most purposes irrelevant, which is the real one.
So when you say that X is/isn't ontologically fundamental, you aren't doing so on the basis of evidence.
Temperature is an average. All individual information about the particles is lost, so you can't invert the mapping from exact microphysical state to thermodynamic state.
Most of the invertible functions you mention would reduce to one of a handful of non-redundant functions, obfuscated by redundant complexity.
Causality, if we have the true model of physics, is defined by counterfactuals. If we hold everything else constant and change X to Y, what happens? So if we have a definition of "everything else constant" wrt mental states, we're done. We certainly can construct one wrt temperature (linearly scale the velocities).
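The "linearly scale the velocities" counterfactual can be sketched directly. Assuming the same toy gas of unit-mass particles, scaling every velocity by a factor s multiplies the mean kinetic energy by s², so the right factor for a target temperature is s = sqrt(target / current):

```python
def temperature(velocities):
    """Mean kinetic energy per particle (unit masses assumed)."""
    return sum(v * v for v in velocities) / len(velocities)

def set_temperature(velocities, target):
    """Counterfactual intervention: hold everything else (positions,
    particle identities) fixed and rescale velocities so the
    temperature becomes `target`. Scaling each v by s multiplies the
    mean kinetic energy by s**2, so s = sqrt(target / current)."""
    s = (target / temperature(velocities)) ** 0.5
    return [s * v for v in velocities]

vs = [1.0, -2.0, 3.0]
new_vs = set_temperature(vs, 2.0 * temperature(vs))  # "change X to Y"
print(temperature(new_vs) / temperature(vs))  # ~2.0: temperature doubled
```

This is exactly a well-defined "everything else constant" operation for temperature, which is what the argument asks for in the case of mental states.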
Your model of physics has to have some microscopic or elementary non-counterfactual notion of causation for you to use it to calculate these complex macroscopic counterfactuals. Of course in the real world we have quantum mechanics, not the classical ideal gas we were discussing, and your notion of elementary causality in quantum mechanics will depend on your interpretation.
But I do insist there's a difference between an elementary, fundamental, microscopic causal relation and a complicated, fuzzy, macroscopic one. A fundamental causal connection, like the dependence of the infinitesimal time evolution of one basic field on the states of other basic fields, is the real thing. As with "existence", it can be hard to say what "causation" is. But whatever it is, and whether or not we can say something informative about its ontological character, if you're using a physical ontology, such fundamental causal relations are the place in your ontology where causality enters the picture and where it is directly instantiated.
Then we have composite causalities - dependencies among macroscopic circumstances, which follow logically from the fundamental causal model, and whose physical realization consists of a long chain of elementary causal connections. Elementary and composite causality do have something in common: in both cases, an initial condition A leads to a final condition B. But there is a difference, and we need some way to talk about it - the difference between the elementary situation, where A leads directly to B, and the composite situation, where A "causes" B because A leads directly to A' which leads directly to A'' ... and eventually this chain terminates in B.
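The structural contrast can be put in a toy computational form: one elementary rule that takes A directly to A', versus a macroscopic dependence that holds only because many such direct links are chained together. (The dynamics here is a placeholder, not a physical model.)

```python
def elementary_step(state):
    """The fundamental, direct causal relation: one tick of time
    evolution. A leads *directly* to A'. (Placeholder dynamics.)"""
    return state + 1

def composite_cause(state, ticks):
    """A 'causes' B only in the derived sense: A leads directly to A',
    which leads directly to A'', ... until the chain terminates in B."""
    for _ in range(ticks):
        state = elementary_step(state)
    return state

print(composite_cause(0, 100))  # B reached via 100 elementary links
```

Both relations take an initial condition to a final condition, but only the first is directly instantiated; the second is a logical consequence of iterating it.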
Also - and this is germane to the earlier discussion about fuzzy properties and macroscopic states - in composite causality, A and B may be highly approximate descriptions; classes of states rather than individual states. Here it's even clearer that the relation between A and B is more a highly mediated logical implication than it is a matter of A causing B in the sense of "particle encounters force field causes change in particle's motion".
How does this pertain to consciousness? The standard neuro-materialist view of a mental state is that it's an aggregate of computational states in neurons, these computational states being, from a physical perspective, less than a sketch of the physical reality. The microscopic detail doesn't matter; all that matters is some gross property, like trans-membrane electrical potential, or something at an even higher level of physical organization.
I think I've argued two things so far. First, qualia and other features of consciousness aren't there in the physical ontology, so that's a problem. Second, a many-to-one mapping is not an identity relation; it's better suited to property dualism, so that's also a problem.
Now I'd add that the derived nature of macroscopic "causes" is also a problem, if you want to have the usual materialist ontology of mind and you also want to say that mental states are causes. And as with the first two problems, this third problem can potentially be cured in a theory of mind where consciousness resides in a structure made of ontologically fundamental properties and relations, rather than fuzzy, derived, approximate ones. This is because it's the fundamental properties which enter into the fundamental causal relations of a reductionist ontology.
In philosophy of mind, there's a "homunculus fallacy", where you explain (for example) the experience of seeing as due to a "homunculus" ("little human") in your brain, which is watching the sensory input from your eyes. This is held to be a fallacy that explains nothing and risks infinite regress. But something like this must actually be true; seeing is definitely real, and what you see directly is in your skull, even if it does resemble the world outside. So I posit the existence of what Dennett calls a "Cartesian theater", a place where the seeing actually happens and where consciousness is located; it's the end of the sensory causal chain and the beginning of the motor causal chain. And I further posit that, in current physical language, this place is a "quantum system", not just a classically distributed neural network; because this would allow me to avoid the problems of many-to-one mappings and of derived macroscopic causality. That way, the individual conscious mind can have genuine causal relations with other objects in the world (the simpler quantum systems that are its causal neighbors in the brain).
Anyway, the existence of a one-to-one mapping is a necessary but not a sufficient condition for a proposed identity statement to be plausible.
What are the other conditions?
That's way too hard, so I'll just illustrate the original point: You can map a set of three donkeys onto a set of three dogs, one-to-one, but that doesn't let you deduce that a dog is a donkey.
...at least not if you accept a certain line of anthropic argument.
Thomas Nagel famously challenged the philosophical world to come to terms with qualia in his essay "What is it Like to Be a Bat?". Bats, with sensory systems so completely different from those of humans, must have exotic bat qualia that we could never imagine. Even if we deduce all the physical principles behind echolocation, even if we could specify the movement of every atom in a bat's senses and nervous system that represents its knowledge of where an echolocated insect is, we still have no idea what it's like to feel a subjective echolocation quale.
Anthropic reasoning is the idea that you can reason conditioning on your own existence. For example, the Doomsday Argument says that you would be more likely to exist in the present day if the overall number of future humans was medium-sized instead of humongous, therefore since you exist in the present day, there must be only a medium-sized number of future humans, and the apocalypse must be nigh, for values of nigh equal to "within a few hundred years or so".
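The Bayesian arithmetic behind this can be sketched with deliberately round, made-up numbers. Under the self-sampling assumption, the likelihood of having birth rank r out of N total humans is 1/N, so observing an early rank favors a small N:

```python
# Doomsday Argument sketch. All figures are illustrative orders of
# magnitude, not real demographic data.

rank = 1e11                     # roughly, humans born before you
n_small = 2e11                  # "medium-sized" total: doom fairly soon
n_large = 2e14                  # "humongous" total: vast future
prior_small = prior_large = 0.5

# Self-sampling: P(your rank | N total humans) = 1/N, if rank <= N.
like_small = 1.0 / n_small if rank <= n_small else 0.0
like_large = 1.0 / n_large if rank <= n_large else 0.0

evidence = prior_small * like_small + prior_large * like_large
posterior_small = prior_small * like_small / evidence
print(posterior_small)  # ~0.999: your rank strongly favors the small total
```

With these numbers the "medium-sized" hypothesis gets boosted by a factor of a thousand, which is the whole force of the argument; everything contentious is hidden in the choice of reference class and priors.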
The Buddhists have a parable to motivate young seekers after enlightenment. They say - there are zillions upon zillions of insects, trillions upon trillions of lesser animals, and only a relative handful of human beings. For a reincarnating soul to be born as a human being, then, is a rare and precious gift, and an opportunity that should be seized with great enthusiasm, as it will be endless eons before it comes around again.
Whatever one thinks of reincarnation, the parable raises an interesting point. Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. So if it were possible for me to be an animal, I almost certainly would be one. Since I am not, there is a strong anthropic argument that it is impossible for me to be an animal.
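The ratio doing the work here can be made explicit with rough order-of-magnitude figures (the exact numbers don't matter, only the disparity):

```python
# Illustrative orders of magnitude only. If the anthropic reference
# class includes all animals, finding yourself human is a huge surprise.

humans = 8e9          # living humans, order of magnitude
insects = 1e19        # a commonly quoted order-of-magnitude estimate

p_human = humans / (humans + insects)
print(p_human)  # ~8e-10: "vanishingly low" if animals count as observers
```

A probability on the order of one in a billion is the same kind of surprise the Doomsday Argument trades on, which is what licenses the parallel drawn below.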
The phrase "for me to be an animal" may sound nonsensical, but "why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?". If the doomsday argument is sufficient to prove that some catastrophe is preventing me from being one of a trillion spacefaring citizens of the colonized galaxy, this argument hints that something is preventing me from being one of a trillion bats or birds or insects.
And this could be that animals lack subjective experience. This would explain quite nicely why I'm not an animal: because you can't be an animal, any more than you can be a toaster. So Thomas Nagel can stop worrying about what it's like to be a bat, and the rest of us can eat veal and foie gras guilt-free.
But before we break out the dolphin sausages - this is a pretty weird conclusion. It suggests there's a qualitative and discontinuous difference between the nervous system of other beings and our own, not just in what capacities they have but in the way they cause experience. It should make dualists a little bit happier and materialists a little bit more confused (though it's far from knockout proof of either).
The most significant objection I can think of is that it is significant not that we are beings with experiences, but that we know we are beings with experiences and can self-identify as conscious - a distinction that applies only to humans and maybe to some species like apes and dolphins who are rare enough not to throw off the numbers. But why can't we use the reference class of conscious beings if we want to? One might as well consider it significant only that we are beings who make anthropic arguments, and imagine there will be no Doomsday but that anthropic reasoning will fall out of favor in a few decades.
But I still don't fully accept this argument, and I'd be pretty happy if someone could find a more substantial flaw in it.