This is to expand on my earlier comment. It really deserves to be its own post, and in fact I was already in the process of writing that post when I came across Jessica's work here. Her explanation for the antilinear-linear inner product is interesting, and initially seemed like it was doing something right, but it is completely different from my own approach. To be precise, I didn't have an explanation for why the inner product was antilinear-linear; I had only gotten to the point where I knew there was a bilinear form for physical observables, so I was hoping to incorporate some of Jessica's ideas into my own post. However, there was one detail she was missing: why is it a bilinear form in the first place? I had the explanation, but I could not see it materializing from her approach. Ultimately, I've concluded that the reason her approach seems to work is coincidental, and the mirror, conjugate space is not related to the inner product.
So, why a bilinear form? It is because observations are gauge-invariant, and gauge-invariant things are generated by the field strength tensor $F^{\mu\nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$, where $\partial^\mu$ is the contravariant derivative along the spacetime manifold. Why two derivative indices, rather than one or three? Because $\partial^2 = 0$.
The boundary or derivative operator comes from simplicial complexes:
$$\partial\,[v_0, \ldots, v_n] = \sum_{i=0}^{n} (-1)^i\,[v_0, \ldots, \hat{v}_i, \ldots, v_n].$$
These simplicial complexes are in turn one piece of the tensor product. For example, we can identify
$$[v_0, v_1] \;\longleftrightarrow\; v_0 \otimes v_1 - v_1 \otimes v_0,$$
which matches
$$\partial\,[v_0, v_1] = [v_1] - [v_0] \;\longleftrightarrow\; v_1 - v_0.$$
It seems rather strange to do this, until you recall the Schur-Weyl duality. Consider the tensor space $V^{\otimes n}$ and the group actions of the symmetric group $S_n$ (permuting the tensor factors) and the general linear group $GL(V)$ (acting diagonally on each factor).
These group actions commute, so they are mutual centralizers and decompose into irreducible representations together. We can decompose tensors into a direct sum of these irreducible representations, i.e. break them up into their "symmetric pieces":
$$V^{\otimes n} \;\cong\; \bigoplus_{\lambda \vdash n} \mathbb{S}^\lambda(V) \otimes M^\lambda,$$
where $\lambda$ ranges over partitions of $n$, $\mathbb{S}^\lambda(V)$ is the Schur functor applied to $V$, and $M^\lambda$ is the corresponding irreducible $S_n$-representation.
The corresponding group actions for each piece are represented by $GL(\mathbb{S}^\lambda(V))$. For the antisymmetric, or alternating, representation, the top piece $\Lambda^n V$ is a one-dimensional vector space: it contains one basis vector, $e_1 \wedge \cdots \wedge e_n$, and multiples of that vector. This means the corresponding group consists of $1 \times 1$ matrices (or scalars), and has lots of nice properties due to being abelian. Rotations will be smooth and transitive, as speck1447 explains here. This is why the boundary operator typically uses this irreducible representation.
However, we can define a more general boundary as a map to a symmetric piece of one degree lower,
$$\partial : \mathbb{S}^\lambda(V) \subseteq V^{\otimes n} \;\longrightarrow\; \mathbb{S}^{\mu}(V) \subseteq V^{\otimes (n-1)},$$
where $\lambda$ is a partition of $n$ and $\mu$ is a partition of $n-1$. It looks exactly the same, but we use the permutation action of the representation rather than the sign to permute an index to the front and drop it. The reason $\partial^2 = 0$ for the alternating representation is because there are $2! = 2$ orders you can drop two indices in, and summing the different permutations (with their signs) gives you zero.
In general, you will get $\partial^k = 0$ when the representation kills the sum over the $k!$ orders of dropping $k$ indices.
The minimal $k$ is not always two, so other representations will ultimately lead to multilinear forms of different degrees. An equivalent way to look at it is from the group side. Rotations are essentially continuous permutations, so the generalized idea of a rotation is an $n \times n$ matrix $R$ where
$$R^T R = I, \qquad \det R = 1$$
(the kernel of $\det : O(n) \to \{\pm 1\}$). Mirroring and other orthogonal matrices are generalized to
$$R^T R = I, \qquad \det R = \pm 1.$$
We are looking for a multilinear form that is invariant to orthogonal transformations, so for any $R$ and any tensor $x$,
$$f(R \cdot x) = f(x)$$
(note: $R$ is applied diagonally). It is enough to find a single vector $w$ where $R \cdot w = w$ for all $R$. Then we can set $f(w) = 1$ and, for non-multiples of $w$, $f = 0$. The number of antisymmetrizers is the number of rows of $\lambda$, so there will always be some $R$ with $R \cdot w \neq w$ until the degree reaches the minimal $k$. This gives the minimal degree.
This equivalent view is closer to what Jessica is doing. You can think of $V^{\otimes n}$ as the original vector space plus several "dual spaces". However, the "mirror images" come from the non-identity connected components of the generalized orthogonal group, and for any non-trivial, non-alternating representation there are way more mirror images than dual spaces! It is almost a lucky coincidence that the two counts agree for the alternating representation, and in general, we should not expect to be able to find dual spaces by looking at mirror images.
In terms of the goals of the OP, I was taking standard QM notation at face value and trying to peel back one layer of the onion: why the antilinear-linear inner product, why POVM observables as Hermitians implying quadratic forms, why quadratic density matrices, etc.? And I think real structure + conjugate spaces basically explains these dualities in the standard Hilbert space formulation, although that doesn't mean this is the deepest symmetry layer (it's not).
> field strength tensor
This is not an area I understand well, but the field values are expressed as either real or complex numbers. If real: the OP framework doesn't apply, or applies at a lower level; perhaps it applies indirectly through CPT. If complex: then I'm guessing similar principles will apply, where for example you can "make a complex number real" as $\bar{z}z = |z|^2$, yielding an inner product that looks like the Hilbert space inner product. The thing to look for would be: are complex conjugates showing up all over the place in the standard math? If so, it may perhaps be modeled with real structure & complex conjugate spaces.
> we should not expect to be able to find dual spaces by looking at mirror images
For Hilbert spaces I'm applying Riesz representation for that; standard machinery for bra-ket. Not sure how far the principle generalizes. (Much of OP could work without Riesz representation, e.g. we can still define $\langle \bar{u}, - \rangle$ just fine; it's just that $\bar{u} \mapsto \langle \bar{u}, - \rangle$ might not be an invertible map.)
> We can identify
This is looking a bit like the 'swap' real structure on $\overline{V} \otimes V$. The main difference is that it's dealing with real rather than complex numbers. With the swap real structure, the 'imaginary component' would be written as $\frac{1}{2}(\bar{u} \otimes v - \bar{v} \otimes u)$. This of course looks similar to $v_0 \otimes v_1 - v_1 \otimes v_0$.
Here's one place where the analogy with symmetric group permutations breaks down. Say our Hilbert space is $H_1 \otimes H_2$. Now our inner product (written to be bilinear) is $\langle -, - \rangle : \overline{H_1 \otimes H_2} \times (H_1 \otimes H_2) \to \mathbb{C}$. We can see this as quadri-linear in $\overline{H_1}$, $\overline{H_2}$, $H_1$, $H_2$. Notice with '$n$-linearity' we always have $n$ even, and the elements come in pairs. We pair $\overline{H_1}$ with $H_1$ and $\overline{H_2}$ with $H_2$.
What the standard Hilbert space framework is going to do is, by bundling $H_1 \otimes H_2$ into a single space $H$, make the $\overline{H}$-and-$H$ bilinearity explicit, with the per-factor bilinearities implicit (through the inner product and bra/kets and so on). This is why we end up with quadratics all over the formalism: Born rule, inner product, density matrices, POVMs.
So for your analogy to hold, it's critical that $n$ be even. And what the 'swap' real structure on $\overline{H_1} \otimes \overline{H_2} \otimes H_1 \otimes H_2$ is going to do is swap elements with their conjugated elements ($\overline{H_1}$ with $H_1$, and $\overline{H_2}$ with $H_2$). This is going to be a specific involutive permutation.
So I guess if you wanted to get into deeper foundations, then a question to ask would be "why do the tensor products that POVM observables are linear in factor into components that come in pairs with their complex conjugate spaces (or something iso to their complex conjugate spaces)?". Which, again, OP isn't very much about.
Perhaps I'm missing something you're saying, or you're missing something I'm saying. In general, the process of finding a "conjugate space" is not an involution. We do not have inner products or Hilbert spaces. There are no pairs. We have to motivate the inner product, and it is motivated by first motivating multilinear forms, and then motivating bilinear forms. But bilinear forms only arise in the alternating representation.
There is a terminology issue here, because "dual" literally means a paired, mirror space, so treating the inner product as a bilinear form on a space and its dual only works in the alternating representation. What you're actually doing to generate that dual space is to look at the connected components of the generalized orthogonal group.
For the alternating representation, there is the component connected to the identity, $SO(n)$, as well as the mirrored component $O(n) \setminus SO(n)$. But there are many more connected components for other representations. Maybe some of them have involutions to each other, but not all of them. Not all of them are "dual" or "conjugate" in the literal sense of the word. This is the terminology issue, and I think the main source of confusion.
Also, my analogy does not break down for odd $n$. Note that the symmetric piece is an irreducible module of $V^{\otimes n}$, not the same space, and the tensor product of Hilbert spaces is an entirely new tensor product. I was trying to keep my comment from growing longer than it already was, so I may have left out many other little details like this that would help with interpretation.
To disambiguate terminology: by "dual space" I mean the standard meaning in QM; we go from a complex vector space $V$ to a space of linear maps $V \to \mathbb{C}$, with the bra/ket duality as a special case. Perhaps I should avoid using "dual", but this explains my earlier usage.
By "complex conjugate space" (sometimes abbreviated as "conjugate space") I mean specifically the construction, a completely formal mathematical operation. (To avoid confusion I could say "complex conjugate space")
Since "complex conjugate space" is an entirely formal construction, it doesn't necessarily have a physical meaning. Or it could have multiple physical meanings as different isos with V.
OP doesn't try to go lower-level than Hilbert space; what follows is my attempt to engage with the level you're talking about:
A "duality" like thing is "polarity of representation of " as implied by the category . Where to simplify in OP, I'm always using the polarity between V and in the representation, but this is not strictly necessary.
The correspondence with the SO(n) model might be: when you are representing an element of $BU(1)$ as a complex vector space ($B$ means "delooping groupoid"), you are picking out a sub-group of $SO(n)$ iso to $U(1)$. To get to a Hilbert space you clearly have to at least pick out a subgroup.
Another idea is to find a groupoid functor $U(1)^\pm \to BO(n)$. This does not necessarily get you an $O(2)$ subgroup, because $J$ and $J^{-1}$ in $U(1)^\pm$ need not map to the same group element of $O(n)$. (Hence spinors and so on.)
If you have a representation $BO(n) \to \mathrm{Vect}_{\mathbb{R}}$ and a groupoid functor $U(1)^\pm \to BO(n)$, you can trivially compose to get a polar representation $U(1)^\pm \to \mathrm{Vect}_{\mathbb{R}}$. That gives you, as the positive component, something like a Hilbert space, and as the negative component, something formally iso to its complex conjugate space (though with a better physical interpretation).
At this point you can use the formal isos: $\overline{V} \cong V^\vee$ (Riesz representation), and the iso from $\overline{V}$ to the polar negative of your Hilbert-like space (where polar negation is from the $U(1)^\pm$ representation). This gives you nice physical interpretations of bra/kets, density matrices, observables, and so on, through formal isos.
(I am not sure how much I'm understanding or how much is connecting; feel free to ignore irrelevant detail)
I disagree. In general, you need a multilinear form of degree $k$, where $k$ is minimal with $\partial^k = 0$, while the number of mirror spaces is the degree of the representation. You are very lucky that the rotations and mirroring you use come from the alternating representation, where these happen to match, since both degrees equal two.
I am in the process of writing a much longer comment, but I think the primary question your post leaves unanswered is, "why a bilinear form, not any other degree?" and pulling on that thread unravels the understanding this post allegedly gives.
The bilinear form is from the inner product. The inner product is generally defined as 'antilinear in the first argument, linear in the second'. If we replace the first argument's space with the complex conjugate space, it is now 'linear in the first argument, linear in the second argument', i.e. bilinear. This is of course for a single Hilbert space (and its complex conjugate). When multiple Hilbert spaces are tensored together, the 'tensor product of Hilbert spaces' yields an inner product for the tensored space. That inner product is implicitly multilinear, although it decomposes, as usual for tensor products, into multiple bilinearities.
Why does quantum mechanics use complex numbers extensively? Why is the inner product of a Hilbert space antilinear in the first argument? Why are Hermitian operators important for representing observables? And what is the i in the Schrödinger equation doing? This post explores these questions through the framework of groupoid representation theory. While this post assumes basic familiarity with complex vector spaces and quantum notation, it does not require much pre-existing conceptual understanding of QM.
Roughly, there are two kinds of complex numbers in physics. One is a phasor: something that has a phase. The other is a scalar: a unitless number representing a combined scale factor and phase-translation. Scalars act on phasors by translating their phases. Generally speaking, scalars are better understood as elements of an algebraic structure (groups, monoids, rings, fields), while phasors are better understood as vectors or components of vectors. We will informally use the term "multi-phasor" for a collection of phasors, such as an element of $\mathbb{C}^n$.
An example of a phasor would be an ideal harmonic oscillator, which has a position given by $x(t) = a\sin(\omega t + \phi)$. Its state is best thought of as also including its velocity $x'(t) = a\omega\cos(\omega t + \phi)$. Note that $(x'(t), x(t)\omega)$ has constant magnitude, corresponding to conservation of energy. Now over time, the normalized $\mathbb{R}^2$ point moves in a circle. Phase-translating the system would imply cyclical movement through phase space; a full cycle happens in time $2\pi/\omega$. A complex scalar specifies both how to phase-translate a phasor, and how to scale it (here, scaling would apply to both position and velocity). By representing the phasor $(x'(t), x(t)\omega)$ as a complex number such as $x'(t) + x(t)\omega i$, multiplying by a complex scalar will phase-translate and scale. Here, multiplying by $i$ represents moving forward a quarter of one cycle, though in other representations, $-i$ would do so instead. The phasor is inherently more symmetric than the scalar; which phase to consider "1" in this complex representation, and whether multiplication by $i$ steps time forwards or backwards, are fairly arbitrary.
To understand complex scalars group-theoretically, let us denote by $\mathbb{C}^\times$ the non-zero complex numbers, considered as a group under multiplication. An element of the group can be thought of as a combined positive scaling and phase translation. Let $U(1)$ be the sub-group of $\mathbb{C}^\times$ consisting of unitary complex numbers (those with absolute value 1); see also unitary group. Let $\mathbb{R}^+$ be the positive reals considered as a group under multiplication. Now the decomposition $\mathbb{C}^\times \cong \mathbb{R}^+ \times U(1)$ holds: multiplication by a non-zero complex number combines scaling and phase translation.
First attempt: $\mathbb{C}^\times$-symmetric sets
If $G$ is a group, let $BG$ be the delooping groupoid: a groupoid with a single object ($\star$), and one morphism per element of $G$. A convenient notation for the category of $G$-symmetric sets (and equivariant maps between them) is the functor category $[BG, \mathrm{Set}]$. In this case, $G = \mathbb{C}^\times$.
A $\mathbb{C}^\times$-symmetric set is a set $S$ with a group action $\lambda * s$ ($\lambda \in \mathbb{C}^\times$, $s \in S$) satisfying $1 * s = s$ and $a * (b * s) = ab * s$. We now have a set of elements that can be scaled and phase-translated. Hence, $S$ conceptually represents a set of phasors (or multi-phasors), which are acted on by complex scalars.
Let $S$, $T$ be $\mathbb{C}^\times$-symmetric sets. A map $f : S \to T$ is equivariant iff $f(\lambda * x) = \lambda * f(x)$ for all $\lambda \in \mathbb{C}^\times$. This is looking a lot like linearity, though we do not have zero or addition. To handle additivity, it will help to factor out the $\mathbb{R}^+$ symmetry.
Second attempt: U(1)-symmetric real vector spaces
We will use real vector spaces to factor out the $\mathbb{R}^+$ symmetry. While we could use $\mathbb{R}^{\geq 0}$ semimodules (to model negation as action by $-1 \in U(1)$), real vector spaces are mathematically nicer. Let $\mathrm{Vect}_{\mathbb{R}}$ be the category of real vector spaces and linear maps between them.
To get at the idea of using real vector spaces to handle $\mathbb{R}^+$ symmetry, we consider the functor category $[BU(1), \mathrm{Vect}_{\mathbb{R}}]$. Each element is a real vector space with a $U(1)$ action. We can write the action as $a * s$ for complex unitary $a$. Note $s \mapsto a * s$ is linear for fixed $a$.
Let $U$, $V$ be real vector spaces with $U(1)$ symmetry. A linear map $f : U \to V$ is $U(1)$-equivariant iff $f(a * x) = a * f(x)$ for all complex unitary $a$.
Now suppose we have opposite-phase cancellation: $(-1) * x = -x$ for $x \in U$, which is valid for ideal harmonic oscillators, and of course relevant to destructive interference. We now extend $U$ to a complex vector space, defining scalar multiplication as $(a + bi) \cdot x = a \cdot x + b \cdot (i * x)$ for real $a$, $b$. This is a standard linear complex structure with the linear automorphism $x \mapsto i * x$. The assumption of opposite-phase cancellation is therefore the only distinction between a $U(1)$-symmetric real vector space in $[BU(1), \mathrm{Vect}_{\mathbb{R}}]$ and a proper complex vector space.
(an abstract representation of the double-slit experiment, depicting opposite-phase cancellation through representation of phasors as colors)
Third attempt: O(2)-symmetric real vector spaces
We see that $[BU(1), \mathrm{Vect}_{\mathbb{R}}]$ is close to the category of complex vector spaces and linear maps between them. Note that $U(1) \cong SO(2)$, where $SO(2)$ is the group of rotations of the Euclidean plane. This of course relates to visualizing phase translation as rotation, and seeing phasors as moving in a circle. While $SO(2)$ gives 2D rotational symmetries of a circle, it does not give all symmetries of a circle. That would be the orthogonal group $O(2)$, which includes both rotation and mirroring. We could conceptualize mirroring as a quasi-scalar action: if action by $i$ rotates a wheel counter-clockwise 90 degrees, then mirroring is like turning the wheel to its opposite side, so clockwise reverses with counter-clockwise.
To make the application to phase translation more direct, we will present $O(2)$ using unitary complex numbers. The group has the following elements (closed under group operations): $M(a)$ for complex unitary $a$, meant to represent a phase translation, and $J$, meant to represent a distinguished mirroring. We have the following algebraic identities:
$$M(a)\,M(b) = M(ab), \qquad J^2 = I, \qquad J\,M(a) = M(\bar{a})\,J.$$
Note that since $a$ is unitary, the conjugate $\bar{a} = a^{-1}$ is an inverse. We can derive that $J\,M(a)\,J = M(a)^{-1}$, so mirroring reverses the way phase translations go, as expected.
Now the category $[BO(2), \mathrm{Vect}_{\mathbb{R}}]$, noting the previous correspondence with complex vector spaces, motivates the following definition. A real structure on a complex vector space $V$ is a function $\sigma : V \to V$ that is an antilinear involution, i.e.:
$$\sigma(\sigma(v)) = v, \qquad \sigma(u + v) = \sigma(u) + \sigma(v), \qquad \sigma(\lambda v) = \bar{\lambda}\,\sigma(v).$$
For example, $\mathbb{C}$ has a real structure $\sigma(\lambda) = \bar{\lambda}$. So the involution $\sigma$ generalizes complex conjugation.
Now, if $U$ and $V$ are complex vector spaces with real structures $\sigma_U$, $\sigma_V$, then a linear map $f : U \to V$ is $\sigma$-linear iff it satisfies $f(\sigma_U(x)) = \sigma_V(f(x))$ for all $x \in U$. This condition accords with $O(2)$-equivariance.
While real structures are useful in quantum mechanics (notably in the theory of C* algebras), they are not well suited for quantum states themselves. Imposing a real structure on the state space, and a corresponding σ-linearity condition on state transitions, is too restrictive for Schrödinger time evolution.
Fourth attempt: O(2) as a groupoid
Since $O(2)$ has two topologically connected components (mirrored and un-mirrored), it is perhaps reasonable to separate them out, like two copies of $U(1)$ glued together. Conceptually, this allows conceiving of two phase-translatable spaces that mirror each other, rather than treating mirroring as an action within any phase-translatable space. We consider a "polar unitary groupoid" $U(1)^\pm$, which has two objects $\star_+$, $\star_-$. For complex unitary $a$, we have morphisms $M_+(a) : \star_+ \to \star_+$ and $M_-(a) : \star_- \to \star_-$, which compose and invert as usual for $U(1)$. We also have an isomorphism $J : \star_+ \to \star_-$ satisfying $J \circ M_+(a) = M_-(\bar{a}) \circ J$. $U(1)^\pm$ relates to $O(2)$ through a full and faithful functor ($J \mapsto J$, $M_+(a) \mapsto M(a)$, $M_-(a) \mapsto M(a)$). The only essential difference is in separating the two connected components of $O(2)$ into separate objects of the groupoid $U(1)^\pm$.
Now we can consider the functor category $[U(1)^\pm, \mathrm{Vect}_{\mathbb{R}}]$. An object of this category picks out two $U(1)$-symmetric real vector spaces (of which complex vector spaces are an important class), and provides a real-linear isomorphism between them corresponding to $J$; this isomorphism is not, in general, complex-linear. Importantly, the groupoidal identity $J \circ M_+(a) = M_-(\bar{a}) \circ J$ yields a corresponding fact: the two $U(1)$-symmetric real vector spaces have opposite phase-translation actions.
To simplify, we can achieve opposite phase-translation actions as follows. Let $V$ be a complex vector space. Let $\overline{V}$ be a complex vector space with the same elements as $V$ and the same addition function. The only difference is that scalar multiplication is conjugated: $a \cdot_{\overline{V}} v = \bar{a} \cdot_V v$. We call $\overline{V}$ the complex conjugate space of $V$.
Improving the notation, if $v \in V$, we write $\bar{v} \in \overline{V}$ for the corresponding vector in the complex conjugate space. Note the following:
$$\bar{u} + \bar{v} = \overline{u + v}, \qquad \bar{\lambda}\,\bar{v} = \overline{\lambda v}.$$
The choice of $\bar{v}$ notation here is not entirely standard (although $\overline{V}$ is standard), but is convenient in that, for example, $\bar{\lambda}\bar{v}$ looks like $\overline{\lambda v}$, and they are in fact equal.
Let $U$ and $V$ be complex vector spaces. If $f : U \to V$, define $\bar{f} : \overline{U} \to \overline{V}$ as $\bar{f}(\bar{u}) = \overline{f(u)}$. This definition matches what we would expect from morphisms (natural transformations) in $[U(1)^\pm, \mathrm{Vect}_{\mathbb{R}}]$. By treating $f$ and $\bar{f}$ as separate functions, we avoid the rigidity of $\sigma$-linearity.
What we have finally derived is a simple idea (complex vector spaces and their conjugates), but with a different groupoid-theoretic understanding. Now we will relate this understanding to quantum mechanics.
The inner product
In bridging from complex vector spaces to the complex Hilbert spaces used in quantum mechanics, the first step is to add an inner product, forming a complex inner product space. Traditionally, the inner product is a function $\langle -, - \rangle : H \times H \to \mathbb{C}$, where $H$ is a Hilbert space (or other complex inner product space). While the inner product is linear in its second argument, it is notoriously anti-linear in its first argument. So while on the one hand $\langle u, \lambda v \rangle = \lambda \langle u, v \rangle$, on the other hand, $\langle \lambda u, v \rangle = \bar{\lambda} \langle u, v \rangle$. Also, the inner product is conjugate symmetric: $\langle u, v \rangle = \overline{\langle v, u \rangle}$.
The anti-linearity and conjugate symmetry properties are not initially intuitive. To directly motivate anti-linearity, let $\psi \in H$ be a quantum state. Now the inner product $\langle \psi, \psi \rangle$ gives the square of the norm of the state $\psi$, as a non-negative real number. If the inner product were bilinear, then we would have $\langle i\psi, i\psi \rangle = i^2 \langle \psi, \psi \rangle = -\langle \psi, \psi \rangle$. But multiplying $\psi$ by $i$ is just supposed to change the phase, not change the squared norm. Due to antilinearity, $\langle i\psi, i\psi \rangle = \bar{i}\,i \langle \psi, \psi \rangle = \langle \psi, \psi \rangle$ as expected.
Now, the notion of a complex conjugate space is directly relevant. We can take the inner product as a bilinear map $\langle -, - \rangle : \overline{V} \times V \to \mathbb{C}$. The complex conjugate space $\overline{V}$ gracefully handles antilinearity: $\langle \overline{\lambda u}, v \rangle = \langle \bar{\lambda}\bar{u}, v \rangle = \bar{\lambda} \langle \bar{u}, v \rangle$. And we recover conjugate symmetry as $\langle \bar{u}, v \rangle = \overline{\langle \bar{v}, u \rangle}$; the overlines make the conjugate symmetry more intuitive, as we can see parity of conjugation is preserved.
Using the universal property of the tensor product, we can equivalently see a bilinear map $\overline{V} \times V \to \mathbb{C}$ as a linear map $\overline{V} \otimes V \to \mathbb{C}$; for the inner product, this corresponding map is $\langle \bar{u} \otimes v \rangle = \langle \bar{u}, v \rangle$. This correspondence motivates studying the complex vector space $\overline{V} \otimes V$.
Real structure on tensor products
The space $\overline{V} \otimes V$ has a real structure, by swapping: $\sigma(\bar{u} \otimes v) = \bar{v} \otimes u$. To check:
$$\sigma(\lambda(\bar{u} \otimes v)) = \sigma(\bar{u} \otimes \lambda v) = \overline{\lambda v} \otimes u = \bar{\lambda}(\bar{v} \otimes u) = \bar{\lambda}\,\sigma(\bar{u} \otimes v),$$
as desired. Of course, $\mathbb{C}$ also has a real structure, so we can consider $\sigma$-linear maps $\overline{V} \otimes V \to \mathbb{C}$. First we wish to check that the tensor-promoted inner product is $\sigma$-linear: $\langle \sigma(\bar{u} \otimes v) \rangle = \langle \bar{v} \otimes u \rangle = \overline{\langle \bar{u} \otimes v \rangle}$.
Noticing that the inner product is $\sigma$-linear of course raises the question of whether there are other interesting $\sigma$-linear maps $\overline{V} \otimes V \to \mathbb{C}$. But we need to bridge to standard notation first.
Bra-kets and dual spaces
Traditionally, a 'ket' $|\psi\rangle$ is notation for a vector in the Hilbert space $H$. A 'bra' $\langle\psi|$ is an element of the dual space of linear functionals of the form $H \to \mathbb{C}$; this dual space is called $H^\vee$. We convert between bras and kets as follows. Given a ket $v = |\psi\rangle$, the corresponding bra is $\langle v, - \rangle \in H^\vee$, which linearly maps kets to complex numbers. The ket-to-bra mapping is invertible, and antilinear, due to Riesz representation.
In our alternative notation, we would like the dual $V^\vee$ to be linearly, not antilinearly, isomorphic with $\overline{V}$. This is straightforward: given $\bar{u} \in \overline{V}$, we take the partial application $\langle \bar{u}, - \rangle \in V^\vee$. This mapping from $\overline{V}$ to $V^\vee$ is a linear isomorphism when $V$ is a Hilbert space: $\langle \lambda\bar{v}, - \rangle = \lambda \langle \bar{v}, - \rangle$ (note the non-standard notation!). As such, $\overline{V} \cong V^\vee$; the dual space is isomorphic to the complex conjugate space.
Tensoring operators
We would now like to understand linear operators, which are linear maps $A : V \to V$. Because $V \cong \overline{V}^\vee$, we can see the operator as a linear map $V \to \overline{V}^\vee$, or expanded out, $V \to (\overline{V} \to \mathbb{C})$. Tensoring up, this is equivalently a linear map $\overline{V} \otimes V \to \mathbb{C}$. Of course, this is related to the standard operator notation $\langle\phi|A|\psi\rangle$; we can see the operator as a quadratic form in a bra and a ket.
More explicitly, if $A : V \to V$ is linear, the corresponding tensored map is $A^\otimes(\bar{u} \otimes v) = \langle \bar{u} \otimes Av \rangle$. We would like to understand real structure on linear operators through real structure on tensored maps of this type. If $f : \overline{V} \otimes V \to \mathbb{C}$ is linear, we define the real structure $\sigma(f)(\bar{u} \otimes v) = \overline{f(\sigma(\bar{u} \otimes v))} = \overline{f(\bar{v} \otimes u)}$. As a quick check:
$$\sigma(\lambda f)(\bar{u} \otimes v) = \overline{\lambda f(\bar{v} \otimes u)} = \bar{\lambda}\,\overline{f(\bar{v} \otimes u)} = \bar{\lambda}\,\sigma(f)(\bar{u} \otimes v).$$
We can apply this real structure to $A^\otimes$:
$$\sigma(A^\otimes)(\bar{u} \otimes v) = \overline{A^\otimes(\bar{v} \otimes u)} = \overline{\langle \bar{v}, Au \rangle} = \langle \overline{Au}, v \rangle.$$
By definition, the Hermitian adjoint $A^\dagger : V \to V$ satisfies $\langle \bar{u} \otimes Av \rangle = \langle \overline{A^\dagger u} \otimes v \rangle$; note $(A^\dagger)^\dagger = A$. As such,
$$(A^\dagger)^\otimes(\bar{u} \otimes v) = \langle \bar{u} \otimes A^\dagger v \rangle = \langle \overline{Au}, v \rangle.$$
Therefore, $\sigma(A^\otimes) = (A^\dagger)^\otimes$. This justifies the Hermitian adjoint as the canonical real structure on the linear operator space $V \to V$, as is standard in operator algebra.
Now we can ask: when is $A^\otimes$ $\sigma$-linear?
$$A^\otimes(\sigma(x)) = \overline{A^\otimes(x)} \;\Leftrightarrow\; \overline{A^\otimes(\sigma(x))} = A^\otimes(x) \;\Leftrightarrow\; \sigma(A^\otimes)(x) = A^\otimes(x) \;\Leftrightarrow\; (A^\dagger)^\otimes(x) = A^\otimes(x).$$
Assuming $V$ is a Hilbert space, this holds for all $x \in \overline{V} \otimes V$ iff $A = A^\dagger$, i.e. $A$ is Hermitian. This is significant, because Hermitians are often used to represent observables (such as in POVMs). It turns out that, among linear maps $V \to V$, the Hermitians are exactly those whose corresponding tensored maps $A^\otimes$ are $\sigma$-linear.
Let a member $x \in X$ of a complex vector space with a real structure $\sigma$ be called self-adjoint iff $\sigma(x) = x$. As an important implication of the above, if $A$ is Hermitian, then $A^\otimes$ maps self-adjoint tensors (such as those corresponding with density matrices) to self-adjoint complex numbers (i.e. real numbers). This is, of course, helpful for calculating probabilities, as probabilities are real numbers.
Unitary evolution and time reversal
While Hermitian operators are those for which $A^\dagger = A$, unitary operators are those for which $B^\dagger = B^{-1}$. We will consider time evolution as a family of unitary operators $U(t)$ for real $t$, which is group homomorphic as a family ($U(0) = I$, $U(a + b) = U(a)U(b)$).
A simple, classical example of unitary evolution is that of a phasor representation of a simple harmonic oscillator, $x'(t) + \omega x(t) i = a e^{(\omega t + \theta)i}$ (for real $a$). The unitary evolution is given by $u(t) = e^{\omega t i} \in U(1)$, a multiplicative factor on the phasor to advance it in time. By convention, we have decided that time evolves in the $+i$ direction (multiplicatively), assuming $\omega > 0$. We can find this direction explicitly by differentiating: $u'(0) = \omega i$.
With classical phasors, it is easy to see what physical quantities the representation corresponds to; here, the imaginary part of the phasor represents the position multiplied by the frequency $\omega$. Interpreting quantum phasors is less straightforward. We can still take the derivative $U'(t)$, which approximates $U(t)$ as $U(\epsilon) \approx I + \epsilon U'(0)$ as $\epsilon \to 0$. We recover $U(t)$ through the matrix exponential $U(t) = e^{tU'(0)}$, which generalizes $u(t) = e^{tu'(0)}$ in the single-phasor case.
Because the family $U(t)$ is unitary, we have $U'(0) = -iH$ for Hermitian $H$; note $-H$ is Hermitian iff $H$ is. In the specific case of the Schrödinger equation, $H = \hbar^{-1}\hat{H}$ where $\hat{H}$ is the Hamiltonian, and $\hbar$ is the reduced Planck constant (a positive real number). The direction of the action of $i$ in quantum state space is meaningful through the Schrödinger convention $U'(0) = -i\hbar^{-1}\hat{H}$ (as opposed to $U'(0) = i\hbar^{-1}\hat{H}$).
Complex conjugation therefore relates to time reversal, though it is not identical with it. $U(t) = e^{tU'(0)} = e^{-itH}$, while $U(-t) = e^{-tU'(0)} = e^{itH}$. In the real structure on the linear operator space $V \to V$ given by $(-)^\dagger$, a Hermitian is self-adjoint, like a real number in $\mathbb{C}$, while $U'(0)$ is skew-adjoint ($(U'(0))^\dagger = -U'(0)$), like an imaginary number in $\mathbb{C}$ (i.e. $bi$ for real $b$).
To bridge to standard time reversal in physics, complex conjugates relate to time reversal operators in that a time reversal operator $T : H \to H$ (satisfying $T^{-1}U(t)T = U(-t)$) is anti-linear, due to the relationship between time and phase. In the simpler case, $T^2 = I$, though in systems with half-integer spin, $T^2 = -I$; see Kramers' theorem for details. In the latter case, time reversal yields a quaternionic structure on Hilbert space (rather than a real structure). In relativistic quantum field theory, one may alternatively consider the combined CPT operation, which includes time reversal but typically squares to the identity. Like time reversal, CPT is anti-linear; either C or P on its own would be linear, so the anti-linearity of CPT necessarily comes from time reversal.
Complex conjugation is not itself time reversal, but any symmetry that reverses time must conjugate the complex structure; CPT is the physically meaningful anti-linear involution that accomplishes this. The sign applied to i in the Schrödinger equation is not an additional law of nature, but a choice of orientation. Nature respects the equivalence of both choices, while observables live in the self-adjoint subspace where the distinction disappears.
Conclusion
Groupoid representation theory helps to understand Hilbert spaces and their relation to operator algebras. It raises the simple question: "if action by a unitary complex number is like rotating a circle, what is like mirroring the circle?". This question can be answered precisely with a real structure, and less precisely with a conjugate vector space. The conjugate vector space helps recover a real structure, through the swap on the tensor product $\overline{V} \otimes V$, which relates to the inner product and the Hermitian adjoint $(-)^\dagger$.
While on the one hand, complex conjugation is a simple algebraic isomorphism (if $i$ is a valid imaginary unit, then so is $-i$), on the other hand it has a deep relationship with physics. The Schrödinger equation relates $i$ to a direction of time evolution; complex conjugation goes along with time reversal. The Hermitian adjoint, as a real structure on the linear operator space, generalizes complex conjugation; it keeps Hermitians (such as observables and density matrices) the same, while reversing unitary time evolution.
Much of the apparent mathematical complexity of quantum mechanics clicks when viewed through representation theory. Algebra, not just empirical reality, constrains the theoretical framework. Geometric representations of physical algebras serve both as shared intuition pumps and as connections with the (approximately) classical phenomenal space in which empirical measurements appear. Understanding the complex conjugate through representation theory is not advanced theoretical physics, but it is, I hope, illustrative and educational.