Think about this from a far-future perspective. For a few short generations of human history, photography and then film and video were evidence. Before that time period, they weren't because they didn't exist: the only way to get a picture of something was to make it by hand, drawing or painting. And after that time period, they weren't because they were too easily faked.
How it started: pics or it didn't happen.
How it's going: IRL or it didn't happen.
I think there is a window of opportunity for humans to create a reputation for legitimacy and a venue for official information. Consider Neil deGrasse Tyson and the recent flat-earth fake. People know where to check to see whether he really changed his mind. He has a valid place for things to appear, and a reputation.
Then consider a purported leaked recording of a politician. There's no way to validate or invalidate it. It is a leak, so you expect the politician to deny it whether or not it is true. The tendency is to just reinforce one's pre-existing priors rather than re-evaluate. Some people will be able to hold two possible worlds in mind, one where the recording is real and one where it isn't; some won't. The next time there are four possibilities, and after that it gets out of hand. No one can keep it straight.
Whatever else happens, there will be a rebirth in the value of in-person contact to validate information. Until holograms, avatars, and perfect disguises arrive, at least; I don't see any of them coming soon.
So I see three things at once. Some people are fortunate enough to establish trust before the fakes dominate, and can maintain it; they can anchor other people the way Google's old search algorithm used to. Some people are screwed: there will always be doubt, and trust will be impossible for them. And flesh-and-blood connections will make a comeback for purposes of trust. We will muddle through as societies.
epistemic status: my thoughts, backed by some arguments
With the advent of deepfakes, it has become very hard to know whether an image, sound clip, or video is authentic or generated by an AI. In this context, people have proposed using software to detect generated content, usually aided by some type of watermarking. I don't think this type of solution would work.
One idea is to add a watermark to all content produced by a generative model. The exact technique would depend on the type of media - e.g. image, sound, text.
We could discuss various techniques with their advantages and shortcomings, but I think this is beside the point. The fact is that this is an adversarial setting - one side is trying to design reliable, robust watermarks and the other side is trying to break them. Relying on watermarks could start a watermarking arms race. There are strong incentives for creating fakes, so hoping that those efforts would fail seems like wishful thinking.
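To make the fragility concrete, here is a minimal sketch (not tied to any real product's scheme) of a naive least-significant-bit watermark in Python; a single lossy re-encoding step is enough to erase it, which is the kind of move an adversary gets for free.

```python
import numpy as np

def embed_lsb_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide `bits` (0/1 values) in the least significant bit of the first pixels."""
    flat = image.copy().ravel()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out of the least significant bits."""
    return image.ravel()[:n_bits] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)      # toy grayscale image
watermark = rng.integers(0, 2, size=128, dtype=np.uint8)          # toy watermark bits

marked = embed_lsb_watermark(image, watermark)
assert np.array_equal(extract_lsb_watermark(marked, watermark.size), watermark)

# A re-encode that quantizes pixel values (as lossy compression tends to do)
# throws the least significant bits away, and the mark with them.
requantized = (marked // 4) * 4
print(np.array_equal(extract_lsb_watermark(requantized, watermark.size), watermark))  # False
```

Real watermarking schemes are far more robust than this toy, but the structure of the game is the same: every scheme invites a corresponding removal attack.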
Then there is the issue of non-complying actors. A single company could still decide not to add watermarks, or could release the weights of its model so that anyone can run it without them. This is next to impossible to prevent on a worldwide scale. Whoever wants to create fakes can simply use any generative model that doesn't add watermarks.
I don't think watermarking AI-generated content is a reasonable strategy.
Another idea is to make digital cameras add a watermark (or a digital signature) to pictures and videos. Maybe digital microphones could do something similar for sound, although this would likely significantly increase the price of the cheapest ones. We should note that this technique cannot be applied to text.
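As a rough illustration of the camera idea, here is a sketch assuming each camera ships with its own Ed25519 key pair, using the Python `cryptography` library; the key handling and flow are my invention for the example, not an existing standard.

```python
# Sketch: a camera signs the captured bytes with a device key; a viewer checks
# the signature against the manufacturer-published public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In a real device this key would live in tamper-resistant hardware.
camera_key = Ed25519PrivateKey.generate()
camera_pubkey = camera_key.public_key()

photo_bytes = b"...raw sensor data or encoded image file..."
signature = camera_key.sign(photo_bytes)   # would be attached to the file, e.g. in metadata

try:
    camera_pubkey.verify(signature, photo_bytes)
    print("signature valid: bytes unchanged since capture")
except InvalidSignature:
    print("signature invalid: file modified or not from this camera")
```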
I see several objections to this proposal:
I think we may need to accept that indistinguishable fakes are part of what's technologically possible now.
In that case, the best we can do is track the origin of content and let each person decide which origins to trust. I am thinking of some decentralized, append-only system where people can publish digital signatures of content they have generated.
If you trust your journalist friend Ron Burgundy, you could verify the digital signature of the photo in his news article against his public key. You could also assign some level of trust to the people Ron trusts. This creates a distributed network of trust.
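As a toy illustration of how that trust could propagate, here is a sketch where directly trusted keys count fully and people they vouch for are discounted; the names, weights, and discount factor are arbitrary choices for the example.

```python
# Toy web-of-trust score: direct trust at face value, one-hop indirect trust discounted.
TRUSTED = {"ron_burgundy": 1.0}                 # keys I trust directly
ENDORSEMENTS = {"ron_burgundy": ["veronica"]}   # who each key vouches for
INDIRECT_DISCOUNT = 0.65

def trust_score(signer: str) -> float:
    if signer in TRUSTED:
        return TRUSTED[signer]
    # one hop: best discounted trust through someone I trust directly
    indirect = [
        TRUSTED[friend] * INDIRECT_DISCOUNT
        for friend, vouched in ENDORSEMENTS.items()
        if friend in TRUSTED and signer in vouched
    ]
    return max(indirect, default=0.0)

print(trust_score("ron_burgundy"))  # 1.0
print(trust_score("veronica"))      # 0.65
print(trust_score("unknown"))       # 0.0
```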
With the right software, I can imagine this whole process being automated: I click on an article from a site and one of my browser plugins shows the content is 65% trustworthy (according to my trusted list). When I publish something, a hash of it signed with my private key is automatically appended to a distributed repository of signatures. Anybody can choose to run nodes of the repository software, similar to how people run blockchain or Tor nodes. Platforms with user-generated content could choose to only allow signed content, and the signer could potentially be held responsible. It's not a perfect idea, but it's the best I have been able to come up with.
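Here is a sketch of how the publish-and-verify step could look under these assumptions, with a plain Python list standing in for the distributed append-only repository and Ed25519 keys for signing; none of this reflects an existing system.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signature_log = []   # stand-in for the distributed append-only repository

def publish(content: bytes, author_key: Ed25519PrivateKey) -> None:
    """Sign a SHA-256 hash of the content and append it to the log."""
    digest = hashlib.sha256(content).digest()
    signature_log.append({
        "hash": digest,
        "signature": author_key.sign(digest),
        "public_key": author_key.public_key(),
    })

def verify(content: bytes) -> bool:
    """True if some log entry validly signs this exact content."""
    digest = hashlib.sha256(content).digest()
    for entry in signature_log:
        if entry["hash"] == digest:
            try:
                entry["public_key"].verify(entry["signature"], digest)
                return True
            except InvalidSignature:
                pass
    return False

author = Ed25519PrivateKey.generate()
article = b"the original photo or article bytes"
publish(article, author)
print(verify(article))                 # True
print(verify(b"a doctored version"))   # False
```

In the automated version, the browser plugin would combine this check with a trust score for the signing key, like the toy web-of-trust sketch above.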
I have seen attempts at something similar, but they are usually controlled by a single company and require a paid subscription (both of which defeat the whole purpose of wide adoption).