There's a counterargument template which roughly says "Suppose the ground-truth source of morality is X. If X says that it's good to torture babies (not in exchange for something else valuable, just good in its own right), would you then accept that truth and spend your resources to torture babies? Does X saying it's good actually make it good?"
Applied to the most strawmannish version of moral realism, this might say something like "Suppose the ground-truth source of morality is a set of stone tablets inscribed with rules. If one day someone finds the tablets, examines them, and notices some previously-overlooked text at the bottom saying that it's good to torture babies, would you then accept this truth and spend your resources to torture babies? Does the tablets saying it's good actually make it good?"
Applied to a stronger version of moral realism, it might say something like "Suppose the ground-truth source of morality is game-theoretic cooperation. If it turns out that, in our universe, we can best cooperate with most other beings by torturing babies (perhaps as a signal that we are willing to set aside our own preferences in order to cooperate), would you then accept this truth and spend your resources to torture babies? Does the math saying it's good actually make it good?"
The point of these templated examples is not that the answer is obviously "no". (Though "no" is definitely my answer.) A true moral realist will likely respond by saying "yes, but I do not believe that X would actually say that". That brings us to the real argument: why does the moral realist believe this? "What do I think I know, and how do I think I know it?" What causal, physical process resulted in that belief?
(Often, the reasoning goes something like "I'm fairly confident that torturing babies is bad, therefore I'm fairly confident that the ground-truth source of morality will say it's bad". But then we have to ask: why are my beliefs about morality evidence about what the ground-truth source says? What physical process entangled these two? If the ground-truth source had given the opposite answer, would I currently believe the opposite thing?)
In the strawmannish case of the stone tablets, there is pretty obviously no causal link. Humans' care for babies' happiness seems to have arisen for evolutionary fitness reasons; it would likely be exactly the same if the stone tablets said something different.
In the case of game-theoretic cooperation, one could argue that evolution itself is selecting according to the game-theoretic laws in question. On the other hand, thou art godshatter, and also evolution is entirely happy to select for eating other people's babies in certain circumstances. The causal link between game-theoretic cooperation and our particular evolved preferences is unreliable at best.
At this point, one could still self-consistently declare that the ground-truth source is correct, even if one's own intuitions are an unreliable proxy. But I think most moral realists would update away from the position if they understood, on a gut level, just how often their preferred ground-truth source diverges from their moral intuitions. Most just haven't really attacked the weak points of that belief. (And in fact, if they would update away upon learning that the two diverge, then they are not really moral realists, regardless of whether the two do diverge much.)
Side-note: Three Worlds Collide is a fun read, and is not-so-secretly a great thinkpiece on moral realism.