Counterfactuals relevant to decision-making in this context are not other MWI branches, but other multiverse descriptions (partial models), with different amplitudes inside them. You are not affecting other branches from your branch; you are determining which multiverse takes place, depending on what your abstract decision-making computation does.
This computation (you) is not itself located in a particular multiverse or on a particular branch; it's a kind of mathematical gadget. That gadget can be considered (reasoned about, simulated) from many places, and is thereby embedded in them, including counterfactual places where, for purposes of decision-making, it can't rule out the possibility of being embedded.
With acausal trade, the trading partners (agents) are these mathematical gadgets, not their instances in their respective worlds. Say there are agents A1 and A2, which have instances I1 and I2 in worlds W1 and W2. Then I1 is not an agent in this sense, and not a party to an acausal trade between them; it's merely a way for A1 to control W1 (in the usual causal sense). To facilitate acausal trade, A1 and A2 need to reason about each other, but at the end of the day the deal gets executed by I1 and I2 on behalf of their respective abstract masters.
This setup becomes more practical if we start with I1 and I2 (instead of A1 and A2) and formulate the common knowledge they have about each other as the abstract gadget A that facilitates coordination between them, with the part of its verdict that I1 would (commit in advance to) follow being A1 (by construction), and the part that I2 would follow being A2. This shared gadget A is an adjudicator between I1 and I2, and it doesn't need to be anywhere near as complicated as they are; it only needs to hold whatever common knowledge they happen to have about each other, even if that's very little. It's a shared idea they both follow (and know each other to be following, etc.) and thus coordinate on.
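To make the construction concrete, here is a minimal Python sketch of the adjudicator pattern, in the spirit of program-equilibrium toy models. Everything in it (the `adjudicator` function, the `common_knowledge` dictionary, the verdict labels) is a hypothetical illustration of the shape of the idea, not an implementation of acausal trade.

```python
# Toy sketch: a shared gadget A that I1 and I2 both consult. The gadget
# sees only the common knowledge the instances have about each other,
# never the instances themselves. All names are illustrative assumptions.

def adjudicator(common_knowledge):
    """The shared gadget A. Returns a joint verdict (A1's part, A2's part)."""
    # If it is common knowledge that both instances committed in advance
    # to follow this adjudicator's verdict, it can prescribe the
    # cooperative joint action directly.
    if common_knowledge.get("both_committed_to_follow_A"):
        return ("cooperate", "cooperate")
    # Otherwise the gadget has nothing to coordinate on, and each
    # instance falls back to its ordinary causal reasoning.
    return ("act unilaterally", "act unilaterally")

# I1 and I2 each run (or reason about) the same abstract A in their own
# worlds; A1 and A2 are just the two components of its verdict.
a1_verdict, a2_verdict = adjudicator({"both_committed_to_follow_A": True})
print(a1_verdict, a2_verdict)  # cooperate cooperate
```

Note that A can stay much simpler than I1 and I2: it encodes only the (possibly tiny) overlap of what they know about each other, which is exactly why it can serve as a coordination point.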
Yes, if you insist on causality, acausal trade does not make sense (it is in the name).
It might just add confusion, but think of the two-boxing situation with the added stipulation that a new party, "friend Freddy", gets $1000 if you one-box and $0 if you two-box.
It happens that Freddy's planet has already passed beyond the cosmological horizon and will never be in causal contact with you again. Omega delivered the reward before the divergence, while there was still contact (just as they set up the boxes with the bills you can eventually fiddle with).
Why would you be disallowed from caring about Freddy's welfare?
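To make the payoffs concrete, here is a small sketch. Only Freddy's $1000/$0 rule comes from the setup above; the standard Newcomb amounts ($1,000,000 in the opaque box, $1,000 in the transparent one) and the perfect predictor are my assumptions.

```python
# Payoff sketch for the Freddy variant. Only Freddy's $1000/$0 rule is
# from the setup above; the Newcomb amounts and the perfect predictor
# are assumptions for illustration.

OPAQUE = 1_000_000   # assumed opaque-box contents when predicted to one-box
TRANSPARENT = 1_000  # assumed transparent-box contents

def payoffs(choice):
    """Return (your payoff, Freddy's payoff) for a given choice."""
    if choice == "one-box":
        return OPAQUE, 1_000   # predictor filled the opaque box; Freddy rewarded
    return TRANSPARENT, 0      # opaque box is empty; Freddy gets nothing

for choice in ("one-box", "two-box"):
    you, freddy = payoffs(choice)
    print(f"{choice}: you ${you:,}, Freddy ${freddy:,}")
```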
To the extent the objection is merely about decisions making possibilities real, it attacks a far more general phenomenon than acausal trade. On that view there is no point in picking up a cup, because the branches of picking-up and not-picking-up are both going to exist anyway. That is, ordinary causation is undermined in the same stroke: the ink is already dry, so no course of action can be motivated.
An additional complication, to get explicit acausal trade going:
Freddy also faces a two-boxing problem. Assume that one-boxing is the "smart move" in the single-player game. Omega adds a rule: if you both two-box instead, there is a moderate amount in the boxes (less than the one-boxing payoff, but more than the ordinary two-boxing payoff). Even if you only care about your own money, you will care whether Freddy cooperates or defects. (I am sorry, it is additionally a prisoner's dilemma.) If you know about Freddy, Freddy knows about you, and you both know about the multiplayer rule, then acausal trading logic exposes you to less risk than any boxing strategy chosen in isolation.
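A toy payoff table may help pin down the ordering this argument relies on. Only the two matched-choice cells and their ordering (one-boxing payoff > the "moderate" mutual-two-box amount > the ordinary two-boxing payoff) come from the comment; the dollar figures are illustrative, and the mixed-choice payoffs are left unspecified because the comment doesn't give them.

```python
# Toy payoff table for the multiplayer variant. Only the matched-choice
# cells and the ordering ONE_BOX > MODERATE > TWO_BOX come from the
# comment above; the dollar amounts are illustrative, and the
# mixed-choice payoffs are left as None because they aren't specified.

ONE_BOX = 1_000_000  # assumed payoff when both players one-box
MODERATE = 100_000   # assumed "moderate amount" when both players two-box
TWO_BOX = 1_000      # assumed ordinary two-boxing payoff, for the ordering

PAYOFFS = {
    ("one-box", "one-box"): ONE_BOX,
    ("two-box", "two-box"): MODERATE,
    ("one-box", "two-box"): None,  # unspecified in the comment
    ("two-box", "one-box"): None,  # unspecified in the comment
}

assert ONE_BOX > MODERATE > TWO_BOX  # the ordering the argument relies on

for (me, freddy), amount in PAYOFFS.items():
    label = f"${amount:,}" if amount is not None else "unspecified"
    print(f"me={me}, Freddy={freddy}: {label}")
```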
I'm pretty skeptical of acausal trade, so I don't think I'm the best one to answer this. But my understanding is that decision theories which engage in it do so because they want to be in the universe/multiverse that contains this increased utility.