I think there are two ways that a reward function can be applicable:
1) For making moral judgements about how you should treat your agent. Probably irrelevant for your button presser unless you're a panpsychist.
2) If the way your agent works is by predicting the consequences of its actions and attempting to pick an action that maximises some reward (e.g. a chess computer trying to maximise its board valuation function). Your agent H as described doesn't work this way, although as you note there are agents which do act this way and produce the same behaviour ...
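To make point 2 concrete, here is a minimal sketch of that kind of agent — one that predicts the outcome of each action and picks whichever maximises a reward function. All names (`choose_action`, `predict`, `reward`) are illustrative, not from any particular system:

```python
# Hypothetical sketch of a reward-maximising agent (all names illustrative).
def choose_action(state, actions, predict, reward):
    """Pick the action whose predicted outcome maximises the reward function."""
    return max(actions, key=lambda a: reward(predict(state, a)))

# Toy example: state is a number, actions add to it,
# and the reward prefers outcomes near 10.
predict = lambda state, a: state + a
reward = lambda s: -abs(s - 10)
print(choose_action(7, [1, 2, 3, 4], predict, reward))  # picks 3, since 7 + 3 == 10
```

The point is that the reward function is load-bearing here: it is consulted at decision time, unlike in an agent like H whose behaviour is produced some other way.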