You could also access the machine controls to change sensor sensitivity, number of balls, points per game, etc. (depending on the machine), and then change the settings back afterwards.
Yes, there are a lot of computer-related ones, depending on how fine-grained you get. (There's a similar issue with my "Ordinary Life Improvements": depending on how you do it, you could come up with a bazillion tiny computer-related 'improvements', which sort of just degenerates into 'enumerating every thing ever involving a transistor in any way' and is not enlightening the same way that, say, 'no indoor smoking' or 'fresh mango' is.) So I would just lump that one under 'Machine Configuration/Administration § Software' as one of the too-obvious-to-be-worth-mentioning hacks.
I'll use the linkpost as a jumping-off point for a point about security mindset: namely, that the cost/consequences of a failed attempt to hack into something matter a lot for how much we should expect intelligences to be able to hack a system, even if the system isn't perfectly secure.
To be clear, I don't think the inference in Jeffrey Heninger's post, that it is impossible to get an arbitrarily high score because bouncing balls are chaotic, is correct, even assuming no hacking is allowed, because the difficulty of controlling a system and the difficulty of predicting it are not the same thing.
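As a toy illustration of that control-vs-prediction gap (a sketch of my own, not from the post), consider the logistic map: it's chaotic, so a 10^-9 error in the initial condition ruins any long-horizon prediction, yet a tiny bounded correction applied each step is enough to hold the state on an unstable fixed point indefinitely.

```python
# Toy sketch (my own, purely illustrative): the logistic map with r = 3.9 is
# chaotic, so long-horizon *prediction* fails, yet tiny per-step feedback
# corrections suffice to *control* it onto its unstable fixed point.
r = 3.9
f = lambda x: r * x * (1 - x)

# 1) Prediction: two trajectories starting 1e-9 apart become fully decorrelated.
a, b, gap = 0.4, 0.4 + 1e-9, 0.0
for step in range(60):
    a, b = f(a), f(b)
    gap = max(gap, abs(a - b))
print(f"prediction: max gap over 60 steps = {gap:.3f}")   # order one, from a 1e-9 start

# 2) Control: wait until the orbit wanders near the (unstable) fixed point,
#    then apply a correction bounded by `delta` each step to hold it there.
target = 1 - 1 / r          # unstable fixed point of the map
delta = 0.05                # maximum allowed nudge per step
x, captured_at, drift = 0.4, None, []
for step in range(1000):
    x = f(x)
    if captured_at is not None:
        drift.append(abs(x - target))    # deviation before this step's nudge
    if abs(x - target) < delta:          # a nudge of size < delta is enough
        x = target
        if captured_at is None:
            captured_at = step
print(f"control: captured at step {captured_at}, max drift afterwards = {max(drift):.1e}")
```

The point is just that "you can't predict where the ball will be after ten bounces" doesn't imply "you can't keep the ball where you want it" when you get to intervene frequently.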
My comment on the post below:
But to return to the security mindset point: there are parables/stories by Eliezer that talk about security mindset as a willingness to challenge assumptions, and to build systems that keep working regardless of how much optimization power is thrown at them (which most cryptography doesn't do; even most idealized cryptography schemes can be broken, for any key length, simply by brute force, if you can throw an exponential amount of compute/optimization power at them).
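To make the brute-force caveat concrete, here's a minimal sketch (the 20-bit key and the toy XOR-with-SHA-256-keystream cipher are my own illustrative choices): for any fixed key length k, roughly 2^k trial encryptions always recover the key, so security comes only from making 2^k economically prohibitive, not from the scheme holding up under unlimited optimization power.

```python
# Toy brute force (parameters and "cipher" are mine, purely illustrative):
# XOR the message with a keystream derived from a k-bit key, then recover the
# key by trying all 2**k possibilities against a known plaintext.
import hashlib, secrets

K_BITS = 20                                  # 2**20 ~ 1e6 keys: trivial today

def keystream(key: int, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key.to_bytes(8, "big") + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: int, msg: bytes) -> bytes:
    return bytes(m ^ k for m, k in zip(msg, keystream(key, len(msg))))

secret_key = secrets.randbelow(2 ** K_BITS)
ciphertext = encrypt(secret_key, b"attack at dawn")

# Exhaustive search: cost grows as 2**k, but it *always* succeeds eventually.
for guess in range(2 ** K_BITS):
    if encrypt(guess, b"attack at dawn") == ciphertext:   # known-plaintext check
        print(f"recovered key {guess} after {guess + 1} trial encryptions")
        break
```

With a real 128-bit key the same loop still works in principle; it just costs more compute than exists, which is exactly the kind of background assumption the parables ask you to distrust.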
See here for an example of a parable arguing that you should write code that stays aligned to you no matter how fast the AI self-improves:
It shouldn't be your job to guess how fast the AGI might improve! If you write a system that will hurt you if a certain speed of self-improvement turns out to be possible, then you've written the wrong code. The code should just never hurt you regardless of the true value of that background parameter.
And yet, despite the fact that we don't usually write such code, systems are much more secure than you'd think, and aren't as insecure as Eliezer's parables often portray.
A post below:
The reason why we can get away with imperfectly secure code that relies on more assumptions than Eliezer would like is that, in the real world, the cost of a failed attack is actually pretty high: you go directly to jail, or you lose access to the system after a small number of failed attempts (a threshold usually set to 10).
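A minimal sketch of the kind of mechanism I mean (the class and numbers are illustrative, not any particular product's policy): with a lockout after ~10 failures, an online guessing attacker gets almost no bites at the apple, so weak per-attempt security can still add up to strong overall security.

```python
# Illustrative sketch (my own): an online guessing attacker gets at most
# MAX_FAILURES tries per account before lockout, so a credential that is weak
# against offline brute force can still be strong against online attack.
from dataclasses import dataclass

MAX_FAILURES = 10          # the "usually set to 10" threshold from the text

@dataclass
class Account:
    password: str
    failures: int = 0
    locked: bool = False

    def try_login(self, attempt: str) -> bool:
        if self.locked:
            return False                 # further attempts buy the attacker nothing
        if attempt == self.password:
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= MAX_FAILURES:
            self.locked = True           # and in practice: alert admins, ban the source, etc.
        return False

# An attacker brute-forcing a 4-digit PIN succeeds on ~10/10000 accounts before
# being locked out, versus succeeding with certainty if retries were free.
acct = Account(password="7351")
results = [acct.try_login(f"{guess:04d}") for guess in range(10000)]
print(f"attacker succeeded: {any(results)}, account locked: {acct.locked}")
```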
@maxnadeau has a tweet about this issue, noting that assumptions about how costly failure is for an adversary are a huge determinant of how much we can rely on imperfect/swiss-cheese-style security:
The important thing is that this paper rocks and I'd love to see bigger/better versions of it, but I'll use it as a jumping-off point for a take: The headline result is that their best defenses work 93% of the time. Is that anything?

If you've been reading Simon Willison (whose blog I think is great!), you might say "that’s useless, because in application security 99% is a failing grade" (https://simonwillison.net/2025/Apr/11/camel/#the-best-part-is-it-doesn-t-use-more-ai…). And no wonder, because the techniques in this paper are just variations of "solve AI security problems with more AI", which Simon has emphasized are a dead end (https://simonwillison.net/2022/Sep/17/prompt-injection-more-ai/…).

So why are the Redwood folks rosier about 93%? The crux here is whether your threat model assumes attackers can try over and over again. Simon is assuming they can, in which case 93% would be useless, but Redwood folks are assuming that there are major costs to the attacker trying and failing (https://lesswrong.com/posts/i2nmBfCXnadeGmhzW/catching-ais-red-handed…)

This assumption is not at all unique to misalignment threats. It's the same assumption that Debenedetti, Carlini, and Tramer make here (https://arxiv.org/abs/2306.02895), which they call evasion "without breaking eggs". I think the right way to model a variety of security problems, e.g. insider threats from employees.

One's assumptions about the costs of attacker failure have huge implications for which security measures look helpful, and I think this is an important factor to mentally track in conversations about these topics.
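The arithmetic behind that crux is worth spelling out (the 93% figure is from the tweet; the attempt counts and per-failure catch probability below are made-up illustrative parameters): with free retries, a 93%-effective defense collapses after a few dozen attempts, while with a meaningful chance of being caught on each blocked attempt, the attacker's overall odds stay bounded well below 1.

```python
# Back-of-the-envelope for the 93% number in the quoted tweet (attempt counts
# and the per-failure catch probability are made-up illustrative parameters).
defense_rate = 0.93                      # chance a single attack attempt is blocked

# Free retries (Simon's threat model): attacker wins if *any* attempt lands.
for n in (1, 10, 100):
    p_win = 1 - defense_rate ** n
    print(f"free retries, n={n:3d}: attacker succeeds with p = {p_win:.2f}")

# Costly failure (the control / catching-AIs model): suppose each *blocked*
# attempt is also caught with probability p_catch, ending the game.
p_catch = 0.5
p_success = 1 - defense_rate                      # 0.07 per attempt
p_caught = defense_rate * p_catch                 # blocked and caught this attempt
# Geometric series: probability the attacker ever succeeds before being caught.
p_eventual = p_success / (p_success + p_caught)
print(f"costly failure, p_catch={p_catch}: eventual success p = {p_eventual:.2f}")
```

Under these made-up numbers, free retries push the attacker to near-certain success by 100 attempts, while a 50% chance of getting caught per blocked attempt keeps eventual success around 13%, which is why the same 93% looks useless in one threat model and genuinely useful in the other.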
In general, I'd argue that one of the biggest cruxes for how far we can get with security/control of misaligned AIs to automate alignment is how costly a failed attempt to attack, or to launch a rogue deployment, is; and I suspect a lot of the people who are attracted to security mindset tend to think that the cost of a failed attack is essentially 0 (or believe AIs can 0-shot very, very complicated hacks).
This also means that in domains where "hacking reality" by doing experiments carries a high cost of failure, we should expect AI capabilities to improve much less rapidly than in domains where failure costs nothing (unless it turns out that empirical tests aren't necessary, and you can 0-shot very complicated things from theory alone).