A researcher in CS theory, AI safety and other stuff.
This brings up the concept of theory of mind for me, especially when thinking about how this applies differently to individual people, to positions/roles in society, and e.g. to corporations. In particular, I would need to have a theory of mind of an entity to ascribe "honor" to it and expect it to uphold it.
A person can convince me that their mind is built around values or principles, and I can then reasonably trust that they will uphold them in the future, more likely than not. I believe that for humans, pretending is usually difficult, or at least costly.
What a corporation evokes, on the other hand, is not very mind-like. For example: I expect it to uphold bargains when it would be costly not to (in lawsuits, future sales, brand value, etc.), I expect leadership changes, and I expect it to be hard to understand its reasoning (office politics and other coordination problems). (The corporation is also aware of this and so may not even try to appear mind-like.)
An interesting exception may be companies directed by a well-known principled figure, where having a theory of their mind again makes some sense. (I suspect that is why many brands make an effort to create a fictitious "principled leader figure".)
"PR", as Anna describes it, seems to be a more direct (if hard) optimization process that is in principle available to all.
I want to note that I am still mildly confused about how the reputation of a company based solely on its track record fits into this - I would still expect the company to (likely) continue the trend even with no model of its inner workings or motivations.
Update: I believe the solution by @UnexpectedValues can be adapted to work for all natural numbers.
Proof: By Dirichlet's prime number theorem, for every , there is a prime of the form as long as N-1 and N are co-prime, which is true whenever . Then and the solution by UnexpectedValues can be used with an appropriate . Such p exists whenever , which is always the case for . For , one can take e.g. (since ). And is trivial.
A beautiful solution! Some remarks and pointers
Note that your approach only works for target (that is ), as does not have a real solution. Open question: how about a 1-coin solution emulating a 3-sided die?
See my solution (one long comment for all 3 puzzles) for some thoughts on rational coins - there are some constructions, but more importantly I suspect that no finite set of rational coins without the 1/2-coin can generate any fair die. (My lemma does not cover all rational coins yet, though; see the end of the comment. I would be curious to see it turn out either way!)
So, I got nerd-sniped by this (together with my flatmate on P3) and we spent some delicious hours and days in various rabbit-holes. Posting some thoughts here.
First of all, a meta-comment (minor spoiler on the nature of best-known solutions):
The first puzzle got me into an optimization mindset -- and it seems to be the right mindset for that problem -- but the optimal solutions to P2 and P3 (which I missed before reading spoilers) are "crisp" or "a neat trick" rather than the result of a (however clever) "optimization process", to me. (If this was intended, I really appreciate the subtlety of it, Scott!)
For P1, I went a bit overboard and found optimal* solutions for numbers up to 10000 (and some more, with some caveats -- see below). The spoiler-tags just below do not contain the solutions, only spoilery notes on my approach; the actual solutions are further down.
By "optimal*" I mean definitely cheapest formulas among all formulas using only natural numbers smaller than as intermediate values (and modulo any bugs in my code). You can find the code here and the solutions for all numbers 1..10000 here (obvious spoilers).
The largest intermediate value needed for all of those is below . Note the code also finds optimal* formulas for many more values (~100 million on 50GB RAM); it just does not report them by default.
I have also run the code considering intermediate values up to and even , and did not find any cheaper solution for numbers up to 10000. (In those runs my code reached the optimal solutions for only 98% and 88% (respectively) of the numbers up to 10000, without finding any better solutions, then ran out of memory. I still consider this a good indication that much larger values are likely not useful for numbers up to 10000.) I would be really interested in constructions or intuitions for why this may or may not be the case!
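For context, a search of this kind can be sketched as a cost-leveled exhaustive enumeration. This is my reconstruction for illustration, not the actual linked code: the cost model (each literal and each operation costs 1), the starting literals {2, 3, 5}, and the bounds are all assumptions.

```python
from math import isqrt, factorial

BOUND = 10**5     # cap on intermediate values (assumption for this sketch)
MAX_COST = 8      # explore formulas up to this assumed total cost

def cheapest_formulas():
    # best[v] = cheapest known cost of a formula evaluating to v
    # (cost model assumed: each literal and each operation costs 1)
    best = {2: 1, 3: 1, 5: 1}          # assumed starting literals
    by_cost = {1: set(best)}           # by_cost[c] = values first reached at cost c
    for c in range(2, MAX_COST + 1):
        level = set()

        def add(v):
            if 0 < v <= BOUND and best.get(v, c + 1) > c:
                best[v] = c
                level.add(v)

        # unary ops applied to formulas of cost c-1
        for v in by_cost.get(c - 1, ()):
            add(isqrt(v))
            if v <= 8:                 # factorial blows past the bound quickly
                add(factorial(v))
        # binary ops: combine a cost-ca formula with a cost-cb formula
        for ca in range(1, c - 1):
            cb = c - 1 - ca
            for a in by_cost.get(ca, ()):
                for b in by_cost.get(cb, ()):
                    add(a + b)
                    add(a * b)
                    if a >= b:
                        add(a - b)
                    if b:
                        add(a // b)
                    if b * a.bit_length() <= 40:   # keep powers bounded
                        add(a ** b)
        by_cost[c] = level
    return best
```

With this toy cost model, e.g. `fact(3)` reaches 6 at cost 2 and `fact(fact(3))` reaches 720 at cost 3; the real search additionally has to remember the formulas themselves and push the intermediate bound much higher.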
I know about at least one case of a formula using real numbers as intermediates that is (very likely) strictly better than any formula with only natural intermediate values. That example is due to Scott -- leaving it up to him to disclose it or not.
My solution for 2021 and some notes on optimal* solutions for other numbers
2021 = (isqrt(isqrt((isqrt(isqrt(isqrt(isqrt((2 ** fact(fact(3))))))) // 2))) - (3 ** 3))
-- a cost-108 variation on that uses a clever cost-12 formula for . This uses as an intermediate value -- the highest (up to small variations) encountered in formulas for values under 10000. This sub-formula is used in 149 of the 10000 solutions. My code prefers solutions with the smallest intermediate values used as a secondary criterion, so these large intermediate values are necessary for optimal* solutions.
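Unpacking the quoted formula step by step shows why it lands exactly on 2021:

```python
from math import isqrt, factorial as fact

x = 2 ** fact(fact(3))        # fact(fact(3)) = fact(6) = 720, so x = 2**720
for _ in range(4):
    x = isqrt(x)              # four nested isqrts: 2**720 -> 2**45
x //= 2                       # 2**45 // 2 = 2**44
for _ in range(2):
    x = isqrt(x)              # two more isqrts: 2**44 -> 2**11 = 2048
print(x - 3 ** 3)             # 2048 - 27 = 2021
```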
Other encountered sub-formulas using large intermediate values: 7381 = (fact((2 + fact(5))) // (2 * fact(fact(5))))
(needs , used 10 times in 1..10000), 3782 = (fact(((2 ** fact(3)) - 2)) // fact((fact(5) // 2)))
(needs 3e85, used once), 1189 = isqrt(((2 // 2) + (fact((fact(3) ** 2)) // fact((2 ** 5)))))
(needs 4e41, used once), and 32768 = isqrt(isqrt(isqrt((2 ** fact(5)))))
(needs 1e36, used 56 times). All other optimal* solutions for values up to 10000 only use intermediate values smaller than .
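These identities check out mechanically; the binomial/falling-factorial structure is what forces the huge intermediates:

```python
from math import isqrt, factorial as fact

# 7381 = C(122, 2): fact(122) // (2 * fact(120)) = 122*121/2
assert fact(2 + fact(5)) // (2 * fact(fact(5))) == 7381 == 122 * 121 // 2

# 3782 = 62 * 61: fact(62) // fact(60)
assert fact((2 ** fact(3)) - 2) // fact(fact(5) // 2) == 3782 == 62 * 61

# 1189**2 = 1 + 36*35*34*33: fact(36) // fact(32) is a falling factorial
assert isqrt((2 // 2) + fact(fact(3) ** 2) // fact(2 ** 5)) == 1189

# 2**120 under three nested isqrts is 2**15
assert isqrt(isqrt(isqrt(2 ** fact(5)))) == 32768 == 2 ** 15
```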
The distribution of used intermediate values -- only 3 between and , 2 more under , and no more under -- is my main argument that the found solutions for 1..10000 are likely optimal. Anything to the contrary would be intriguing indeed!
And a universal cost-8 solution if was allowed:
using nested square-roots.
My (non-optimal) solution, more for reference and the algorithm than anything else. ("dN" denotes an N-sided fair die.)
After 2 dice rolls of , , partition the outcomes into equally-sized sets and remaining outcomes. If you hit one of the sets, you are done. If not, you can use the remaining information as if you had already thrown a and only throw one additional die (if C>2) before you do another partition of the outcomes into sets and a remainder (rather than doing a full restart). (Note that it is generally better to be left with a larger when the probability of spillover is comparable.)
This can be represented by a recursive formula for : "the expected number of additional rolls if you have already thrown a dA"; the puzzle result is then . Rather than the full formula, here is a program that iteratively computes . My result is , matching other similar solutions here. (See Vaniver's comment for the optimal one.)
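A minimal sketch of this "keep the remainder" recursion, as I read it (my reconstruction with generic parameters N and D, not the exact program referenced above; the self-referential recursion is resolved by fixed-point iteration):

```python
def expected_rolls(N, D, iters=500):
    """Expected rolls of a fair D-sided die to emulate a dN, keeping the
    uniform remainder after each partial acceptance instead of restarting.

    E[a] = expected additional rolls, given the information gathered so far
    is uniform over a outcomes (i.e. "having already thrown a dA")."""
    E = {a: 0.0 for a in range(1, N)}
    for _ in range(iters):
        newE = {}
        for a in range(1, N):
            m, k = a, 0
            while m < N:              # roll until there are at least N outcomes
                m *= D
                k += 1
            r = m % N                 # spillover outcomes, kept as a "dr"
            newE[a] = k + (r / m) * E.get(r, 0.0)
        E = newE
    return E[1]                       # start with no information (a "d1")
```

Sanity checks: emulating a d4 with a coin takes exactly 2 flips, and emulating a d3 with a coin gives the classic 8/3 expected flips (the r=1 leftover carries no information, so the recursion loops back on itself there).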
While Adam and I did not find the best-known solution for P3, we had a lot of fun trying to figure out various approaches to and properties of the problem. Here is a short summary; happy to share more details. The framing here is "emulating dN" (an N-sided fair die).
There are some reductions between coins and emulated dice. (Note is trivially available.)
This then gives a few solutions to 2021 using only rational coins and a range of general solution schemas to explore:
I don't know of a fully general (and guaranteed-to-halt) algorithm using just R1-R3, since it is not clear how high you need to go in the multiples to split via R3. There may also be other reduction rules, even for just rational coins. I would be curious about any thoughts in that direction!
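As one concrete illustration of a coin-to-die reduction (a classic one, not necessarily among the R1-R3 rules above): von Neumann's trick turns any p-coin with 0 < p < 1 into a fair d2 by flipping twice and retrying on doubles.

```python
import random

def fair_bit(flip):
    """flip() returns True with some fixed unknown p in (0, 1).
    Flip twice: HT and TH are equally likely, so map them to 1 and 0;
    retry on HH/TT. Halts with probability 1 for any p in (0, 1)."""
    while True:
        a, b = flip(), flip()
        if a != b:
            return int(a)

random.seed(0)
biased = lambda: random.random() < 0.3       # a 0.3-coin, for demonstration
sample = [fair_bit(biased) for _ in range(10000)]
```

The expected number of flips is 1/(p(1-p)) per fair bit, which is one reason "which coins generate which dice, and at what cost" is a genuinely rich question.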
And a few more theoretical observations.
Complexity indeed matters: the universe seems to be bounded in both time and space, so running anything like the Solomonoff prior algorithm (in one of its variants) or AIXI may be outright impossible for any non-trivial model. For me, this significantly weakens or changes some of the implications.
A Fermi upper bound for the direct Solomonoff/AIXI algorithm trying TMs in order of increasing complexity: even if checking one TM took one Planck time on one atom, you could check only cca 10^241 ≈ 2^800 machines within the lifetime of the universe (~10^110 years until heat death), so the machines you could even look at have a description complexity of a meager 800 bits.
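Redoing that Fermi arithmetic explicitly (all inputs are rough order-of-magnitude guesses):

```python
from math import log10, log2

SECONDS_PER_YEAR = 3.15e7
PLANCK_TIME = 5.4e-44            # seconds
YEARS_TO_HEAT_DEATH = 1e110      # rough
ATOMS = 1e80                     # rough atom count of the observable universe

# one TM check per Planck time per atom, over the whole lifetime
checks = ATOMS * YEARS_TO_HEAT_DEATH * SECONDS_PER_YEAR / PLANCK_TIME
print(f"~10^{log10(checks):.0f} TM checks, i.e. ~2^{log2(checks):.0f}")
# prints: ~10^241 TM checks, i.e. ~2^800
```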
I think that sufficiently universally trusted arbiters may be very hard to find, but Alice can also refrain from that option to prevent the issue from gaining more public attention, believing more attention, or the attention of various groups, to be harmful. I can imagine cases where more credible people (Carols) saying they are convinced that e.g. "it is really easily doable" would disproportionately incentivize misuse over defense (depending on the groups the information reaches, the reliability signals those groups accept, etc.).
Side-remark: individual positions and roles in society seem to hold a middle ground here. When dealing with a concrete person who holds some authority (imagine a grant maker, a clerk, a supervisor, ...), modelling them internally as a person or as an institution brings up different expectations of motivations and values - the person may have virtues and honor where I would expect the institution to have rules and possibly culture. (Principles may be a solid part of the rules or culture, but that feels somewhat less common, weaker, or more prone to Goodharting, as with PR; I may be confused here, though.)