TLDR: you probably already know that $2^{10} = 1024$; use this to derive the R10 preferred numbers (fractional powers of 10) instead of memorizing them!
Renard numbers (https://en.wikipedia.org/wiki/Renard_series), which were designed in the 1870s to be convenient for the officers and engineers of the French Army without a slide rule or a log table, are based on 5th and 10th roots of 10. 1024 being quite close to 1000 means that $2$ is very close to $10^{3/10}$, and this allows you to quickly derive R10 numbers without pen and paper.
I have used the algorithm for so long that it has become almost unconscious, so I had Gemini write it out:
Mental Algorithm for R10 Numbers (sorry for the poor formatting; it doesn't copy-paste neatly, and I only fixed it manually where it didn't read well)
In the R10 series, every step increases the value by a factor of $\approx 1.26$.
However, $1.26^3 \approx 2$, since $2^{10} = 1024 \approx 1000 = 10^3$.
This gives you the Golden Rule of R10:
* Add 3 to the Index → Multiply the Value by 2
* Subtract 3 from the Index → Divide the Value by 2
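A quick numerical check of this rule, as a minimal Python sketch (the exact R10 step ratio is $10^{1/10}$):

```python
# Three R10 steps multiply the value by 10**(3/10), which is almost exactly 2,
# because 2**10 = 1024 is so close to 10**3 = 1000.
step = 10 ** (1 / 10)      # one R10 step, ~1.2589
print(step ** 3)           # ~1.9953 -> "add 3 to the index" ~ "double the value"
print(2 ** 10 / 10 ** 3)   # 1.024   -> the underlying coincidence, 1024 vs 1000
```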
### The Algorithm: The Three Strands
To find any R10 number mentally, you don't calculate them sequentially. Instead, you split the numbers 0–10 into three "strands" based on the anchors you already know: **1**, **8**, and **10**.
#### Strand A: The Powers of 2 (Indices 0, 3, 6, 9)
Start at **1** and double it every 3 steps.
* R10(**0**) = **1.0**
* R10(**3**) = **2.0**
* R10(**6**) = **4.0**
* R10(**9**) = **8.0**
#### Strand B: The Halving from 10 (Indices 10, 7, 4, 1)
Start at **10** and halve it every 3 steps (going backwards).
* R10(**10**) = **10.0**
* R10(**7**) = **5.0**
* R10(**4**) = **2.5**
* R10(**1**) = **1.25**
#### Strand C: The "80% Rule" (Indices 8, 5, 2)
This is the hardest strand because it doesn't land on a clean integer.
We derive this by starting at R10(9), which we know is **8.0**, and going **down 1 step**.
Mathematically, going down 1 step is dividing by $1.2589...$, which is almost exactly multiplying by **0.8**.
* Start at R10(9) = 8.0.
* **R10(8)** $\approx 8.0 \times 0.8 =$ **6.4** (Anchor)
* Now, apply the "Subtract 3 is Half" rule:
* **R10(5)** $\approx 6.4 / 2 =$ **3.2**
* **R10(2)** $\approx 3.2 / 2 =$ **1.6**
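Spelled out as a minimal Python sketch of the three-strand procedure above (just an illustration; the point is that you can do it in your head):

```python
# Build the mental R10 approximations from the three strands.
mental = {0: 1.0, 10: 10.0}          # the two base anchors

for i in (3, 6, 9):                  # Strand A: double every 3 indices going up from 1
    mental[i] = mental[i - 3] * 2
for i in (7, 4, 1):                  # Strand B: halve every 3 indices going down from 10
    mental[i] = mental[i + 3] / 2

mental[8] = mental[9] * 0.8          # Strand C anchor: one step down is ~x0.8
for i in (5, 2):                     # ...then keep halving every 3 indices
    mental[i] = mental[i + 3] / 2

print([round(mental[i], 2) for i in range(11)])
# [1.0, 1.25, 1.6, 2.0, 2.5, 3.2, 4.0, 5.0, 6.4, 8.0, 10.0]
```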
### Summary Table (Mental vs Actual)
By using this mental model (Doubling, Halving, and the 0.8 factor), your approximations are incredibly close to the standard values.
| Index | Mental Derivation | Approx Value | Actual R10 Value |
| :--- | :--- | :--- | :--- |
| **0** | Base | **1.00** | 1.00 |
| **1** | $10 \div 8$ | **1.25** | 1.25 |
| **2** | $3.2 \div 2$ | **1.60** | 1.60 |
| **3** | $1 \times 2$ | **2.00** | 2.00 |
| **4** | $5 \div 2$ | **2.50** | 2.50 |
| **5** | $6.4 \div 2$ | **3.20** | 3.15 |
| **6** | $2 \times 2$ | **4.00** | 4.00 |
| **7** | $10 \div 2$ | **5.00** | 5.00 |
| **8** | $8 \times 0.8$ | **6.40** | 6.30 |
| **9** | $4 \times 2$ | **8.00** | 8.00 |
| **10** | Base | **10.00** | 10.00 |
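If you want to double-check the table, here is a short sketch comparing the mental values against the exact $10^{i/10}$ values (the "Actual R10 Value" column above uses the standard rounded Renard values, hence the small gaps at indices 5 and 8):

```python
# Compare the mental approximations with the exact values 10**(i/10).
mental = [1.0, 1.25, 1.6, 2.0, 2.5, 3.2, 4.0, 5.0, 6.4, 8.0, 10.0]
for i, m in enumerate(mental):
    exact = 10 ** (i / 10)
    print(f"R10({i:2d}): mental {m:5.2f}   exact {exact:6.3f}   error {100 * (m / exact - 1):+.1f}%")
```

The worst cases are indices 2, 5, and 8 (the 0.8-factor strand), all about 1–1.5% high; everything else is within ~0.7%.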
### Quick Reference for Your Brain
1. **0, 3, 6, 9:** Just say **1, 2, 4, 8**.
2. **1, 4, 7:** Start at 10 and halve backwards ($10 \to 5 \to 2.5 \to 1.25$).
3. **2, 5, 8:** Remember **6.4** (from $8 \times 0.8$), then halve backwards ($6.4 \to 3.2 \to 1.6$).
For the 5th R10 number you can also use the coincidence that the square root of 10 is close to $\pi$ (it was used in antiquity to approximate $\pi$), or for the 8th number you can use $2\pi \approx 6.28$ (I personally just memorized it in middle school), but neither is really necessary for mental calculations.
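For reference, the numbers behind both coincidences: $\sqrt{10} \approx 3.162$ vs. $\pi \approx 3.142$ (about 0.7% apart), and $10^{0.8} \approx 6.310$ vs. $2\pi \approx 6.283$ (about 0.4% apart).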
As a side note, Secret Hitler is actually a tabletop game about convincingly lying and identifying lies, but also about counting probabilities, as in poker. If the post author hasn't tried it, I can recommend it.
They are used for cybercrime and rumored to be deployed for state-funded espionage.
To make it a little more substantial: web-browsing agents with some OSINT skills (multimodal models already geolocate photos taken in Western urban areas comparably to human experts) offer the prospect of automating, or at least significantly speeding up and making much cheaper, targeted attacks like spearphishing.
A quick side note: in the 17 years that have passed since the post you cite was written, the historiography of connectionism has moved on, and we now know that modern backpropagation was invented as early as 1970 and first applied to neural nets in 1982 (technology transfer was much harder before web search!); see https://en.wikipedia.org/wiki/Backpropagation#Modern_backpropagation and the references therein.
I think it does, among other things, actually investigate cross-border crime, just on a small scale due to limited resources; check this: https://www.frontex.europa.eu/what-we-do/operations/operations
police force
Actually, since 2016 the EU has had a relatively small border police force called Frontex (~3700 officers as of this writing, which is about 1/6 larger than the police of Luxembourg)! The European Commission president would like to increase it by an order of magnitude in a few years, but member states are not very enthusiastic.
Why would anyone want to pay a fortune for a system that is expected to let ~40 warheads through (assuming a ~99% overall interception rate, which would require an average rate of 99.99%+), about the same as the number of ICBMs the Soviet Union had in service during the Cuban Missile Crisis? Unacceptable damage is the cornerstone of nuclear deterrence, MAD or not (there is no MAD between India and Pakistan, for example).
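To spell out the arithmetic (a back-of-the-envelope sketch; the ~4000-warhead total is my assumption, chosen only to match the ~40 and ~99% figures above): $4000 \times (1 - 0.99) = 40$ warheads getting through.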
The RV separation distance is normally around 100 km (even up to 300 km in some cases), not 10 km, and decoy dispersal might be expected to be on the same order of magnitude. It would also be easy to ramp that up with a cheap modernization, by the way.
None of the US adversaries really practice counterforce targeting, so the silo protection is moot.
lower EQ
I don't think it's relevant here: judging by the EQ-Bench leaderboard, GPT-5 is on par with GPT-4o and has far higher EQ than any of the Anthropic models!
Even if it has some influence, it should be much smaller than that of emoji usage (remember the scandal about Llama 4 on LMSys) and certainly not comparable to that of the sycophancy.
I like to imagine the whole GPT-5 launch from the perspective of a cigarette company.
OpenAI is Philip Morris over here. Realized they make a product that addicts and hurts people. Instead of feeding it, they cut it off. The addicts went insane and OpenAI unfortunately caved.
— u/ohwut at https://www.reddit.com/r/OpenAI/comments/1mlzo12/comment/n7uko9n
Thanks, I have seen that and thought about it. The best explanation I have come up with is that Meta is not really a competitor to Google anymore; they lag way behind (there's also a hypothesis of "offloading" TPU depreciation from GCP to clients, but that seems less important than the profits from GCP, and a possibility of Meta promising to integrate TPUs and PyTorch).
SemiAnalysis reported that a similar deal might have been considered with OpenAI, and that Anthropic is actually buying the hardware (in parallel to renting it), but this looks dubious to me: why would they do that, eating into the profit of their cloud services and competing with them? SemiAnalysis offers no explanation.