(...) the term technical is a red flag for me, as it is many times used not for the routine business of implementing ideas but for the parts, ideas and all, which are just hard to understand and many times contain the main novelties.
- Saharon Shelah
As a true-born Dutchman I endorse Crocker's rules.
For most of my writing see my short-forms (new shortform, old shortform)
Twitter: @FellowHominid
Personal website: https://sites.google.com/view/afdago/home
Reasons to think Lobian Cooperation is important
Modal Lobian cooperation is usually dismissed as irrelevant to real situations, but it is plausible that Lobian cooperation extends far more broadly than what has been proved so far.
It is plausible that much of the cooperation we see in the real world is actually approximate Lobian cooperation, rather than cooperation driven purely by traditional game-theoretic incentives.
Lobian cooperation is far stronger in cases where the players resemble each other and/or have access to one another's blueprint. This is arguably only very approximately the case between different humans, but it is much closer to being the case when we consider different versions of the same human through time, as well as subminds of that human.
In the future we may very well see probabilistically checkable proof protocols, generalized notions of proof like heuristic arguments, magical cryptographic trust protocols and formal computer-checked contracts widely deployed.
All these considerations could potentially make it possible for future AI societies to exhibit vastly more cooperative behaviour.
Artificial minds also have several features that make them intrinsically likely to engage in Lobian cooperation, e.g. their easy copyability (which might lead to giant 'spur' clans). Their source code and weights may be shared, and the widespread use of simulations may become feasible. All of this points towards the importance of Lobian cooperation, and of open-source game theory more generally.
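To make the "access to one another's blueprint" point concrete, here is a small sketch (my own illustration, not from the post) of evaluating modal agents in finite linear Kripke frames for provability logic GL, the setting in which Lobian cooperation is usually proved. Each agent's action is a fully modalized formula over both agents' actions; `box(f)` holds at a world iff `f` holds at all lower worlds, and the values stabilize as the frame deepens.

```python
# Sketch: modal agents evaluated in finite linear Kripke frames for GL.
# Variables ("a", "b") are each agent's action (True = cooperate); every
# variable occurrence sits under a box, so each world only depends on
# worlds below it.

def ev(formula, world, vals):
    """Evaluate a modal formula at one world of a linear frame."""
    if isinstance(formula, bool):
        return formula
    if isinstance(formula, str):          # variable lookup at a (lower) world
        return vals[world][formula]
    op, *args = formula
    if op == "box":                       # "provable": true at all lower worlds
        return all(ev(args[0], j, vals) for j in range(world))
    if op == "not":
        return not ev(args[0], world, vals)
    raise ValueError(op)

def run(equations, depth=10):
    """Iterate the frame upward; return the stabilized action of each agent."""
    vals = []
    for k in range(depth):
        vals.append({})
        for var, f in equations.items():
            vals[k][var] = ev(f, k, vals)
    return vals[-1]

# FairBot cooperates iff it can prove the opponent cooperates.
fairbots = {"a": ("box", "b"), "b": ("box", "a")}
print(run(fairbots))       # both cooperate, by Löb's theorem

# FairBot vs DefectBot: the proof search fails, so FairBot defects.
vs_defect = {"a": ("box", "b"), "b": False}
print(run(vs_defect))
```

The two printed cases reproduce the standard results: FairBot cooperates with itself (Löb's theorem bottoms out the regress) but is not exploitable by an unconditional defector.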
[With benefits also come drawbacks, like the increased capacity for surveillance and torture. Hopefully, future societies may develop sophisticated norms and technology to avoid these outcomes.]
The Galaxy brain take is the trans-multi-Galactic brain of Acausal Society.
IIRC, according to gwern the theory that IQ variation is mostly due to mutational load has been debunked by modern genomic studies [though mutational load definitely has a sizable effect on IQ]. IQ variation seems to be mostly like height: the result of the additive effects of many individual common allele variants.
I am a little confused about this. It was my understanding that exponential families are a distinguished class of families of distributions. For instance, they are regular (rather than singular).
I believe the family of Gaussian mixtures is not an exponential family.
So my conclusion would be that while "being Boltzmann" for a single distribution is trivial, as you point out, "being Boltzmann" (= exponential) for a family is nontrivial.
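For concreteness, here is the standard exponential-family form (a textbook definition, not from the thread):

```latex
% A family $\{p_\theta\}$ is exponential if it can be written as
\[
p_\theta(x) \;=\; h(x)\,\exp\!\big(\eta(\theta)^{\top} T(x) \,-\, A(\theta)\big),
\]
% with a fixed finite-dimensional sufficient statistic $T$, natural
% parameter $\eta(\theta)$, and log-partition function $A(\theta)$.
```

A two-component Gaussian mixture $\lambda\,\mathcal N(\mu_1,\sigma^2) + (1-\lambda)\,\mathcal N(\mu_2,\sigma^2)$ admits no such representation with a finite-dimensional $T$, which fits the regular/singular distinction above: the mixture family has singularities (e.g. where the components coincide), while exponential families are regular.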
If we solve the alignment problem then we solve the alignment problem.
I agree with this true statement.
I read the first part of this post. It is quite interesting. I've always thought Hedges' work should be better known on LessWrong.
Have you since thought about these topics? I'd be curious what your current take is.
A 0.01 m/s² acceleration will displace a spaceship 50 meters over 100 seconds.
In 100 seconds:
A missile moving at 10 km/s would move 1,000 km. A missile moving at 100 km/s would move 10,000 km. A torch missile moving at 1,000 km/s would move 100,000 km, 1/300th the speed of light. That is not realistic with purely chemical propulsion, but could be reached by multistage ORION propulsion. At this point the missile is somewhat of an entire spaceship unto itself. Accelerating to this speed would take considerable time: even at an eye-watering 100 g it would take a full 1,000 seconds just to reach top speed.
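The arithmetic above can be checked directly (my own sanity check, not from the post), using $d = \tfrac12 a t^2$ for constant acceleration and $d = vt$ for constant speed:

```python
# Sanity-check the kinematics figures in the text.

t = 100.0                                # engagement window, seconds

# 0.01 m/s^2 lateral acceleration over 100 s:
print(0.5 * 0.01 * t**2)                 # 50.0 m of displacement

# Missile closing distances over the same 100 s:
for v_kms in (10, 100, 1000):
    print(v_kms, "km/s ->", v_kms * t, "km")

# 1000 km/s as a fraction of c:
print(299_792.458 / 1000)                # ≈ 300, i.e. about 1/300th of c

# Time to reach 1000 km/s at 100 g (g ≈ 9.81 m/s^2):
print(1_000_000 / (100 * 9.81))          # ≈ 1019 s, roughly 1000 s
```

All four figures in the text check out, with the "1,000 seconds at 100 g" being a round-number approximation of about 1,019 s.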
Engagement ranges of > 100k km could be realistic. Using selenic drones one could extend the effective range of laser weapons beyond this range.
Scott Garrabrant conceived of FFS (finite factored sets) as an extension and generalization of Pearlian causality that answers questions the Pearlian framework does not handle well. He is aware of Pearl's work and explicitly builds on it: it is not so much a distinct approach as an extension. The paper you mentioned discusses the problem of figuring out what the right variables are, but poses no solution (as far as I can tell). That shouldn't be surprising, because the problem is very hard. Many people have thought about it, but there is only one Garrabrant.
I do agree with your overall perspective that people in alignment are quite insular, unaware of the literature and often reinventing the wheel.
That's nice to hear. Could you say more about your update towards open games?