Software engineering, parenting, cognition, meditation, other
Linkedin, Facebook, Admonymous (anonymous feedback)
There used to be a downloadable archive of all posts on the old LW 1.0 site. I think there was a SiteMeta post about it, but I can't find it, and the link wouldn't work anymore anyway. I think such an archive is a nice thing to have and also avoids unnecessary scraping.
I agree that there are multiple interpretative challenges around agency that have a common root. However, I think decision theories and free will are distinct from Russell's paradox and mathematical consistency problems. I think PBR helps with the former but not the latter. At some level, there is a commonality to all "problems" in and with symbolic reasoning. But I think PBR will not help with that, as it is itself symbolic.
I wonder whether there is a connection to reading. Some young children don't seem to have trouble stringing the phonemes of multiple letters into words, while others (including mine) seemed to take a very long time to go from reading single letters to reading words (reading letters at three and words at five).
Did you get any ideas about how the brain learns language from this? It seems to point to a pretty strong auditory loop (though I never understood how that could be implemented in the brain).
You use RLHF to try to avoid damaging answers to certain questions. In doing so, you necessarily reinforce against accurate maps and logical consistency in the general case, unless you do something highly bespoke to prevent this from happening.
This is a great example of the problem outlined in the Parable of Lightning, including the "unless you do something highly bespoke" part, such as cities of academics.
I think that's maybe the point people can agree on: building a machine that performs well. That goes beyond building a decision procedure that performs well in many specific situations (each corresponding to an observer moment) but not in a succession of them, or in situations that require its own analyzability. Building such a machine requires specifying what it optimizes over, which will potentially be very many observer moments.
There is also Robin Hanson's Fifth Meta Innovation. In a comment on the old Disqus (since lost), I predicted that it would be efficient copying of acquired knowledge. We now have mechanisms to copy, transmit, and generate knowledge. But it still takes time to learn and understand knowledge, i.e., to apply it in the real world. That, so far, only happens in human brains, which take years to integrate knowledge. We can't copy brains, but we can copy and scale LLMs, and likely other future systems that can apply knowledge in the world. That will speed up putting knowledge into practice.
As much as I like PBR, in this post I have the impression that treating the first-person perspective as axiomatic is taken too far, into an objective of its own. I think the goal should not be to give up on figuring out what the relations between different perspectives are, or what physical structures give rise to them.
I don't think biological superintelligence completely avoids the safety problems. You can see it already with very powerful humans. Imagine a very powerful human dictator who lives forever.