I don't see anything fundamentally wrong with Voldemort's approach. To identify and destroy those horcruxes, the protagonists surely spent a significant amount of time, at great personal expense. To me, the scheme already achieved its intended effect.
In cryptography, Shamir's Secret Sharing Scheme (SSSS) embodies the same idea: the algorithm splits an encryption key into multiple shares, which can then be guarded by different trustees. The encryption key, and hence the secret information, can only be unlocked when a threshold of trustees (most or all of them) are compromised or agree to release their shares. This is extremely useful for many problems, and it also foreshadowed a new subfield of cryptography called Secure Multi-Party Computation (MPC). I think it's fair to call it a product of the "true deep security mindset".
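The split-and-reconstruct idea can be sketched in a few lines of Python: encode the secret as the constant term of a random polynomial over a prime field, hand out points on the polynomial as shares, and recover the secret by Lagrange interpolation at zero. This is a toy illustration, not a production library; the prime and the parameter choices below are my own.

```python
import secrets

# Illustrative field size: a Mersenne prime large enough for a small secret.
PRIME = 2**127 - 1

def split_secret(secret, n_shares, threshold):
    """Split `secret` into n_shares points; any `threshold` of them recover it."""
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    return [
        (x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
        for x in range(1, n_shares + 1)
    ]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, -1, PRIME) is the modular inverse of the denominator.
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

With fewer than `threshold` shares, the interpolation yields a field element that is statistically independent of the secret, which is exactly why the trustees are "uncorrelated" in the sense discussed above.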
Yudkowsky said "seven keys hidden in different places [in the filesystem]" is silly because the keys are not conditionally independent: the entire filesystem could be bypassed altogether, and an attacker who can find the first key is likely able to find the next one as well.
> [...] the chance of obtaining the seventh key is not conditionally independent of the chance of obtaining the first two keys. If I can read the encrypted password file, and read your encrypted encryption key, then I've probably come up with something that just bypasses your filesystem and reads directly from the disk.
But Shamir's shares, like Voldemort's horcruxes, are essentially uncorrelated with each other and cannot be bypassed. I think the different shapes and forms of Voldemort's horcruxes are actually a good demonstration of "security through diversity": intentionally decorrelate the redundant parts of the system, e.g. don't use the same operating system, don't trust the same people. The Tor Project identified the Linux monoculture as a security risk and encourages people to run more FreeBSD and OpenBSD relays.
Thus, I think not mentioning Voldemort's horcruxes was the correct decision. Misguided reliance on redundancy is "ordinary paranoia" and dangerous: attaching 7 locks to a breakable door, or adding secure secret sharing to a monolithic kernel, probably does little to improve security (even with conditionally independent keys), and the Tor Project's platform-diversity effort makes only a small (but still useful) contribution to its overall network security, since all relays run the same Tor executable. Nevertheless, redundancy itself can be "deep security".
Thanks for the info. Your comment is the reason why I'm on LessWrong.
The lack of progress here may be a quite good thing.
Did I miss some subtle cultural changes at LW?
I know rationalism and AI safety have been founding principles of LW from the start. But in my mind, LW has always hosted all sorts of adjacent topics and conversations, with many different perspectives. Or at least that was my impression of the 2010s LW threads on the Singularity and transhumanism. Did these discussions shift more and more toward AI safety and derisking over time?
I'm not a regular reader of LW; any explanation would be greatly appreciated.
Optogenetics was exactly the method David proposed; I've just updated the article to include a full quote.
I originally thought my post was already a mere summary of the previous LW posts by jefftk, and that excessive quotation would make it too unoriginal; interested readers could simply follow the links to read more. But I've just realized that giving sufficient context is important when you're restarting a forgotten discussion.
David believed one could develop optogenetic techniques to do this. I've just added David's full quote to the post.
> With optogenetic techniques, we are just at the point where it's not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks.
> To see why this isn't enough, consider that nematodes are capable of learning. [...] For example, nematodes can learn that a certain temperature indicates food, and then seek out that temperature. They don't do this by growing new neurons or connections, they have to be updating their connection weights. All the existing worm simulations treat weights as fixed, which means they can't learn. They also don't read weights off of any individual worm, which means we can't talk about any specific worm as being uploaded.

> If this doesn't count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to simulated stimulus the same way their physical versions had, that would be good progress. Additionally you would want to demonstrate that similar learning was possible in the simulated environment.
(just included the quotation in my post)
Did David build an automated device to collect data from living cells? If not, was it because of some sudden, unexpected difficulty that 100+ people and a multi-million-dollar budget couldn't solve, or was it because... those people weren't there, and neither was the funding?
Good points. I did more digging and found some relevant information I'd initially missed; see "Update". He didn't, and funding was indeed a major factor.