PhD student studying the epigenomics of ageing with bioinformatic methods. Former president of Humanist Students, the national umbrella group for humanist groups at universities in the UK.
I have a reading list and recommendations for sources of ongoing news in the longevity/immortality/life extension space in the show notes for the recent special episode of my podcast, where my co-host Michael and I discuss ageing and immortality. We are both biology PhDs: my background is in the epigenetics of ageing, and Michael's is in bone stem cells.
I should actually add "Immune: A Journey into the Mysterious System That Keeps You Alive" to that list.
In particular from that list I recommend these for 'coming at biology from a physics perspective':
To clarify: it's the ability to lock your bootloader that I'm saying is better protection from third parties, not the proprietary nature of many of the current locks. The Heads tools, for example, which allow you to verify the integrity of your boot image in coreboot, would be a FOSS alternative that provides analogous protection. Indeed, it's not real security if it's not out there in the open for everyone to hammer with full knowledge of how it works, with some nice big bug bounties (intentional or unintentional) on the other side to incentivise some scrutiny.
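The core idea behind verifying a boot image can be sketched very simply: record a digest of the image while the system is known-good, then compare a fresh measurement against it before trusting the boot. This is only an illustration of that basic check (with invented names); Heads itself layers TPM measurements and GPG signatures on top of it.

```python
import hashlib
import hmac

def measure(image: bytes) -> str:
    """Return a SHA-256 measurement of a boot image, hex-encoded."""
    return hashlib.sha256(image).hexdigest()

def verify(image: bytes, trusted_digest: str) -> bool:
    """Compare a fresh measurement against a digest recorded when the
    system was known-good. compare_digest avoids timing side channels.
    A toy stand-in for what verified-boot tooling does, not Heads itself."""
    return hmac.compare_digest(measure(image), trusted_digest)
```

Any tampering with the image, however small, changes the digest and fails the check.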
Thanks for the link. The problem of how to establish a cryptographic root of trust for an uploaded person, and how to maintain an ongoing state of trusted operation, is a tricky one that I'm aware people have discussed, though it's mostly well over my cryptography pay grade. The main point I was trying to get at was not primarily about uploaded brains: I'm using them as an anchor at the extreme end of a distribution that I'm arguing we are already on. The problems an uploaded brain would have in trusting its own cognition are ones we are already beginning to experience in the aspects of our cognition that we outsource.
Human brains are not just general-purpose CPUs: much of our cognition is performed on the wetware equivalent of application-specific integrated circuits (ASICs), ASICs that were tuned for applications of waning relevance in the current environment. They were tuned for our environment of evolutionary adaptedness, but the modern world presents very different challenges. By analogy, it's as if they were tuned for SHA-256 hashing but Ethereum changed the hash function, so the returns have dropped. Not to mention that biology uses terrible, dirty, hacky heuristics that would make a grown engineer cry and statisticians yell "WHY!" at the sky in existential dread. These leave us wide open to all sorts of subtle exploits that can be utilised by those who have studied the systematic errors we make, and if they don't share our interests this is a problem.
Note that I am regarding the specifics of an uploaded brain as personal data, which should be subject to privacy protections (both at the technical and policy level), and not as code. This distinction may be less clear for more sophisticated mind upload methods which generate an abstract representation of your brain and run that. If, however, we take a conceptually simpler approach, the data/code distinction is cleaner. Let's say we have an 'image' of the brain which captures the 'coordinates' (quantum numbers) of all of the subatomic particles that make up your brain. We then run that 'image' in a physics simulation which can also emulate sensory inputs to place the uploadee in a virtual environment. The brain image is data; the physics and sensory emulation engine is code. I suspect a similar reasonable distinction will continue to hold quite well for quite a while, even once your 'brain' data starts being represented in a more complex data structure than an N-dimensional matrix.
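The separation can be sketched in a toy form (this is not a real physics model, and all the names are invented for illustration): the 'brain image' is plain, serialisable data that privacy protections would apply to, while the engine that advances it is generic code carrying no personal information.

```python
import copy
import json

# Toy 'brain image': pure data, here a list of particle states.
brain_image = [
    {"pos": [0.0, 0.0, 0.0], "vel": [1.0, 0.0, 0.0]},
    {"pos": [1.0, 2.0, 0.0], "vel": [0.0, -1.0, 0.0]},
]

def step(image, dt):
    """Toy 'physics engine': code that reads the data and returns an
    updated copy, leaving the original image untouched."""
    new_image = copy.deepcopy(image)
    for p in new_image:
        p["pos"] = [x + v * dt for x, v in zip(p["pos"], p["vel"])]
    return new_image

# Because the image is just data, it can be serialised, encrypted and
# access-controlled like any other personal data.
blob = json.dumps(brain_image)
```

The same engine could run any image handed to it, which is exactly why the trust question attaches to the code and the privacy question to the data.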
I actually think mind uploading is a much harder problem than many people seem to regard it as; indeed, I think it is quite possibly harder than getting to AGI de novo in code. This is for reasons related to neurobiology, imaging technology, and the computational tractability of physics simulations, and I can get into them at greater length if anyone is interested.
The fact that they exert some of that power (an ever-increasing amount) through software makes the question of the freedom of that software quite relevant to your autonomy in relation to those factors. Consider the g0v movement: when working with open government software, or at least open APIs, civic hackers have been able to win improvements in things like government budgetary transparency, the ease with which you can file your tax forms, the ability to locate retailers with face masks in stock, etc. The ability to fork the software used by institutions, do better, and essentially embarrass them into adopting the improvements because of how bad their versions are in comparison is surprisingly high leverage.
Data is its own complex problem, especially personal data, and warrants a separate discussion all of its own. In relation to free software, though, the most relevant parts are open data specifications for formats and data portability between applications, so that you are free to take your data between applications.
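Portability in practice just means round-tripping your data through an openly specified format that any other application can parse. A minimal sketch (record fields invented for the example) using CSV, whose common form is documented in RFC 4180:

```python
import csv
import io

# Hypothetical records from 'application A'.
records = [
    {"name": "task one", "done": "yes"},
    {"name": "task two", "done": "no"},
]

def export_csv(rows):
    """Export records in an open format any application can read."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["name", "done"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def import_csv(text):
    """'Application B' re-imports the same records from the open format."""
    return list(csv.DictReader(io.StringIO(text)))
```

Because the format is openly specified, the importer needs no knowledge of the exporter's internals, which is the whole point of data portability.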
Yes, a lot of in-house software has terrible UX, mostly because it is often for highly specialised applications. It may also suffer from a limited budget, poor feedback cycles (if it was made as a one-off by an internal team or contractor), a tiny target user group, lack of access to UX expertise, etc.
Companies will optimise for their own workflows, no doubt, but there is often substantial overlap with common issues. Consider the work Red Hat/IBM did on PipeWire and WirePlumber, which will soon deliver a substantially improved audio experience for the Linux desktop as a result of work they were doing anyway for automotive audio systems.
I'm not that current with Blender, but I'm given to understand there have been some improvements in usability recently as it has seen wider industry adoption and efforts have been directed at improving UX. Large firms with many people using a piece of software are motivated to fund efforts to make it easier to use, as that makes onboarding new employees easier. Though given that Blender is a fairly technical and specialist application, I would not be surprised if it remained somewhat hard to use; it's not as if there are no UX issues with similarly specialist proprietary apps.
I would regard the specifics of your brain as private data. The infrastructural code to take a scan of an arbitrary brain and run its consciousness is a different matter. It's the difference between application code and a config file / secrets used in deploying a specific instance. You need to be able to trust the app that is running your brain, e.g. to not feed it false inputs.
Maybe, but I would be interested to see that tested empirically by some major jurisdiction. I would bet that, absent an easy option to use proprietary software, many more firms would hire developers or otherwise fund the development of features they needed for their work, including usability and design coherence. There is a lot more community incentive to make software easy to use if the community contains more businesses whose bottom lines depend on it being easy to use. I suspect proprietary software may have us stuck in a local minimum; just because some of the current solutions produce partial alignments does not mean there aren't more optimal solutions available.
Yes, I'm merely using an emulated consciousness as the idealised example of a problem that applies to non-emulated consciousnesses that are outsourcing cognitive work to computer systems which are outside of their control and may be misaligned with their interests. This is a bigger problem for you if you are completely emulated, but still a problem if you are using computational prostheses. I say it is bottlenecking us because even its partial form seems to be undermining our ability to have rational discourse in the present.
Dan Dennett has an excellent section on a very similar subject in his book 'Freedom Evolves'. To use a computer science analogy: true telepathy would be two or more machines with different instruction set architectures being able to cross-compile code that is binary-compatible with the other ISA and transmit the blob directly. Instead, we have to serialise to a poorly defined standard and then read the resulting file with a library that is only a best guess at an implementation of the de facto spec.
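A toy model of that analogy (all names and the 'format' are invented): the sender flattens a rich internal representation into an underspecified serial form, and the receiver re-parses it with its own best-guess implementation, losing whatever the format couldn't carry.

```python
def serialise(thought: dict) -> str:
    """Sender: flatten rich internal structure into an underspecified
    'natural language' format; fields with no slot in the format are lost."""
    return f"{thought['subject']} is {thought['feeling']}"

def parse(utterance: str) -> dict:
    """Receiver: a best-guess parser for the sender's de facto schema."""
    subject, _, feeling = utterance.partition(" is ")
    return {"subject": subject, "feeling": feeling}

original = {"subject": "the weather", "feeling": "bleak", "intensity": 0.8}
received = parse(serialise(original))
# 'received' recovers subject and feeling, but 'intensity' never made
# it through the serial format -- the round trip is inherently lossy.
```

The two endpoints never share a binary-compatible representation; everything passes through the lossy intermediate format, which is the point of the analogy.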
I don't know, I'd say that guy torched a lot of future employment opportunities when he sabotaged his repos. Also, obligatory: https://xkcd.com/2347/