J Thomas Moros

An aspiring rationalist who has been involved in the Columbus Rationality community since January 2016.

This is not sound reasoning because of selection bias. If any of those predictions had been correct, you would not be here to see it. Thus, you cannot use their failure as evidence.

As someone who believes in moral error theory, I have problems with the moral language ("responsibility to lead ethical lives of personal fulfillment", "Ethical values are derived from human need and interest as tested by experience.").

I don't think it's true that "Life's fulfillment emerges from individual participation in the service of humane ideals" or that "Working to benefit society maximizes individual happiness." Rather, I would say some people find some fulfillment in those things.

I am vehemently opposed to the deathist language of "finding wonder and awe in the joys and beauties of human existence, its challenges and tragedies, and even in the inevitability and finality of death." Death is bad and should not be accepted.

I assume there are other things I would disagree with, but those are a few that stand out when skimming it.

I agree with your three premises. However, I would recommend using a different term than "humanism".

Humanism is more than just the broad set of values you described. It is also a specific movement with more specific values. See for example the latest humanist manifesto. I agree with what you described as "humanism" but strongly reject the label humanist because I do not agree with the other baggage that goes with it. If possible, try to come up with a term that directly states the value you are describing. Perhaps something along the lines of "human flourishing as the standard of value"?

I signed up for cryonics with Alcor in 2017. I checked, and the two options you listed are consistent with the options I was given. I didn't have a problem with them, but I can understand your concern.

I have had a number of interactions with Alcor staff, both during the signup process and since, and I have always found them pleasant and helpful. I'm sorry to hear that you are having a bad experience. My suggestion would be to get the representative on the phone and discuss your concerns. Obviously, the final wording should be handled in writing, but I think a phone conversation would help you both understand what each of you would find acceptable.

In my opinion, the responses you have gotten probably arise from one of two sources. It is possible that she simply didn't read what you wrote carefully enough and fell back to boilerplate language closer to what their legal counsel has approved; she likely doesn't have the authority to accept major changes herself. If that is not what happened, then Alcor is most likely pushing this option to avoid legal issues, the problems they have had with families in the past, and delays in cryopreservation. They want a clear-cut decision procedure that doesn't depend on too many third parties. If cryopreservation is to go well, it needs to be done in a timely fashion; ideally, whoever performs it should have a clear and immediate path to begin when it is warranted. Any judgment call or requirement to obtain consent could cause unnecessary delays. You might think the situation will be clear, but any chance your wife could claim she should have been consulted and wasn't could cause legal problems, so Alcor may be forced to consult her in all but the most clear-cut cases. Again, just schedule a call.

As a proponent of cryonics, I hope you will persist and work through this issue. Please message me if there are other questions I can answer for you. If you choose not to proceed, you can choose to keep the insurance policy and designate another recipient rather than canceling it.

P.S. Having researched all the cryonics organizations, Alcor is by far the best. They are still small, but they are working the hardest to become a fully professional organization, and their handling of legal issues and their financial structure are much better. The Cryonics Institute (CI) is run by well-meaning people who are less professional; they are more of a volunteer organization. Having attended a CI annual meeting, I was disappointed that their investment strategy was not sufficiently conservative and far-sighted. I think CI may actually be underfunded for the goal of still existing 100 years from now.

While I can understand why many would view advances toward WBE as an AI-safety risk, many in the community are also concerned with cryonics. WBE is an important option for the revival of cryonics patients. So I think the desirability of WBE should be clear. It just may be the case that we need to develop safe AI first.

As someone interested in seeing WBE become a reality, I have also been disappointed by the lack of progress, and I would like to understand the reasons for it better. So I was interested to read this post, but you seem to be conflating two different things: the difficulty of simulating a worm and the difficulty of uploading a worm. There are a few sentences hinting that both are unsolved, but they should be clearly separated.

Uploading a worm requires being able to read the synaptic weights, thresholds, and possibly other details from an individual worm. Note that it isn't accurate to say the worm must be alive: it would be sufficient to freeze an individual worm and then spend extensive time and effort reading that information out. Nevertheless, I can imagine that might be very difficult to do. According to wormbook.org, C. elegans has on the order of 7,000 synapses, and I am not sure we know how to read the weight and threshold of a synapse. This strikes me as a task requiring significant technological development that isn't in line with existing research programs. That is, most research is not attempting to develop the technology to read specific weights and thresholds, so it would require a significant, well-funded effort focused specifically on that goal. Given the reported lack of funding, I am not surprised this has not been achieved; nor am I surprised that the funding is lacking.

Simulating a worm should only require an accurate model of the behavior of the worm nervous system and a simulation environment. Given that all C. elegans have the same 302 neurons, this seems like it should be feasible. Furthermore, the learning mechanism of individual neurons, the operation of synapses, etc. should all be things researchers outside the worm emulation efforts are interested in studying. Were I trying to advance the state of the art, I would focus on making an accurate simulation of a generic worm that is capable of learning. Then I would simulate it in an environment similar to its native environment and try to demonstrate that it eventually learned behavior matching real C. elegans, including under conditions in which real C. elegans learn. That is why I was very disappointed to learn that the "simulations are far from realistic because they are not capable of learning." It seems to me this is where the research effort should focus, and I would like to hear more about why this is challenging and hasn't already been done.
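To make the "generic worm that is capable of learning" idea concrete, here is a toy sketch only, emphatically not a model of C. elegans: a tiny rate-based network with random initial weights (all values hypothetical) and a simple Hebbian update, illustrating the kind of "neurons plus learning rule" building block such a simulation would need.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 302                             # C. elegans neuron count, for flavor only
W = rng.normal(0, 0.1, (N, N))      # synaptic weights (hypothetical values)
W0 = W.copy()                       # keep initial weights to show learning occurred

def step(activity, stimulus):
    # Leaky, rate-based integration of synaptic input plus external stimulus.
    return np.tanh(W @ activity + stimulus)

def hebbian_update(pre, post, lr=0.01):
    # "Cells that fire together wire together": strengthen co-active synapses.
    global W
    W += lr * np.outer(post, pre)

activity = np.zeros(N)
stimulus = np.zeros(N)
stimulus[0] = 1.0                   # drive one "sensory" neuron

for _ in range(10):
    new_activity = step(activity, stimulus)
    hebbian_update(activity, new_activity)
    activity = new_activity
```

A real effort would of course need biophysically grounded neuron models, the actual connectome, and a body/environment simulation; the point of the sketch is only that the generic machinery is simple compared to reading weights out of an individual worm.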

I believe that worm uploading is not needed to make significant steps toward showing the feasibility of WBE. The kind of worm simulation I describe would be more than sufficient. At that point, reading the weights and thresholds of an individual worm becomes only an engineering problem that should be solvable given a sufficient investment or level of technological advancement.

A study by Alcor trained C. elegans worms to react to the smell of a chemical and then demonstrated that the worms retained this memory even after being frozen and revived. Were it possible to upload a worm, the same test could distinguish an upload of a trained worm from an upload of an untrained one.

Study here: Persistence of Long-Term Memory in Vitrified and Revived Caenorhabditis elegans

I think you are being overly optimistic about homomorphic encryption. The uFAI doesn't need absolute control over how the computation happens, nor does it need to perfectly predict the real-world results of running some computation; it only needs some amount of information leakage. The best current example I can think of is timing attacks on cryptographic protocols: the protocol itself may be secure, but a side channel makes the implementation insecure. Another example is the Meltdown and Spectre vulnerabilities. How do you know your computational system doesn't have something like that?
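As a concrete illustration of how a "correct" computation can still leak information through a side channel, here is a hypothetical toy in the spirit of a timing attack: a naive early-exit string comparison whose work depends on how long the matching prefix is. Step counts stand in for wall-clock time; no real protocol is modeled.

```python
def naive_compare(secret: str, guess: str) -> tuple[bool, int]:
    """Compare strings, also returning the number of steps taken (a timing proxy)."""
    steps = 0
    if len(secret) != len(guess):
        return False, steps
    for a, b in zip(secret, guess):
        steps += 1
        if a != b:          # early exit: steps taken reveal the matching prefix length
            return False, steps
    return True, steps

secret = "hunter2"
_, steps_one = naive_compare(secret, "hxxxxxx")   # only the first character matches
_, steps_two = naive_compare(secret, "huxxxxx")   # the first two characters match
# steps_two > steps_one: a "closer" guess measurably does more work, so an
# observer of timing alone can recover the secret one character at a time.
```

The comparison's return value is perfectly correct in every case; the leak lives entirely in how the answer was produced, which is exactly the class of channel the comment is worried about.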

The first idea that comes to mind is pausing. Assume the uFAI has some way to pause computation (pausing is a very common operation, and an equivalent could be achieved by running a specific computation in a tight loop). Then the uFAI might be able to signal in Morse code by alternating pauses, long computations, and short computations. That exact scheme is probably naive, but do you really believe the uFAI won't find any such vulnerability in your system?
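The long/short scheme above can be sketched roughly as follows. This is a hypothetical toy: iteration counts stand in for the observable wall-clock duration of each burst of work, and the "observer" recovers the bits purely from how long each burst ran.

```python
SHORT, LONG = 1_000, 10_000   # iteration counts standing in for burst durations

def busy_work(iterations: int) -> int:
    """Burn CPU for a given number of iterations; return the burst length,
    which an outside observer could measure as wall-clock time."""
    total = 0
    for i in range(iterations):
        total += i
    return iterations

def transmit(bits):
    # Inside the "sealed" computation: encode each bit as a long or short burst.
    return [busy_work(LONG if b else SHORT) for b in bits]

def observe(durations, threshold=5_000):
    # Outside the computation: recover bits by thresholding observed durations.
    return [1 if d > threshold else 0 for d in durations]

message = [1, 0, 1, 1, 0]
recovered = observe(transmit(message))
```

Real measurements would be noisy, but error-correcting codes make even a very noisy channel usable, which is why a single overlooked side channel can be enough.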

I doubt the lack of six-door cars has much to do with aesthetics. Doors and their tight seals are among the more complex and expensive parts of a car body, and doors pose challenges for crash safety because each opening weakens the body's structural integrity in an accident. I suspect the reason there are so few six-door cars is the extra manufacturing cost, which would raise the price of the car. Most purchasers don't value the convenience of the additional doors enough to pay that premium, so any company producing such a car would face a very small market, which may make it not worth the manufacturer's while.

Recently many sources have reported a "CA variant" with many of the same properties as the English and South African strains. I haven't personally investigated, but that might be something to look into. Especially given the number of rationalists in CA.
