harsimony

I am a longtime LessWrong and SSC reader who finally got around to starting a blog. I would love to hear feedback from you! https://harsimony.wordpress.com/

Comments

Predictive Coding has been Unified with Backpropagation

Good point!

Do you know of any work that applies similar methods to study the equivalent kernel machine learned by predictive coding?

Predictive Coding has been Unified with Backpropagation

Is it known whether predictive coding is easier to train than backprop? Local learning rules seem like they would be more parallelizable.
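For concreteness, here is a minimal sketch of the kind of local update rule predictive coding uses, under the standard prediction-error formulation. This is my own illustration, not from the post; the network is linear for brevity, and `pc_step` and all hyperparameters are made up. The point is that each update touches only adjacent layers, which is what makes parallel training plausible:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear predictive coding network with layer sizes 4 -> 8 -> 2.
sizes = [4, 8, 2]
W = [rng.normal(0.0, 0.1, size=(sizes[l + 1], sizes[l])) for l in range(2)]

def pc_step(x_in, y_target, W, n_infer=20, lr_x=0.1, lr_w=0.01):
    # Value nodes: input and output layers are clamped to the data.
    x = [x_in, np.zeros(sizes[1]), y_target]
    for _ in range(n_infer):
        # Prediction errors are local: each layer compares its activity
        # to the prediction arriving from the layer below.
        e = [x[l + 1] - W[l] @ x[l] for l in range(2)]
        # Relax the hidden layer to reduce the total prediction error.
        x[1] = x[1] + lr_x * (W[1].T @ e[1] - e[0])
    # Recompute errors after relaxation, then update weights. Each update
    # needs only the local error and pre-synaptic activity, so every
    # layer could update simultaneously; no global backward pass needed.
    e = [x[l + 1] - W[l] @ x[l] for l in range(2)]
    for l in range(2):
        W[l] = W[l] + lr_w * np.outer(e[l], x[l])
    return W

# Hypothetical usage on random data:
W = pc_step(rng.normal(size=4), rng.normal(size=2), W)
```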

"Every Model Learned by Gradient Descent Is Approximately a Kernel Machine" seems relevant to this discussion:

We show, however, that deep networks learned by the standard gradient descent algorithm are in fact mathematically approximately equivalent to kernel machines, a learning method that simply memorizes the data and uses it directly for prediction via a similarity function (the kernel). This greatly enhances the interpretability of deep network weights, by elucidating that they are effectively a superposition of the training examples. The network architecture incorporates knowledge of the target function into the kernel.
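As a quick illustration of that claim, a kernel machine's prediction is just a weighted sum of similarities to the stored training examples. Below is a minimal sketch of that general form (my own, not from the paper); the paper's actual result involves a specific "path kernel" induced by gradient descent, so the RBF kernel here is only a stand-in, and all names are illustrative:

```python
import numpy as np

def rbf_kernel(x, xi, gamma=1.0):
    # Generic similarity function between a query and a stored example.
    # (The paper derives a "path kernel" from gradient descent; this
    # RBF kernel is just a placeholder to show the general form.)
    return np.exp(-gamma * np.sum((x - xi) ** 2))

def kernel_machine_predict(x, X_train, a, b, kernel=rbf_kernel):
    # "Memorizes the data and uses it directly for prediction":
    # y(x) = sum_i a_i * K(x, x_i) + b.
    return sum(ai * kernel(x, xi) for ai, xi in zip(a, X_train)) + b
```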

How I come up with ideas

Do you have a good system for saving and prioritizing the ideas you have?

Building a habit of noticing a new idea and writing it down (even if it's silly) has increased my overall output dramatically.

Is Immortality Ethical?

Yes, I definitely agree that this is not an argument against research on life extension.

Merging is very interesting; it seems like it would also prevent AIs or humans from self-copying too much. Agents could also merge in a temporal sense by, say, alternating the days on which they run, or by running more slowly (and thus using fewer resources).

Is Immortality Ethical?

I completely agree and I support further research on radical life extension (we might as well have the option and decide on the ethics later).

This is not a near-term issue in any sense; rather, it is more of an "ethical brainteaser" meant to identify the different intuitions people have.

Is Immortality Ethical?

I appreciate your feedback!

On reflection, I agree that this post could benefit from a clearer example framed as a decision. Of course, I have also sidestepped the discussion of ethical premises.

More context on population ethics would be nice, but I wanted to avoid focusing on the Repugnant Conclusion, because the goal was to examine a more neglected question: how do we trade off utility between existing lives and potentially-existing lives?

This is no different than any other resource allocation question, is it?

Sort of! It is mostly a resource allocation question if you are willing to trade utility between existing lives and potentially-existing lives (as I am), but many people instinctively disagree with this. Additionally, there are network effects to consider: scenario 2 allows many activities that are not possible in scenario 1.

I don't have concrete answers to all of these questions, but most people seem to have a strong presumption in favor of a world more like scenario 1, which seems unjustified.

Grey Goo Requires AI

I agree. I think humanity should build some sort of grey-goo-resistant shelter and, if/when nanotechnology is advanced enough, create some sort of blue-goo defense system (perhaps it could be designed after the fact, from within the shelter).

The fact that these problems seem tractable, and that (in my estimation) we will achieve dangerous AI before dangerous nanotechnology, suggests to me that preventing AI risk should take priority over preventing nanotechnology risks.

Grey Goo Requires AI

These are good points.

The scenarios you present would certainly be catastrophic (and are cause for great concern about, and research into, nanotechnology), but could all of humanity really be consumed by self-replicators?

I would argue that even if they were maliciously designed, self-replicators would have to outsmart us in some sense in order to become a true existential risk. Simply grabbing resources is not enough to completely eliminate a society that is actively defending a fraction of those resources, especially if that society also has access to self-replicators/nanotechnology (blue goo) and other defense mechanisms.

If we assume that a very smart and malevolent human is designing this grey goo, I suspect they could make something world-ending.

I agree, conditional on the grey goo having some sort of intelligence that can thwart our countermeasures.

A parallel scenario occurs when a smart and malevolent human designs an AI (which may or may not choose to self-replicate). This post attempts to point out that these two situations are nearly identical, and that the existential risk does not come from self-replication or nanotechnology themselves, but rather from the intelligence of the grey goo. This would mean that we could prevent existential risks from replication by copying over the analogous solutions from AI safety: to make sure that the replicators' decision-making remains aligned with human interests, we can apply alignment techniques; to handle malicious replicators, we can apply the same plan we would use to handle malicious AIs.

Grey Goo Requires AI

Is it fair to say that this is similar to Richard Kennaway's point? If so, see my response to their comment.

I agree with you and Richard that nanotechnology still presents a catastrophic risk, but I don't believe it presents an existential risk independent of AI (which I could have stated more clearly!).

Grey Goo Requires AI

That's true. I guess I should have clarified that the argument here doesn't exclude nanotechnology from the category of catastrophic risks (by catastrophic, I mean things like hurricanes, which could cause lots of damage but could not eliminate humanity), but it does rule out nanotechnology as an existential risk independent of AI.

Lots of simple replicators can use up the resources in a specific environment. But in order to present a true existential risk, nanotechnology would have to permanently outcompete humanity for vital resources, which would require outsmarting humanity in some sense.
