PashaKamyshev

I think this post suffers pretty badly from the typical mind fallacy. This thinking isn't alien to me; I used to think exactly like this eight years ago, but since getting married and having a kid I now disagree with basically every point.

One claim that is hopefully uncontroversial: Humans are not literally optimizing for IGF,

I think this is controversial because it's basically wrong :)

First, it's not actually obvious which definition of IGF you are using. For animals, the definition that might fit is "number of genes in the next generation." For humans, however, we care about both "number of genes in the next generation" and "resources given to the children." Humans can see one step ahead and know the rough prospects their children will have in the dating market. "Resources" here is not just money; it is also knowledge, beauty, etc.

Given this, if someone decides to have two children instead of four, this might just mean they simply don't trust their ability to equip the kids with the necessary tools to succeed. 

Now, different people ALSO have different weights for the quantity vs. quality of offspring. See Shoshannah Tekofsky's comment (unfortunately disagreed with) for the female perspective on this. Evolutionary theory might predict that males are more prone to maximize quantity and satisfice quality, while females are more prone to satisfice quantity and maximize quality. That is, "optimization" is not the same as "maximization." There can also be satisficing/maximizing mixes where each additional unit of quality or quantity still has value, but it falls off (see the toy sketch below).
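
To make the maximize/satisfice/mix distinction concrete, here is a toy sketch. It is purely illustrative: the function shapes, targets, and numbers are my own made-up assumptions, not anything claimed in the original post.

```python
import math

# Toy sketch contrasting three attitudes toward offspring quantity (n)
# and per-child quality/resources (q). All numbers are made up.

def maximizer_utility(n, q):
    # Pure maximizer: every extra unit of quantity or quality adds the same value.
    return n * q

def satisficer_utility(n, q, n_target=2, q_target=1.0):
    # Pure satisficer: value saturates once the targets are hit;
    # anything beyond the target adds nothing.
    return min(n, n_target) * min(q, q_target)

def mixed_utility(n, q):
    # Mix: additional units still add value, but the value falls off
    # (diminishing returns via log(1 + x)).
    return math.log1p(n) * math.log1p(q)

for n, q in [(2, 3.0), (4, 1.5), (10, 0.3)]:
    print(f"n={n}, q={q}: max={maximizer_utility(n, q):.2f}, "
          f"sat={satisficer_utility(n, q):.2f}, mix={mixed_utility(n, q):.2f}")
```

Only the pure maximizer keeps trading quality for quantity indefinitely; the other two stop (or slow down) once "enough" is reached, which is closer to how most people seem to behave.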

Would you give up your enjoyment of visual stimuli then, like an actual IGF optimizer would?

If you gave me a choice between having 10 extra kids with my current wife painlessly, plus sufficient resources to give them a good head start, I would consider giving up my enjoyment of visual stimuli. The only hesitation is that I don't like "weird hypotheticals" in general, and I expect "human preference architectures" to potentially not be as easily modularizable as computer architectures. Giving this up could also have all sorts of negative effects beyond losing the "qualia" of visualness, like losing the capacity for spatial reasoning. However, if the "only" thing I lose is qualia and not any cognitive capacities, then this is an easy choice.

But, do you really fundamentally care that your kids have genomes?

Yes, obviously I do. I don't consider "genomeless people" to be a thing, I dislike genetic engineering and over-cyborgization, and I don't think uploads are even possible.

Or, an even sharper proposal: how would you like to be killed right now, and in exchange you'll be replaced by an entity that uses the same atoms to optimize as hard as those atoms can optimize, for the inclusive genetic fitness of your particular genes. Does this sound like practically the best offer that anyone could ever make you? Or does it sound abhorrent?

This hypothetical is too abstract to be answerable, but if I were to offer an answer to a hypothetical with a similar vibe: many people do in fact die for potential benefits to the inclusive fitness of their families; we call them soldiers / warriors / heroes. Now, sometimes their government deceives them about whether or not their sacrifice is in fact helpful to their nation, but the underlying psychology seems easily consistent with "IGF optimization."

My point today is that the observation “humans care about their kids” is not in tension with the observation “we aren't IGF maximizers”,

I think this is where the difference between the terms "optimizer" and "maximizer" is important. It is also important to understand what sort of constraints most people in fact operate under. Most people seem to act AS IF they are IGF satisficers: they get up to a certain level of quantity/quality and seem to slow down after that. However, it's hard to infer the exact values, because very specific subconscious/conscious beliefs could be influencing the strategy.

For example, I could argue that secretly many people want to be maximizers, but this thing we call civilization is effectively an agreement between maximizers to forgo certain maximization tactics and stick to being satisficers. So people might avoid "overly aggressive" maximization because they are correctly worried it would be perceived as "defection" and end up backfiring. Given that the current environment is very different from the ancestral environment, this particular machinery might be malfunctioning, leading people to subconsciously perceive having any children as defection. However, I suspect humanity will adapt within a small number of generations.

Humans are not literally optimizing for IGF, and regularly trade other values off against IGF.

Sort of true. The main value people seem to trade off against it is "physical pain." Humans are also resource- and computation-constrained, and implementing "proper maximization" under heavy resource constraints may not even be possible.

Introspecting on my thinking before and after kids, I have a theory that the process of finding a mate prior to "settling down" tends to block certain introspection into one's motivations. It's easier to appreciate art if you are not thinking "oh, I am looking at art I like because art provides Bayesian evidence on lifestyle choices to potential mates." Thinking this way can appear low status, which is itself a bad sign. So the brain is more prone to lying to itself that "there is enjoyment for its own sake." After having a kid, the mental "block" is lifted, and it is sort of obvious that this is what I was doing and why.

I generally don't think LLMs today are conscious; as far as I can tell, neither does Sam Altman, but there is some disagreement. They could acquire some characteristics that could be considered conscious as scale increases. However, merely having "qualia" and being conscious is not the same thing as being functionally equivalent to a new human, let alone a specific human. The term "upload" as commonly understood means the creation of a software construct functionally and qualia-equivalent to a specific human.

  • a human brain in a vat wouldn't be so far from the experience of language models.

Please don't try to generalize over all human minds based on your own experience. Human experience is more than just reading and writing language. People have differing levels of identification with their "language center": for some it might seem like the "seat of the self," for others it is just another module, and some people have next to no internal dialogue at all. I suspect that these differences, plus cultural differences around "self-identification with linguistic experience," are actually quite large.

  • I personally want to maintain my human form as a whole but expect to drastically upgrade the micro-substrate beyond biology at some point

I suspect a lot of the problems described in this post also occur at the micro scale with that strategy.

Thanks for the first part of the comment. 

As mentioned in my comment above, the reason for mixing "can" and "should" problems is that they form a "stack" of sorts, where attempting to approximately solve the problems at the bottom makes the problems above harder, and verification is important. How many people would care about the vision if one could never be certain the process succeeded?

Fixed the link formatting and added a couple more sources, thanks for the heads up. The temperature claim does not seem unusual to me in the slightest. I have personally tried a relatively cold bath and noticed my "perception" alter pretty significantly.

The organ claim does seem more unusual, but I have heard various forms of it from many sources at this point. It does not, however, seem in any way implausible. Even if you maintain that the brain is the "sole" source of cognition, the brain is still an organ and is heavily affected by the operation of other organs.

There is a lot to unpack. I have definitely heard claims from leaders of the community to the tune of "biology is over," without further explanation of what exactly that means or what specific steps are expected to happen when the majority of people disagree with it. The lack of clarity here makes it hard to point to a specific claim of "I will forcefully do stuff to people that they don't like," but my simply saying "I and others want what we think of as 'humans' to actually keep on living" is met with some pushback.

You seem to be saying that the "I" or "self" of a person is somehow static through large possible changes to the body. On a social and legal level (family and friends recognize them), we do need a simple shorthand for what constitutes the same person, but the social level is not the same as the "molecular level."

On a molecular level, everything impacts cognition. Good vs. bad food impacts cognition; taking a cold vs. warm shower impacts cognition. If you read Impro, even putting on a complicated mask during a theater performance impacts cognition.

"I am me," whatever you think of "as yourself" is a product of your quantum-mechanical state. The body fights really hard to preserve some aspects of said state to be invariant. If the temperature of the room increases 1C nothing much might change, however, if the body loses the battle and your core temperature increases 1C, you likely have either a fever or heat-related problems with the corresponding impact on cognition. Even if the room is dusty enough, people can become distressed from the sight or lack of oxygen.

So if you claim that only a small portion of molecular information is relevant to the construction of the self, you will fail to capture all the factors that affect cognition and behavior. And only considering a portion of the body's molecules doesn't solve the physics problem of needing to obtain molecular-level information without destroying the body. You would also need to hope that the relevant information is more "macro-scale" than molecules to get around the thermodynamics issues. However, every approximation one makes away from a perfect simulation is likely to drift the cognition and behavior further from the original person, which makes the verification problem (did it actually succeed?) harder.

This is also why it's a single post. The problems form a "stack" in which fuzzy or approximate solutions at the bottom make the problems in the layers above harder.

Now, there is one particular piece of molecular-level information worth mentioning. A person's DNA is the most stable molecular construct in the body. It is preserved by the body with far more care than whatever we think of as cognition; how much cognition is shared between a newborn and the same person at 80? DNA is also built with redundancies, which means the majority of the body remains intact after a sample containing DNA is collected. However, I don't think "write one's DNA to the blockchain" is what people mean when they say uploads.

The general vibe of the first two parts seems correct to me. An additional point is that evolution's utility function of inclusive genetic fitness didn't completely disappear and is likely still a sub-portion of the human utility function. I suspect there is going to be disagreement on this, but it would be interesting to run a poll on the question and break the results down by whether respondents have kids.

Yes, I think we understand each other. One thing to keep in mind is that different stakeholders in AI are NOT utilitarians; they have local incentives they individually care about. Given that COVID didn't stop gain-of-function research, getting EVERYONE to care would require a death toll larger than COVID's. However, getting someone like the CEO of Google to care would "only" require a half-trillion-dollar lawsuit against Microsoft over some issue relating to their AIs.

And I generally expect those types of warning shots to be pretty likely, given how gung-ho the current approach is.

I am mostly agreeing with you here, so I am not sure you understood my original point. Yes, reality is giving us things that, for reasonable people such as you and me, should be warning shots.

Since a lot of other people don't react to them, you might become pessimistic and extrapolate that NO warning shot is going to be good enough. However, I posit that SOME warning shots are going to be good enough. An AI-driven bank run followed by an economic collapse is one example, but there could be others. Generally, I expect that when warning shots reach the level of "nation-scale" socio-economic problems, people will pay attention.

However, this will happen before doom.
