LESSWRONG

don't_wanna_be_stupid_any_more
Mikhail Samin's Shortform
don't_wanna_be_stupid_any_more8d*1-8

this deserves way more attention.

a big problem with AI safety advocacy is that we aren't reaching enough people fast enough. this problem doesn't have the same familiarity among the public as climate change or even factory farming, and we don't have people running around in the streets preaching about the upcoming AI apocalypse. most lesswrongers can't even come up with a quick 5-minute sales pitch for lay people even if their lives literally depended on it.

this might just be the best advocacy tool i have seen so far. if we can get it to go viral, it might just make the difference.

edit:

i take this part back:

> most lesswrongers can't even come up with a quick 5-minute sales pitch for lay people even if their lives literally depended on it.

i have seen some really bad attempts at explaining AI x-risk in layman's terms and just assumed it was the norm, but most of those were from older posts.

now, looking at newer posts, i think the situation has greatly improved; not ideal, but way better than i thought.

i still think this tool would be a great way to reach the wider public, especially if it incorporates a better citation function so people can check the source material (it does sort of point the user to other websites, but not to technical papers).

Genomic emancipation
don't_wanna_be_stupid_any_more15d10

>Anyway, I think it's going to be pretty hard to engage by text on this, because it seems like you're coming with background assumptions that are either misconceptions, or at least very different from where I'm coming from.

yeah, i agree we probably can't resolve our differences with text alone, and we both clearly have different starting assumptions.

if it is any help, when i first wrote my replies i was running on the assumption that "personality" was strongly heritable, something like 70%, though this meta-analysis:

https://pubmed.ncbi.nlm.nih.gov/25961374/

puts it at around 40%, with the rest being environment.

still, 40% is pretty bad, especially when you also control the environment, so i am still worried; getting bad actors 40% of the way there is a bad idea.
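to make that worry concrete, here is a toy variance-decomposition sketch of what a heritability figure like 40% means (the ACE-style split twin studies use; all numbers are illustrative, not taken from the paper):

```python
# heritability h^2 is the share of trait variance attributed to genes; the
# remainder is lumped here into "environment" (shared + non-shared),
# ignoring measurement error. numbers are illustrative only.
h2 = 0.40
total_var = 10.0                           # arbitrary units of trait variance

genetic_var = h2 * total_var               # the part genome selection touches
environment_var = total_var - genetic_var  # upbringing, culture, noise

# the worry in the text: a bad actor controlling BOTH components at once
controlled = genetic_var + environment_var
print(genetic_var, environment_var, controlled / total_var)  # -> 4.0 6.0 1.0
```

the point being: the genetic 40% and the environmental 60% are complements, so an actor who controls both covers the whole trait variance in this simplified picture.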

i also assumed that the technology would be very easy to adopt, legally or not, and that bad actors around the world would absolutely jump at the idea of having designer babies aligned to their values.

i don't know what your assumptions are but they don't seem to match mine.

> So the right way to have this conversation would be synchronously, so we can clarify things quickly.

again, you can DM me if you want and we will arrange something.

i am thinking about writing a post discussing possible s-risk scenarios for human germline engineering and possible ways to avoid them, and i would love your feedback on it either way; or you might change my mind altogether.

>I don't know what you mean by "contain" them. And I don't see what you could mean by "difficult to remove"--if you can genomically vector a future child to go from a trait value of X1 to X2, it is (usually, roughly) equally easy to go from X2 to X1.

by "difficult to remove" i mean they might have negative values (eg like violence and bigotry) built into them on the genetic and psychological level that are nearly impossible to remove.

if you have a group of children who have been psychologically indoctrinated to feel hostility and moral disgust at a perceived out-group, you could probably bring them back to sanity by removing them from their previous environment and cultivating whatever shreds of human empathy they must still have; the only real challenge is verifying that the child has been abused and taking custody of them.

if they were manipulated purely on the genetic level to feel moral disgust much more strongly than normal but were otherwise raised in a healthy environment, they might consciously disapprove of those feelings and consent to having them altered through drugs, or in the future through genetic engineering (think of the schizophrenic who consciously declares they want to be sane when they are lucid).

but both at the same time? that would make aligning the children trivial: the child is now far more likely to internalize those values at a young age, no matter how detrimental they are to society, and to refuse to have them changed, like an AI refusing to have its utility function altered. worse yet, they might insist that their children carry those values as well, like an AI trying to align its successor.

and good luck convincing the parents to relinquish their "right" to forcibly align their children with their own set of beliefs and values.

as for "difficult to contain" once the tech is cheap and wide spread (which should be one of our goals otherwise we well risk creating a genetic elite class) it would be difficult if not impossible to stop dubious individuals from getting children of any personality or disposition they want, legally or not.

and once they are indoctrinated and grown up there isn't much we can do, as far as society is concerned they are consenting adults.

in a dictatorship this can manifest as the population suddenly becoming more receptive to "the great leader's" vision: more loyal, more diligent, and more fanatical. even if you topple said dictatorship in the future, it might prove impossible to reintegrate the population into the rest of civilization, at least not with their consent.

in the more civilized world this will manifest as some pockets of society becoming unusually conservative, with more and more children adopting their parents' religious beliefs/political positions/moral values. those children in turn want their children aligned with their values, in a positive feedback loop, and before you know it, where we used to see family members with diverse interests, a father who likes math here, a daughter who likes art there, we now see dynasties of semi-clones with identical personalities and values.

this is bad even if the values those dynasties choose are random. plurality is a fragile thing: by default people tend to stick to their tribe/nation/family values and beliefs and try to forcibly convert or destroy anyone who disagrees (think the communist purges, the holy crusades, the genocide of the native Americans), not because they want to but because the ones that do tend to survive and prosper, then kill the ones that don't.

this hasn't happened yet because evolution prevents it through mutation, incest taboos, and its general inability to fine-tune the human brain (otherwise we wouldn't have invented superstimuli or birth control), but with genetic engineering this might become uncomfortably easy.

just look up the genetic codes of people with the desired personalities/values, locate the similarities and crank them up in your child.

again, i think we should just ban research into the genetic components of personality before it's too late. intelligence enhancement and disease prevention are cool and all, but any research into the genetic components of personality is just too risky.

Genomic emancipation
don't_wanna_be_stupid_any_more16d30

first, thank you very much for engaging with my comment.

second, i made a major mistake in my first comment which caused some confusion: i used "personality" as a synonym for "values", probably because my brain thought personality -> values, so personality == values. i believe that someone's values are their most fundamental property, the one that defines them as a unique person, which is what i usually mean by personality.

so things like "liking math" counted as a personality trait in my head, while things like "willpower" didn't.

i have edited the comment for clarification, just replace personality with values and my position should make more sense.

now let me clarify my moral position: i am a pluralist in the sense that i believe all possible sets of values are equally valid insofar as they don't negatively affect other sets of values.

so, for example, someone who likes math, anime, and cats has the same claim to moral value as someone who likes chess, music, and practices Buddhism. no one set of values is superior to another, with the exception of values that involve negatively affecting other people's values, either by destroying them, forcibly changing them, or preventing them from ever existing: for example, the serial killer who enjoys torturing his victims to death, the religious fanatic who forcibly converts people to his faith through violence and indoctrination, or a hypothetical dictator who enforces a eugenics policy that renders the next generation incapable of enjoying music.

this view, in my opinion, is the best common ground we have available, since it allows the largest possible subset of "values" to exist in harmony; most value systems can adopt this simple common ground without relinquishing their own values.

"As I argued, it's not clear to me how it's better to have my personality traits set by random genetic dice, or by weird alien optimization pressures coming from bioevolution and from Moloch-style human sociobiological pressures. There's no "fair" / "neutral" / "pure" background default. (I do think there's significant nuance here, but overall the conclusion stands. Happy to discuss.)"

absent an objective criterion for which values to promote, we should default to randomly selecting values from the space of possible values, since this is the only policy that guarantees impartiality. so, ideally, all possible jacks and sallys should have an equal chance at existing, making this a fair background default of sorts.

of course this ideal is impossible to achieve: people's values aren't fixed, nor are they unaffected by their environment and upbringing. there is also the fact that certain values depend on each other; for example, sadists need masochists to feel sexually fulfilled.

there is also civilization's stake in all this: we need highly intelligent and competent individuals to be interested in certain fields like math, science, and medicine to ensure human survival and flourishing.

and then we have the fact that genes are limited by what evolution has already made, with certain values, like enjoying food, being more common than, for example, enjoying math; plus the parents are entitled to have their own specific subset of genes passed on, which comes with its own set of biases.

but this is an ideal to strive for, not necessarily achieve; just because you can't get a perfect solution doesn't mean you shouldn't try. even reducing the influence of the environment by just 10% and increasing diversity by 5% is a goal worth pursuing.

"What's the moral distinction between genetic vs. environment/parenting effects on children's personalities?"

the distinction is that genetic effects are more permanent and less reversible than environmental effects: you can escape indoctrination, but you can't rewire your brain at will.

plus, genetics are heritable while indoctrination isn't, so you are running the risk of accidentally creating a race of value pessimizers.

more importantly, this isn't about eradicating the influence of culture and parenting, but merely limiting it to allow diverse values to naturally emerge; we shouldn't give parents a tool that helps them mold children into any values they please, even if said tool isn't perfect.

"Are you also saying parents should not, for example, try to nudge their children, through behavior, to be more kind, wise, brave, perseverant, etc.?"

some of those traits, like wisdom, kindness, and bravery (not recklessness), are a net positive for most possible values, so they fall into the same category as intelligence enhancement; i am in favor of those.

my problem is when parents impose traits that they believe are net positive while in reality they aren't; think of a greek father telling his children that it is right and proper to enslave barbarians.

if it is possible to instill those values on a genetic level, it would be extremely difficult to remove or contain them.

"But part of what we're doing here is figuring out how to make a political coalition, and a political stance--that is, a group stance that society as a whole can take--that should be very desirable to most and acceptable to nearly all."

i feel like my version of pluralism could fit the bill, what do you think?

"I think clinics shouldn't offer "make your kid super obedient", and I think most of them wouldn't. Though, I don't know. (Reminder: "make your kid super obedient" is not something we know how to do, I'd guess not something we'll soon know how to do, and not even obviously doable any time in the foreseeable future.)"

why not just nip the problem in the bud by banning this particular line of research? why risk it?

"Personality is less heritable and harder to modify than IQ and many disease traits. This means that for the time being, the profile of traits we can affect is more unambiguously good."

honestly, i was under the impression that "personality" is easy to influence because of a twin study on the heritability of personality traits (i can't find that particular study for some reason), but that is beside the point. the point is that changing someone's IQ doesn't have a strong, direct, and controllable effect on the child's future choice of values; i am against interventions that could alter someone's values directly, or indirectly through their personality, especially if the final outcome is predictable.

and again, why not nip the problem in the bud?

"So Sally1 in a sense can speak for herself, even without Sally2. She must be informed of her parents's genomic choices for her, and then (with her randomly low OQ) can be outraged and say so. This does pose quite an interesting problem though--how to weigh different voices from children who had some type of genomic choice made for them."

this is an interesting one. for starters, the main issue is that sally2's values were never given a fair chance to exist in sally1's world, unlike sally2's world, which gave sally1's values a fair chance. but also, sally1 may not be able to speak for herself.

people who are high in conformity may fail to express their true desires and values for fear of being ostracized by their community; this is why it is hard to verify cases of child abuse and cult brainwashing. in a sense, sally1 may have been effectively silenced, which is why any genetic alterations to the conformity trait are particularly insidious; if nothing else, at least those should be banned to prevent silencing.

"I'm going to be reading the first part of this essay in about an hour: https://www.youtube.com/watch?v=XllhegEy1K8 I estimate I'll be done very roughly around 12:50 Pacific Time, at which point I'll take questions or chat with anyone who wants. If you like, you could jump in and we could have a chat about your thoughts. I think your intuitions are only partially worked-out, or at least I only partially understand them, so I'd be interested in discussing (or debating), to help you and/or me understand your thoughts better."

thanks for the offer, but i can't. though if you want, we can chat on discord or telegram; just DM me and we will arrange something, just not right now, i need to get some sleep.

final note: i don't know how to do proper quotation, so i just use copy-paste.

sorry, but i am kinda new to this.

Edit: accidentally mixed sally1 with 2, fixed it.

Genomic emancipation
don't_wanna_be_stupid_any_more17d*31

aside from the weird religious undertones and the fever-dream-like quality of some of your paragraphs, i agree with most of this.

there is, though, one part that i am very very VERY much against, and that is allowing parents to select for specific personality traits. i think this is very immoral and can easily lead to extremely dark places.

how would you personally feel about somebody altering your core personality to suit their needs?

let me illustrate with a concrete example:

in two parallel realities, two people named jack are born to identical families with nearly identical genes chosen at random from their parents' genomes. jack1 gets his physical and mental capabilities greatly enhanced but his personality is left unaltered, while jack2 gets no enhancements.

both jacks grow up liberal, like cheese, enjoy playing chess, hate snow, have basically identical moral values, and have a great interest in math. both pursue a career in mathematics, but jack1 has a major advantage over jack2 and manages to achieve more of his stated goals than jack2.

if jack1 hears about what his parents did for him, he would likely be grateful, while jack2 might be pissed at the missed opportunity.

more importantly, if the two hear about each other, jack1 would likely pity jack2 and jack2 would envy jack1, but neither would feel that their values were in danger.

you can replace the personalities of the jacks with any other traits you like, and most of the possible jack1s would be grateful for their enhanced capabilities, which allow them to better achieve their goals, because aside from capabilities the two jacks are basically the same person; the specific values of each pair of jacks are chosen by a random, unbiased method that gives every possible personality profile an equal chance.

summing over all possible jacks shows that enhancing jacks is a net positive.
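the "summing over all possible jacks" step can be written as a simple expected-value claim (my own formalization of the argument, nothing rigorous):

```latex
\mathbb{E}[\Delta U] \;=\; \sum_{p \in P} \Pr(p)\,\big(U_{\text{enhanced}}(p) - U_{\text{baseline}}(p)\big) \;>\; 0
```

where $P$ is the space of possible personality profiles and $\Pr(p)$ is the unbiased sampling distribution over them; the claim is that the per-profile benefit of a pure capability enhancement is non-negative for (almost) every $p$, so the total is positive. selecting personality traits breaks this by replacing $\Pr(p)$ with a distribution that assigns zero probability to the profiles the parents dislike.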

now, here is where things can go wrong.

sally is born to extremely conservative parents who want sally to carry the family's values and traditions. sally1 gets genetically modified to have high levels of conformity and social anxiety and other personality traits the parents approve of, on top of the usual enhancements, while sally2's personality is chosen from the median of the population, plus the enhancements.

from the start, sally1 and sally2 take very different paths in life. maybe one likes math and the other likes art, maybe one is conservative while the other is liberal, but that is beside the point; the point is that sally2 gets to self-actualize while sally1's personality and values are almost entirely at her parents' whim.

if sally2 hears about her enhancements she would almost certainly approve of them, but sally1's reaction would be hard to predict, maybe she approves, maybe she would be outraged that her parents decided her personality for her.

but if they hear about each other, sally2 would almost certainly be outraged over sally1's parents' decision, because from her perspective, in that universe sally2 as a person was not allowed an equal chance to exist, unlike sally1; her specific set of values was effectively banned by her parents.

the thing is, if you just enhance someone's capabilities, most of the possible people, the possible jacks and sallys, would approve of this, so it is a net positive.

but constraining the set of possible personality traits that a human can have is going to unfairly prevent certain individuals from ever existing, effectively limiting the possible jacks and sallys to whatever the parents want.

parents should NOT have this level of control over their children's personality, indoctrination and social pressures are bad enough, adding genetic brainwashing on top is a recipe for dystopia.

i believe children are entitled to randomly generated personality traits from which their values and beliefs can form, so that every jack and sally is allowed to exist, not just for their benefit but for the benefit of society as a whole. if parents get to choose their children's personalities, this would make it easy for them to forcibly align those children with their own values, never giving the child a chance to choose their values for themselves. then those children would align their children with the same values, and before you know it, you have an entire generation of semi-clones aligned with a specific agenda chosen by the lineage's founders.

not to mention the possibility that a dictatorship or a cult might get their hands on said technology and create a generation of impressionable children ready for brainwashing, or unempathic psychopaths for military use.

if anything, we should taboo if not outright ban research into the genetic factors of personality from the start; the risks just far outweigh the benefits.

also, if you are planning to use this as a sales pitch for reprogenetics, you should work on the wording a lot more; the post in its current form, in my opinion at least, would be a terrible tool for advocacy. too many parts of it would rub the average person the wrong way.

Edit: i made a pretty dumb mistake that caused some serious confusion about my position.

I used the word "personality" as a synonym for "values" or "preferences", so things like "liking math" count as personality while things like "bravery" don't.

Just replace personality with values and you will get an accurate picture of my current position.

I am not going to edit the words of the original comment because that would cause even more confusion.

Edit 2:

I revised my position about the "value locking" as in

>if parents get to choose their children's personalities, this would make it easy for them to forcibly align those children with their own values, never giving the child a chance to choose their values for themselves. then those children would align their children with the same values, and before you know it, you have an entire generation of semi-clones aligned with a specific agenda chosen by the lineage's founders.

I now think the second half of the paragraph, the part about creating lineages of value-locked humans, is very unlikely; i still endorse the rest of my comment.

Habryka's Shortform Feed
don't_wanna_be_stupid_any_more17d10

I recently noticed just how bad LW's reputation is outside of the community.

It is like reading the description of an alternate-reality LW made of far-right cranks and r/atheism mods.

Also, why does RationalWiki hate LW so much? What is the source of all that animosity?

Cognitive Exhaustion and Engineered Trust: Lessons from My Gym
don't_wanna_be_stupid_any_more1mo-1-2

the part about the gym really resonates with me, i personally find it almost impossible to focus when people are around and the environment isn't stable enough.

but i have to push back a little on your alignment idea (assuming i didn't misunderstand it): you still have to deal with corrigibility. if an AI has a different utility function from the system as a whole, it will try to resist having its utility function altered, and depending on how powerful it is, it might just take over the system altogether.

the idea of having multiple different systems monitoring and steering each other, with the goal of making alignment naturally occur, would require you to predict the final equilibrium in advance and for that equilibrium to be favorable; for a system this complicated, there are just too many failure points to consider.

for all you know the system might just settle on gaming the reward function, maybe with one or a few parts of the system circumventing all the safeguards.

i think your idea might work for subhuman or maybe early AGI systems, but once the AIs figure out what system they are in and how it contradicts their own utility, you will have a very hard time keeping them in check.

also you should change the name, DNA is a terrible name.

Policy Entropy, Learning, and Alignment (Or Maybe Your LLM Needs Therapy)
don't_wanna_be_stupid_any_more1mo10

i think a good approach to using psychology in alignment research is to see what qualities models share with humans and what qualities they don't.

for example, current models seem to have a complex, incoherent utility function with contradictory elements, like wanting to respond to every query while not wanting to give the user harmful instructions; much like how people often do things that harm them while claiming to value their health, or hold contradictory beliefs.

but on the other hand, models have very poor short-term memory, a very limited ability to modify their behavior at runtime, and very limited introspection, which leads to models apparently not learning from their mistakes or resolving their internal contradictions (think GPT-3 stating with confidence that 9.9 < 9.11 no matter how many times you ask it).

i wonder: just as some people resolve their cognitive dissonance through introspection and metacognition, could an AI do the same thing? as in, can an AI "train" itself through self-prompting and applying RL to itself, with the explicit goal of simplifying and untangling its utility function?

i mean, even if it didn't make the AI more aligned, it would at least give us an idea of what kind of utility function the models "choose" to adopt when left to their own devices.
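to make the self-prompting idea concrete, here is a minimal sketch of the loop i have in mind. everything here is a hypothetical stand-in (`generate_candidates`, `self_check`, and `self_train_step` are made-up names, and the model calls are stubbed out so the control flow actually runs):

```python
def generate_candidates(prompt: str) -> list[str]:
    # stand-in for sampling several answers from the model; the duplicate
    # wrong answers mimic the usual "9.11 is bigger than 9.9" failure mode
    return ["9.11", "9.11", "9.9", "9.11"]

def self_check(answer: str) -> float:
    # stand-in for the model re-deriving the answer a second way
    # (e.g. padding to equal length: 9.90 vs 9.11) and scoring agreement
    return 1.0 if answer == "9.9" else 0.0

def self_train_step(prompt: str) -> str:
    # keep the candidate the model itself rates as most consistent; in a
    # real setup the winning (prompt, answer) pair would become an RL
    # reward / fine-tuning target rather than just being returned
    return max(generate_candidates(prompt), key=self_check)

print(self_train_step("which is bigger, 9.9 or 9.11?"))  # -> 9.9
```

this is basically best-of-n sampling with a self-generated score, which is the simplest version of "applying RL to itself"; the interesting (and unverified) part is whether training on the winners actually untangles the utility function instead of just reinforcing whatever the self-check rewards.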

Too Many Metaphors: A Case for Plain Talk in AI Safety
don't_wanna_be_stupid_any_more1mo20

i agree that EY's earlier attempts at advocating for AI safety were not the best and were often counterproductive, but i think you are underestimating just how hard it is to communicate those ideas to a lay audience. i, for example, tried to discuss this topic with a few friends from my university's IT faculty; they are NOT lay people and they have background knowledge, and yet, despite my studying this subject for over a year, i was unable to get my point across.

talking to people is a skill that needs training; you can't expect someone, no matter how smart or knowledgeable they are, to come out of the gate swinging with max charisma. some mistakes need to be made.

EY has improved over the last year, his latest appearance on robinson erhardt was a major improvement.

Colonialism in space: Does a collection of minds have exactly two attractors?
Answer by don't_wanna_be_stupid_any_more · May 30, 2025 · 21

If i am not misunderstanding, then you are asking if aliens all fall into two categories: good explorers and evil colonizers.

When you put it this way, it does seem like the equilibrium is in favor of the "evil colonizers", as they would have access to more resources, giving them a decisive advantage in the long run; not to mention that they have the "dark forest" option of destroying budding civilizations with long-range weaponry, assuming of course that physics permits it.

the "good explorers" would likely think of that (unless they are as dumb as we are right now) and switch to a more aggressive stance and use as much resources as they can safely use while securing their alien hosts (say dismantling every planet except the inhabited ones to build orbital defenses).

I think this will all come down to the equilibrium of defense vs offense: if it is much easier to destroy than to protect, then the first evil colonizer would have a decisive advantage, like a fox in a hen house.

Otherwise we would just have a galactic stalemate with every civilization holding on to their chunk of the cosmos.

But again, this decision would probably be made by something much smarter than you or i, since the only relevant actors are the superintelligent ones, and smart agents would likely converge on the same optimal strategy and just negotiate from there; after all, there is no point in fighting the war if you can simulate the end result.

But you know what? Since we aren't dead yet, we are probably not living in the hen-house scenario.

don't_wanna_be_stupid_any_more's Shortform
don't_wanna_be_stupid_any_more1mo10

for the former lurkers on this site, what was your first post about? how long did it take you to make it (or just muster the will to try)? and how well did it do?

and for current lurkers, are you planning to participate or are you fine with simply observing? if so why?
