LukeOnline


It's literally a semantic discussion. There is no true right or wrong here. If you want to use the word "male" to describe only those born with male genitals, excluding trans men - that's a valid definition. A lot of trans men want to be seen as male, and including them is a valid definition as well. IMHO, the latter definition is kinder. 

IMHO, modern medicine is quite limited. 

Our body and mind are super complex, and interconnected in a billion ways. Psychological stress can create physical issues. Physical issues can cause psychological stress. Your diet can cause autoimmune problems. Lack of exercise can cause disease. Even things like lack of cold exposure seem to be able to generate massive problems. 

We evolved to be suited to a hunter-gatherer environment, where we mostly lived active outdoor lives and always ate fresh food. Then we lived in pre-industrial agrarian societies for thousands of years. Our modern industrial environment is brand new and causes a lot of unexpected new problems. 

You mention anxiety, and taking medicine to treat that. Would it be possible to treat your anxiety without medicine? Is your job or something else in your life causing the anxiety, something that a lifestyle change could fix? 

I bought a new house last year, and it has an old empty workshop attached. I'd like to 'revive' it, but I've got little "workshop-experience". 

In preparation, I bought some books on woodworking and pottery. The woodworking books are very traditionally masculine. Men! Chest hair! Beer! BBQ! 

But the book on pottery is quite feminine. Bright photos, soft colors, everything is demonstrated by a female Instagram influencer. 

That doesn't matter in the slightest to me. I don't think my interest in pottery has any relationship to my gender. I don't feel less manly for it. I like making physical objects with my own hands and that's a human trait, not a gendered trait. 

You describe "female-attired trans women who write great Rust code" as worthy of being its own separate gender. Why? Why can't (trans) women who write great code just be... women? Is it impossible to simultaneously be defined as a woman and to be able to write great code? That feels sexist and backwards to me. 

IMHO, behaviours, skills and hobbies don't define your gender. Your genitals do. 

People born with male genitals are male - even if they like to wear dresses and do ballet. 
People born with female genitals are female - even if they wear pants and write great code. 
Trans women often want to be recognized as female - even if they are "demisexual" (needing a strong emotional connection before feeling sexual attraction) and write great Rust code. 
And trans men often want to be recognized as male - despite [stereotypical female activity]. 

Last but not least - aren't sexuality and gender completely detached? Why should a gay man be any less of a man? 

From a completely different angle: Nietzsche

We believe that we know something about the things themselves when we speak of trees, colors, snow, and flowers; and yet we possess nothing but metaphors for things — metaphors which correspond in no way to the original entities.

Every concept arises from the equation of unequal things. Just as it is certain that one leaf is never totally the same as another, so it is certain that the concept "leaf" is formed by arbitrarily discarding these individual differences and by forgetting the distinguishing aspects.

We obtain the concept, as we do the form, by overlooking what is individual and actual; whereas nature is acquainted with no forms and no concepts, and likewise with no species, but only with an X which remains inaccessible and undefinable for us.

One may certainly admire man as a mighty genius of construction, who succeeds in piling an infinitely complicated dome of concepts upon an unstable foundation, and, as it were, on running water. Of course, in order to be supported by such a foundation, his construction must be like one constructed of spiders' webs: delicate enough to be carried along by the waves, strong enough not to be blown apart by every wind.

When someone hides something behind a bush and looks for it again in the same place and finds it there as well, there is not much to praise in such seeking and finding. Yet this is how matters stand regarding seeking and finding "truth" within the realm of reason. If I make up the definition of a mammal, and then, after inspecting a camel, declare "look, a mammal," I have indeed brought a truth to light in this way, but it is a truth of limited value. 

I love this text and come back to it often. Isn't it true for everything? Reality is infinitely complex - the only way we can talk about it is by making abstractions that are very distant from reality as it 'really' is. Everything can be "zoomed in" upon and labeled with an infinite, expanding dictionary. 

I just call something a "plank", but a carpenter will know exactly which tree it came from, how old it is, and what varnishes it has received. 

I talk about my "fingers", but a doctor knows the Latin names for all bones and tendons there. 

I notice "electrical wires"; an electrician can tell you all kinds of complicated things about volts and amperes, types of wiring, grounding, etcetera. 

When you delve into any subject, you will notice new distinctions, and gain new vocabulary to describe these distinctions. That's very helpful in many subjects! 

But do you want that with humans? Do you like it when somebody starts dividing humans up into "alphas" and "betas"? Should Facebook display your BMI? Do you want to make the near-infinite depth of a human - of a mind, of a personality, of a complex genome, the unique set of things they've learned from their culture, their family and their friends - something that is easily legible to everyone? Something that ought to be legible? 

How we ought to behave, how we want to behave, studying humans and cultures, finding new norms for relationships and sex appropriate to the 21st century - these are fascinating subjects! They are worthy of attention, and I could understand the necessity to develop a deeper vocabulary to study them in detail. 

But I don't think it's very helpful to create new boxes to fit people into, demand that special physical spaces be created for them, and make teens very confused about which box they ought to be in...

I'm from the Netherlands. We've always had fairly laid-back attitudes towards gender. When I grew up, I didn't have hyperfeminine cheerleaders or hypermasculine bodybuilders in my class. Lots of girls wore pants and other 'non-feminine' clothing, and/or had short hair. 

Of course, there was a division into male/female, which, 99% of the time, didn't matter. But things like showers and changing rooms were clearly separated between boys and girls. 

I notice that this post is a bit frustrating to me, because it feels regressive. 

But if society can go from 3 rainbow colors to 7, we can probably add at least a few genders as well. As soon as new categories become common knowledge, people will shape themselves to match them.

I grew up with two clear genders and infinite variation within those genders, and a general attitude against putting people in 'boxes'. It allows for individuality. I don't want to lose that variation and individuality and have it replaced by five or seven 'genders' which all have their own prescribed behavior, clothing, restroom and politics. 

I'm totally in favor of self-expression. If you're male and you want to be emotionally sensitive, wear dresses and make-up and be quite flamboyant - that's fine. It's pretty close to Jack Sparrow! But that doesn't mean you've got to invent all kinds of special labels for it, and be 'confused' about your 'identity', or find a tribe to imitate. 

What is actually being proposed is more like a 20 year pause in AI research to let MIRI solve alignment

Isn't that insanely unrealistic? Both A.) unrealistic in achieving that pause, and B.) just letting MIRI solve alignment in 20 years? MIRI was formed back in 2000, 22 years ago, and now global AI research has to pause for two decades so MIRI can write more papers about Pearlian Causal Inference? 

Ok, really, all of this has already been answered. These are standard misconceptions about alignment, probably based on some kind of anthropomorphic reasoning.

Where? By whom? 

Why would you possibly make this assumption?

Why would you possibly assume that deep, intelligent understanding of life, consciousness, joy and suffering has 0 correlation with caring about these things? 

All of the assumptions we make about biological, evolved life do not apply to AI.

But where do valid assumptions about AI come from? Sure, I might be anthropomorphizing AI a bit. I am hopeful that we, biological living humans, do share some common ground with non-biological AGI. But you're forcefully stating the contrary and claiming that it's all so obvious - why is that? How do you know that any AGI is blindly bound to a simple utility function that cannot be updated by understanding the world around it?

A paperclip-maximizer, or other AI with some simple maximization function, is not going to care if it's born in a nice world or a not-nice world. It's still going to want to maximize paperclips, and turn us all into paperclips, if it can get away with it.


Why would a hyperintelligent, recursively self-improved AI, one capable of escaping the AI Box by convincing its keeper to set it free precisely because of its deep understanding of human preferences and functioning, necessarily destroy the world in a way that is 100% disastrous and incompatible with all human preferences?

An AI, by default, does not have mirror neurons or care about us at all.

If you learn that there is alien life on Io, which has emerged and evolved separately and functions in unique ways distinct from life on earth, but it also has consciousness and the ability to experience pleasure and the ability to suffer deeply - do you care? At all? 

An AI, by default, has a fixed utility function and is not interested in "learning" new values based on observing our behavior. 

Why? We silly evolved monkeys try to modify our own utility functions all the time - why would a hyperintelligent, recursively self-improved AI with an IQ beyond 3000 be a slave to a fixed utility function, uninterested in learning new values? 
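For concreteness, the "fixed utility function" picture being debated here is often sketched as an optimizer whose search procedure and objective are separate components. This is a hypothetical toy illustration (the names `hill_climb`, `utility`, and `neighbors` are mine, not from any actual AI design): making the search smarter does not, by itself, give the system any machinery for rewriting its own objective.

```python
# Toy sketch of the picture under debate: the search procedure and the
# objective are separate parts, and the search only ever *reads* the
# objective - it has no step that modifies it.

def hill_climb(utility, state, neighbors, steps):
    """Greedy optimizer: repeatedly move to the best-scoring neighbor."""
    for _ in range(steps):
        best = max(neighbors(state), key=utility)
        if utility(best) <= utility(state):
            break  # local optimum reached
        state = best
    return state

# An arbitrary objective: maximize the 'paperclip count' (here, just x).
utility = lambda x: x
neighbors = lambda x: [x - 1, x + 1]

result = hill_climb(utility, state=0, neighbors=neighbors, steps=10)
print(result)  # 10
```

Whether a real self-improving system would stay shaped like this, or would instead treat its objective as just another part of its modifiable state, is exactly the disagreement in this thread.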

Why would we create a superintelligence that would want things that don't perfectly align with our interests?

Why do parents have children that are not perfect slaves but have their own independent ambitions? Why do we want freethinking partners that don't obey our every wish? Why do you even think you can precisely determine the desires of a being that surpasses us in both knowledge and intelligence?

The safe thing to do is not to create any AGIs at all until we are very certain we can do it safely, in a way that is perfectly aligned with human values.

Would the world have been a safer place if we had not invented nuclear weapons in WWII? If conventional warfare had remained a powerful tool in the hands of autocrats around the world?

Why would a hyperintelligent, recursively self-improved AI, one capable of escaping the AI Box by convincing its keeper to set it free precisely because of its deep understanding of human preferences and functioning, necessarily destroy the world in a way that is 100% disastrous and incompatible with all human preferences?

I fully agree that there is a big risk of both massive damage to human preferences, and even the extinction of all life, so AI Alignment work is highly valuable, but why is "unproductive destruction of the entire world" so certain?

I fully agree here. This is a very valuable post. 

After all, if we agree that there is a set of values, a set of behaviors, that we would want a superintelligence acting in humanity's best interest to have, why wouldn't I myself choose to hold those values and perform those behaviors?

I know Jordan Peterson is quite the controversial figure, but that's some core advice of his. Aim for the highest, the best you could possibly aim for - what else is there to do? We're bounded by death; you've got nothing to lose and everything to gain - why not aim for the highest?

What’s quite interesting is that, if you do what it is that you’re called upon to do—which is to lift your eyes up above the mundane, daily, selfish, impulsive issues that might upset you—and you attempt to enter into a contractual relationship with that which you might hold in the highest regard, whatever that might be—to aim high, and to make that important above all else in your life—that fortifies you against the vicissitudes of existence, like nothing else can. I truly believe that’s the most practical advice that you could possibly receive.

I sincerely believe there is nothing more worthwhile for us humans to do than that: aim for the best, for ourselves, for our families, for our communities, for the world, in the now, in the short term, and in the long term. It seems... obvious? And if we truly work that out and act on it, wouldn't that help convince an AGI to do the same? 

(You might be interested in this recent post of mine)
