LESSWRONG
LWLW — Comments (newest first)
LWLW's Shortform
LWLW · 26d

Thank you so much! I will contact her.

LWLW's Shortform
LWLW · 1mo

I am pretty good at math. At a T20 math program I was chosen for special mentorship and research opportunities over several people who made Top 500 on the Putnam, because I was deemed “more talented” (as nebulous as that phrase is: I was significantly faster in lectures, digested graduate texts much more quickly, and could solve competition-style problems they couldn’t). My undergrad was interrupted by a health crisis, so I never got a chance to actually engage in research or dedicated Putnam prep, but I believe most (maybe all, if I’m being vain) of my professors would have considered me the brightest student in my year. I don’t know much about programming or ML at this point, but I am confident I could learn. I’m two years into my undergrad and will likely be returning next year.

LWLW's Shortform
LWLW · 1mo

I’m weighing my career options, and the two issues that seem most important to me are factory farming and preventing misuse/s-risks from AI. Working for a lab-grown meat startup seems like a very high-impact line of work that could also be technically interesting. I think I would enjoy that career a lot.


However, I believe that s-risks from human misuse of AI and neuroscience introduce scenarios that dwarf factory farming in awfulness. There are lots of incredibly intelligent people working on how to align AIs to who/what we want, but I don’t think there’s nearly the same amount of effort going toward the coordination problem and preventing misuse. So naturally, I’d really like to work on this, but I don’t even know how I’d start tackling it. It seems much harder and much less straightforward than “help make lab-grown meat cheap enough to end factory farming.” Any advice would be appreciated.

[This comment is no longer endorsed by its author]
Should we aim for flourishing over mere survival? The Better Futures series.
LWLW · 1mo

What do you mean by “solve alignment”? What is your optimal world? What you consider “near-optimal flourishing” is likely very different from many other people’s ideas of it. I think people working on alignment are punting on this issue for now while they figure out how to implement intent and value alignment, but I assume there will be a lot of conflict over what values, and whose, a model will be aligned to if/when we have the technical ability to align powerful AIs.

Saying Goodbye
LWLW · 1mo

I think the woman you met on Feeld was engaging in wishful thinking. I do not understand the line of reasoning behind the conclusion that the concentration of power will stop at “people who work at a leading AI lab.” Why would it stop there?

Estrogen: A trip report
LWLW · 3mo

But haven’t you read about the BSTc findings? It’s a sexually dimorphic region deep in the brain, and trans women’s BSTc regions were similar to cis women’s, while trans men’s were similar to cis men’s. The finding was controlled for HRT as well.

There’s no solid proof for it yet, but the idea that something went wrong during fetal development, with the body masculinizing while the brain feminized (or vice versa), makes the most sense to me.

Estrogen: A trip report
LWLW · 3mo (edited)

i experienced a psychotic break that ended about 8 months before i started HRT, and i did notice some psychological changes after i started estrogen/spiro (not any psychotic symptoms though, which i am much more familiar with personally than i'd like to be). in particular my ability for self-care drastically improved (taking a shower every day became significantly easier after about 1.3 months on HRT) and my emotional regulation improved to the point that i became capable of holding a job. granted, the changes could have been primarily related to my brain continuing to recover from psychosis, but i guess i'm curious if you really had zero psychological changes. did you not feel slightly less anxious, or did you not sleep better once you started?

i guess i'm just curious because one would think that if trans women's brains were closer to cis women's than cis men's brains are, then pre-HRT trans women would experience psychological symptoms similar to those of cis women with elevated testosterone.

Eli's shortform feed
LWLW · 3mo

Not a real superintelligence because it can’t even understand the spirit of my question.

Eli's shortform feed
LWLW · 3mo

This isn’t the most philosophically sophisticated idea, but basically: the idea that the universe was created by “something” that desired a certain evolution of the universe, as opposed to the more popular idea that the universe just sprang into existence randomly.

Basically, proof of some sort of god. I wish I found STEM interesting, but the main pseudo-intellectual interests bouncing around my head are existential questions. Answering them is what I would be most excited about an ASI coming into existence for. I think most people’s most burning question, if they were talking to a superintelligence, wouldn’t be “is the Riemann Hypothesis true?” It would be “is there a god? What was it thinking when it made the universe this way?”
