robm

Comments
There should be more AI safety orgs
robm · 2y · 53

I have similar feelings; there's no clear path for someone in an adjacent field. I chose my current role largely based on the expected QALYs, and I'd gladly move into AI Safety now for the same reason.

This post gives the impression that finding talent is not the current constraint, but I'm confused about why the listed salaries are so high for some of these roles if the pool is so large.

I've submitted applications to a few of these orgs, with cover letters that basically say "I'm here and willing if you need my skills." One frustration is recognizing Alignment as our greatest challenge while having no path to go work on it. Another is that the current labs look somewhat homogeneous and a lot like academia, which is not how I'd optimize for speed.

We don’t trade with ants
robm · 3y · 30

I once came home to find ants carrying rainbow sprinkles (left out from cake-making) across my apartment wall. I thought it was entertaining once I understood what I was seeing.

Alexander and Yudkowsky on AGI goals
robm · 3y · 10

There's a difference between "what would you do to blend apples" and "what would you do to unbox an AGI". It's not clear to me whether it's just a difference of degree or something deeper.
