rockthecasbah

Exactly, this is the ring everyone else is optimizing for. So it’s tough to get relative to the other interventions.

Bumble, Hinge and Tinder.

I averaged that last time I was single. Should be able to get back there.

There is a failure mode here of overinvesting in status signals and underinvesting in being a pillar of your friend group.

I already have a good "status" so it's not a priority anyway, relative to the other areas.

That's helpful, thank you.

Do you know a trustworthy and concise source about how to Keto? The time to find a non-terrible guide via google sucks.

Haha yeah status is sexy!

The main reason is just that status is ambiguous between a "trait" and a "proof". Status is attractive partly because mentally healthy, socially intelligent men rise in status faster. But there's also an element of status being intrinsically useful, because it's a resource for providing for a family.

The most efficient status-increasing interventions are all about presentation. I could get a White House job to increase my status, but that would be super hard work. Earning the respect of my friends and advertising my career successes would also increase my status, and it's way easier. So I'll address it in the "proofs" post.

This is an interesting essay and it seems compelling to me. Because I am insufferable, I will pick the world's smallest nit.

The Wright Brothers took 4 years to build their first successful prototype. It took another 23 years for the first mass manufactured airplane to appear, for a total of 27 years of R&D.

That's true, but artisanal airplanes were produced in the hundreds of thousands before mass manufacture. Some 200k airplanes served in WWI, just 15 years in. So call it 15 years of R&D.

Apologies if this has been said, but the reading level of this essay is stunningly high. I've read Rationality: A-Z and I can barely follow passages. For example:

This happens in practice in real life, it is what happened in the only case we know about, and it seems to me that there are deep theoretical reasons to expect it to happen again: the first semi-outer-aligned solutions found, in the search ordering of a real-world bounded optimization process, are not inner-aligned solutions.  This is sufficient on its own, even ignoring many other items on this list, to trash entire categories of naive alignment proposals which assume that if you optimize a bunch on a loss function calculated using some simple concept, you get perfect inner alignment on that concept.

I think what Yud means here is that our genes had a base objective of reproducing themselves. The genes wanted their humans to make babies that were also reproductively fit. But the "real-world bounded optimization process" produced humans that sought different things, like sexual pleasure, food, and alliances with powerful peers. In the early environment that worked, because sex led to babies, food led to healthy babies, and alliances led to protection for the babies. But once we built civilization, we started having sex with birth control as an end in itself, even letting it distract us from the baby-making objective. So the genes had this goal, but the mesa-optimizer (humans) was only aligned in one environment. When the environment changed, it lost alignment. We can expect the same to happen with our AI.

Okay, I think I get it. But there are so few people on the planet that can parse this passage.

Has someone written a more accessible version of this yet?

Okay, let's do that backwards planning exercise.

In the long run, I want to do my research but live a low-stress and financially comfortable lifestyle. The traditional academic path won't achieve that, because I would end up doing my research while leading a high-stress and financially fraught lifestyle. There are three possible solutions to the problem, in rough order of preference:

A. Pick a research agenda that is lucrative, so that I can supplement my income with lucrative consulting gigs and have a strong exit option

B. Learn to code and get a data science job, then do my research as a hobby

C. Get a government job related to my field (intelligence or aid)

Path A seems like the best one for both personal and EA reasons. Right now I split my time between writing on foreign investment and cabinet formation. But only the foreign investment work might pay the bills; the cabinet work ends with me in the brutal academic rat race. However, the foreign investment research might or might not succeed, depending on contextual factors like competition, my ability to build a brand, and the value of academic prestige in the field. So I should first try to figure out whether the investment-academia path is satisfying.

I want to find out if that works over the next 6 months or so while in my academic program.

If the returns are too small and the competition too stressful, I should pivot toward a programming career. It's a well-paid 40-hour-a-week industry, and I can do my research as a hobby for 8 hours a week. That sounds like a lovely life too. So if I pick that, I would deemphasize my research and focus on coding skills for interviews and on building career capital there.

I'm satisfied with that plan. The next question is, how do I stick to it? More on this later.
