Zach Stein-Perlman

Undergraduate at Williams. Currently thinking about the interaction between government and AI, especially how government could affect an intelligence explosion. I'm just starting to do research; comments and suggestions are greatly appreciated.

Some things I'd be excited to talk about:

  • What happens after an intelligence explosion
  • What happens if most people appreciate AI
  • International relations in the context of powerful AI
  • Policy responses to AI — what's likely to happen and what would be good

Comments

Omicron Variant Post #2

Thanks, Zvi, for these updates. They have quite high counterfactual impact in educating me, and presumably the same is true for many others.

Other recent blogposts for those who haven't seen them yet: Noah Smith and Scott Alexander.

Zach Stein-Perlman's Shortform

I agree that near-optimal is unlikely. But I would be quite surprised by 1%–99% futures because (in short) I think we do better if we optimize for good and do worse if we don’t. If our final use of our cosmic endowment isn’t near-optimal, I think we failed to optimize for good, and I would be surprised if it’s >1%.

Christiano, Cotra, and Yudkowsky on AI progress

since you disagree with them eventually, e.g. >2/3 doom by 2030

This apparently refers to Yudkowsky's credences, and I notice I am surprised — has Yudkowsky said this somewhere? (Edit: the answer is no, thanks for responses.)

Discussion with Eliezer Yudkowsky on AGI interventions

we don't have enough time

Setting aside this proposal's, ah, logistical difficulties, I certainly don't think we should ignore interventions that target only the (say) 10% of the probability space in which superintelligence takes longest to appear.

Split and Commit

I'm curious what examples you or others who found the opening examples distracting would prefer. Something like those examples is standard for describing moral progress, at least in my experience, so I'm curious if you would frame moral progress differently or just use other examples.

Split and Commit

Or! This idea sounds superficially reasonable and even (per the appendix) gets praise from a few people, but is actually useless or harmful. Currently working out a hypothesis for how that could be the case...

Study Guide

Thank you for writing this! I once thought about asking LW for something like this but never got around to it.

I'm an undergraduate; I expect to take several more late-undergraduate- to early-graduate-level math courses. Presumably some will turn out to be much more valuable to me than others, and presumably this is possible to predict better-than-randomly in advance. Do you [or anyone else] have thoughts on how to choose between math courses other than those you mention, either specific courses (and why they might be valuable) or general principles (and why they seem reasonable)? (I don't have any sense of what the math of agency and alignment is like, and I hope to get a feel for it sometime in the next year, but I can't right now — by the way, any recommendations on how to do that?)

Consequentialism may cost you

Consequentialism might harm survival

In general, the correctness of [a principle] is one matter; the correctness of accepting it, quite another. I think you conflate the claim that consequentialism is true with the claim that naive consequentialist decision procedures are optimal. Even if we have decisive epistemic reason to accept consequentialism (of some sort), we may have decisive moral or prudential reason to use non-consequentialist decision procedures. So I would at least narrow your claims to consequentialist decision procedures.

evolution as a force typically acts on collectives, not individuals.

I'm not sure what you're asserting here or how it's relevant. Can you be more specific?

Rationalism for New EAs

Interesting.

My hot take is that you might want to be careful about how much of Less Wrong you throw at people right away.

I hadn't thought about it this way before and don't have a great model of how new people might respond to LW. Would the same apply to SSC, or is Scott Alexander less "weird in a way that repulses a decent number of people"? (I'll strongly consider putting more emphasis on Scout Mindset-ish stuff regardless, and would appreciate suggestions for more readings "that teach[] a generalizable core rationality skill.")

Your Time Might Be More Valuable Than You Think

I mostly agree. Two thoughts:

  • Rather than thinking in terms of wages, w(t), I think we should just think in terms of time-value or marginal utility, u(t). Clearly everything you say applies to all value-we-can-get-from-time, not just wages.
  • Some of your conclusions (e.g., "if you would be willing to trade an hour for $1000 in the future [i.e., and gain the hour], you should also be willing to do so now") only apply when the following is true: for the rest of the person's life, their value-from-time depends only on (and is nondecreasing in) their cumulative time spent working in the past. This is a plausible approximation in many cases. But both of the following are plausible:
    • working/experience will have a greater effect on u in the future, after I'm better-credentialed (I'm currently an undergraduate; the same number of hours of experience counts for more if it's done in a higher-status position at a higher-status organization; first working in a lower-status way has some career benefits, but it's not like the next t hours of my work experience will have the same effect on my long-term prospects regardless of whether I get a low-status job now or a high-status job after college)
    • I will have higher direct time-value in the future due to reasons that are nearly independent of how I spend my marginal time now.

More generally, it seems sometimes true that e.g. doing 60 hrs/wk for 1 year is less valuable for u than 30 hrs/wk for 2 years (because value from work is generally a function not just of experience/productivity but also legibility-of-experience). If I could save up marginal time now to spend later, I would (not just because I expect some future time to be higher-leverage because of TAI, but also because I expect to have higher direct time-value in the future in a way that I largely can't affect by spending more time productively now); I'm not sure where I'd draw the line, but I'm pretty sure I'd save up time even if it cost 2 hours to give a single hour to future me.

(I'm not great at Markdown; please let me know if there's a way to make the above paragraph part of the second bullet point)
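
To put the 2:1 trade above in symbols (a minimal sketch, assuming u(t) is the marginal value of an hour of my time at time t, that marginal values are roughly constant over the hours traded, and that there's no discounting): giving up two hours now to get one hour at a future time t' is worth it iff

$$u(t') \cdot 1 > u(t_{\text{now}}) \cdot 2, \quad \text{i.e., } u(t') > 2\,u(t_{\text{now}}),$$

which is just the claim that my future marginal time-value exceeds twice my current marginal time-value.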
