This is the kind of content I've missed from LW in the past couple of years. Reminded me of something on old LW a while back that is a nice object level complement to this post. I saved it and look at it occasionally for inspiration (I don't really think it's a definitive list of 'things to do as a superhuman', or even a good list of things to do at all, but just as a nice reminder that ambitious people are interesting and fun):
(Not sure who the author is, if anyone finds the original post please link to it! I'll try to find it when I get the time)
Looks like it's from here:
I also distinctly remember that post.
I exhaled sharply through my nose at the irony of this one:
I do strongly recommend at least visiting the wilderness, and spending time moving around in it. Particularly at night. Walking around in the woods is one of the most impactful experiences I have had of noticing new details, while having a clear memory of not noticing those details before, in a way which was immediately useful.
Hello, my values of a decade ago, it's so nice to see you publicly documented! In retrospect, the level of paranoia imbued here will serve you well against incentive hijacking, and will be a foundation stone of goal stability.
There is one particular policy here where my thinking has changed significantly since then, and I'd love to check against Time whether it makes sense, or whether my values have simply shifted:
| Reject invest-y power. Some kinds of power increase your freedom. Some other kinds require an ongoing investment of your time and energy, and explode if you fail to provide it. The second kind binds you, and ultimately forces you to give up your values. The second kind is also easier, and you'll be tempted all the time.
| Optimization never stops. Avoid one-time effort if at all possible. Aim for long-term stability of the process that generates improvements. There is no room for the psychological comfort of certainty.
So, the operative word above is "freedom" (personally, I've used "possibility space maximization"), and it's super useful to run a conceptually exhaustive search across surface-y options. But.
You probably have goals of interest that you wish to achieve (e.g. "long-term future of humanity"). Some of these might require banging at stuff for an extended period of time. You have behaviours (e.g. your meta-policies) which you carry out over an extended period of time. Whether you recognize it as such or not, you are also vesting into these; and by way of the forgetting curve and blog readership, they also require ongoing maintenance. And yes, future technological change might arrive that makes them obsolete, and forces you to decide between "your values" & "rolling with the changes".
So, my counter to this is: _Anything which does not take the passage of time into consideration gets eaten by it._ Your Time is a super scarce resource, probably the scarcest of them all. One way to turn this liability into an asset is by vesting into stuff (projects, startups, skills, people, ideas, what have you) and riding the compounding interest across time. This is, to my knowledge, the only way one can scale scarce resources into epic levels of task-specific utility.
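To make the compounding point concrete, here's a toy sketch; the effort units and the 20% growth rate are my illustrative assumptions, not anything from the comment itself:

```python
# Toy model: one unit of effort per year for ten years.
# Linear: each year's effort pays off once and is then forgotten.
# Compounding: effort is added to a maintained stock (a skill, project,
# relationship) that grows by `rate` per year. All numbers illustrative.

def linear_returns(effort_per_year, years):
    # Each year's effort pays off once; nothing builds on it.
    return effort_per_year * years

def compounding_returns(effort_per_year, years, rate=0.2):
    # Each year's effort is added to a stock that grows at `rate`.
    stock = 0.0
    for _ in range(years):
        stock = stock * (1 + rate) + effort_per_year
    return stock

print(linear_returns(1.0, 10))                 # 10.0
print(round(compounding_returns(1.0, 10), 2))  # 25.96
```

Same yearly effort, ~2.6x the payoff after a decade; the gap only widens from there, which is the whole argument for vesting.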
(Relatedly, it seems to me that there is a sliding scale between the need to change in the face of future changes and vesting into things, which most people tend to shift along as they age. The obvious problem here is simulated annealing being susceptible to fixation on phantom (local) maxima by way of a changing environment.)
So, unpacking the desiderata from above, the model I'd offer for consideration is the Affordable Loss Principle, with a side dish of Avoiding Infinite Optimizers:
* The affordable-loss principle prescribes committing in advance to what one is willing to lose, rather than investing in calculations about expected returns on the project. Key to affordable-loss policies is the generation of next-best alternatives, such that when the time comes to move, there is something to seamlessly move forward to.
Or, in the wise words of Zvi: https://www.lesserwrong.com/posts/ENBzEkoyvdakz4w5d/out-to-get-you
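For concreteness, the affordable-loss rule above can be sketched as a decision check. The names, numbers, and the `next_best` field are my hypothetical additions, not part of the principle as stated:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Opportunity:
    name: str
    worst_case_loss: float    # what's on the line if it blows up
    next_best: Optional[str]  # pre-generated fallback, if any

def affordable(op, loss_budget):
    """Commit only if the worst case fits a budget fixed in advance,
    AND a next-best alternative already exists to move on to."""
    return op.worst_case_loss <= loss_budget and op.next_best is not None

ideas = [
    Opportunity("side project", worst_case_loss=2.0, next_best="day job"),
    Opportunity("all-in bet", worst_case_loss=10.0, next_best=None),
]
print([op.name for op in ideas if affordable(op, loss_budget=3.0)])
# ['side project']
```

Note that expected returns never enter the check; only the pre-committed loss budget and the existence of a fallback do, which is the point of the principle.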
In conclusion, I'd suggest: yes, run a freedom-maximizing circle, because it eliminates conceptual blindsight, and there is a lot of low-hanging fruit you can pick up on your way. But additionally, be on the lookout for opportunities that are compact, low-hanging, and compounding across time, such that linear investment today leads to incremental & compounding utility tomorrow.
This is very good. I don't think I disagree with anything you wrote. In practice, I recognize that most things which are dropped explode at least a little bit, and my implementation of "reject invest-y power" attempts to make sure these explosions are small enough that I can take them without significant damage (not literally zero damage).
Indeed, compounding interest is juicy, and I have also noticed biologically programmed annealing in myself.
I really like the general idea of this. I would have loved to see an example of, for instance, making decisions over a second, day, week, month, and year, to get a more concrete idea of how this actually cashes out in terms of decision making, planning, and motivational processes.
That would be a very cool post to write, if I ever got around to writing it :)
One quick remark is that because the process is implemented by updating the way I think, it feels completely transparent from the inside (until I go to the meta level to check what's on track). Mostly I don't notice what the system is doing until I reflect on it later. Meanwhile, any new metacognitive content which I'm importing goes through explicit channels and gets lots of attention.
This isn't really a "process". Maybe it could be "guidelines"? Either way, some of them are pretty good. Some of them are pretty bad (No falling in love). Some are just weird (Beware of consumer electronic devices).
(Beware of consumer electronic devices).
This seemed straightforward to me: if you are serious about security, most consumer electronics are not going to be secure enough for your purposes. Don't write up anything on a computer that would be bad if the wrong people knew, eventually.
Two issues. First, are you serious about security? Should you be? What is the bad outcome you're trying to protect yourself from? It's possible that OP has good reasons to want security, but it's also possible that they are paranoid. Note, OP didn't say "if". Presumably they think that everyone always needs security.
Second, what is better than a computer? Surely not paper. Don't post your secrets to Facebook in plain text. Anything smarter than that is probably going to work fine for you.
The point is well taken, but I disagree with your default position. It is important to at least understand enough about security to make an informed choice - if you don't have any methods available, by the time you know you need them it will be irrevocably too late. Some common activities in this community which have strong security implications:
The don't-post-everything-on-Facebook heuristic is not satisfactory in any of those cases.
The "Don't write up anything on a computer that would be bad if the wrong people knew, eventually" heuristic is pretty impractical for any of your three cases too, though.
I agree that some of them are pretty good. I find the whole thing both inspiring and intriguing.
Not falling in love was shocking to see. I find it interesting... would be curious to hear other people's thoughts on it.
I figured someone else would express shock and confusion at the no-falling-in-love rule for the obvious reasons, and that the obvious counters were pretty obvious. Falling in love is a serious distraction that can compromise your values, if your core values don't actually include falling in love.
If your core values do include falling in love I assume you would develop a different set of meta principles. I wouldn't want SquirrelInHell's own system for myself but I do respect it.
The thing that intrigued me, from SquirrelInHell's own value system, was:
Being attracted to someone is a sign that your mental security is compromised, and that they are more adequate than you in some respect.
This seemed odd to me: it does seem like an obvious security vulnerability, but the specific mechanism of "a sign that they are more adequate than you in some respect" does not seem obvious, though it is plausibly an artifact of either Squirrel's particular psychology or the effects of employing the rest of the meta system.
Yeah, the reasons are obvious.
I think what goes on in my head when I hear that is how it doesn't seem to go along with rationalist discourse. Total self-sacrifice isn't actually popular; rather, I see a lot of trying to be reasonable, optimizing everything persistently without being extreme. That, and people have posted about how to optimize dating as well. This is particularly true on SSC, but SSC also seems to be functioning as a bridge between rationalists and other very smart people, so I guess that's to be expected.
In any case, calling love "a sign that your mental security is compromised" is exactly the kind of extreme statement that most rationalists seem to want to avoid, and that would immediately turn off any normal person. Hence why I'm curious about reactions, particularly on LW.
But none of this necessarily means anything. I am actually sympathetic to this view. Falling in love does take away resources, and any happiness anyone experiences before something goes foom can probably be rounded to zero.
I would worry if it were taken as the default view on LW that you're not supposed to fall in love, but I think a lot of the value of the site is being able to seriously entertain counterintuitive ideas.
That said, I also worry that this concept is going to be The Concept That Gets Talked About in a gigantic sprawling thread, ignoring a lot of important substance in the rest of Squirrel's post. (I'm committing to not discussing this particular subtopic further, to avoid contributing to that.)
Halfway through, I started to wonder if it was satire.
I think I would have wondered that myself if I hadn't had serious conversations with people who use similar thinking systems, which I initially parsed as very outlandish but, after a lot of in-depth discussion, came to respect. I don't think the current post is optimized for persuading anyone who's not generally on board with the project, but it wasn't trying to be.
(I currently classify things along a broader distinction of "self improvement that's attempting to involve rigorous integration of your goals" and "self improvement that's just trying to hack together something that works." The former seems harder to do. The people who do advocate for it say that the effort is worth it. I'm currently mulling it over a bit and seeing if it's worth it to me)