This is helpful pushback. You're right that the distillation takes too much for granted. Compressing the 100k+ word framework into ~1000 words lost the load-bearing bits.
On Dalio: agreed, and I should engage his empirical work more explicitly.
On the Horsemen: I think we may be agreeing more than it appears (e.g., on bureaucratic complexity being inevitable), but the post failed to show that.
I failed to calibrate to the audience here; I've been inside my own work for too long. I'll reconsider my approach.
Regarding these AI failure modes: they emerge systematically as violations of one or more of the Four Virtues (Integrity, Fecundity, Harmony, Synergy), which are themselves derived as the optimal solutions to the Four Axiomatic Dilemmas of the SORT axes. This was intended as evidence that the framework captures something real and useful.
I will attend, and I'll try to put together some sort of mini-presentation on some topic; if not for this week's meetup, then for a later one.
Yeah, we did meet. A bit of a coordination failure in following up on it here, but anyway (and this is of general interest):
I originally posted this in the discussion section but deleted it after JGWeissman suggested that the meetup post should go on the front page. Sark reposted it because I've posted almost nothing on LW lately and didn't have enough karma to put the post on the front page myself.
To reiterate, I will be there and sark will almost certainly be there.
Something I wonder about is just how many people on LW might have difficulties with the metaphors used.
An example: in http://lesswrong.com/lw/1e/raising_the_sanity_waterline/, I still haven't quite figured out what a waterline is supposed to mean in that context, or what kind of associations the word has, and neither has someone else I asked about it.
Though I'm not quite sure I'm adding anything new or useful, here are my thoughts:
I followed your luminosity sequence with great interest when you first started it, but for the amount of work I put into reading the individual posts (moderate?), I felt I ended up with only disconnected pieces of information that I couldn't really apply to my life. I might have gained more had I put forth more effort looking for the connections and for the ways the techniques could have applied to me. These stories effectively convey what I'd hoped to find, probably to much greater effect than I could have achieved on my own.
AFAICS, you basically compressed the whole sequence into this one post (given that I had read the previous posts), grounding each of the posts you referred to in certain 'concrete' things to do, for me anyway. I feel that with this post I am much more likely to actually implement many of these luminosity-increasing techniques and hopefully achieve lasting change.
Unless I'm prematurely attributing greater impact to this post than my future behaviour will justify (i.e. whether or not I actually change how I've always behaved), I would say this has definitely been one of the most personally useful posts on LessWrong that I've read so far (I've followed OB/LW since 2007 or so).
You're right, I overstated and oversimplified with that sentence. Dalio isn't explicitly listed in the influences section of the full work, but Turchin is.
The more precise claim: we have maps, but we lack the underlying physics from which those maps can be derived. What's been missing is a substrate-independent generative model that explains why these patterns recur across different substrates and civilizations. I think this gap is neglected, and filling it is needed to make the dynamics more legible and thereby, eventually, engineerable.
These models are not wrong. The Aliveness framework attempts to provide a deeper, shared set of generative principles (the Four Axiomatic Dilemmas) from which these different, domain-specific patterns can be derived.