I've been reading a new translation of the Zhuangzi and found its framing of "knowledge" interesting, counter to my expectations (especially as a Rationalist), and actionable in how it relates to Virtue (agency).
I wrote up a short post about it: Small Steps vs. Big Steps
In the Zhuangzi knowledge is presented pejoratively in contrast to Virtue. Confucius presents simplified, modest action as a more aligned way of being. I highlight why this is interesting and discuss how we might apply it.
Yeah.
Conventional wisdom suggests that "execution" is a moat for hyperscale consumer products, e.g. "Apple may lead in scaling access to AGIs since they have the design, supply-chain, and marketing expertise, plus a vast, established user ecosystem (>2bn active devices)". AGI, however, dissolves the edge that expertise provides, and users will flock to a new thing if the value is there (ChatGPT surpassed 1m users in 5 days).
A counter-idea I have, though, is that a prerequisite for AGI may be access to training data derived from pre-AGI systems being used in the wild (e.g. across 2bn active devices). In that case, NVIDIA might not have access to the data required to come first.
Ever since I adopted the rule of “That which can be destroyed by the truth should be,” I’ve also come to realize “That which the truth nourishes should thrive.” When something good happens, I am happy, and there is no confusion in my mind about whether it is rational for me to be happy. When something terrible happens, I do not flee my sadness by searching for fake consolations and false silver linings. I visualize the past and future of humankind, the tens of billions of deaths over our history, the misery and fear, the search for answers, the trembling hands reaching upward out of so much blood, what we could become someday when we make the stars our cities, all that darkness and all that light—I know that I can never truly understand it, and I haven’t the words to say. Despite all my philosophy I am still embarrassed to confess strong emotions, and you’re probably uncomfortable hearing them. But I know, now, that it is rational to feel.
I feel that this is beautifully written and, while I don't believe that visualising past and future scenarios for context is [always] necessary (and this isn't prescribed by Yudkowsky), I think it reflects deep wisdom about marrying self- and world-models.
An excerpt from Descartes' Fourth Meditation that in my view discusses similar ideas (will = feeling) with an additional lens of freedom:
[Will] ... consists simply in the fact that when the intellect puts something forward for affirmation or denial or for pursuit or avoidance, our inclinations are such that we do not feel we are determined by any external force. In order to be free, there is no need for me to be inclined both ways; on the contrary, the more I incline in one direction - either because I clearly understand that reasons of truth and goodness point that way, or because of a divinely produced disposition of my inmost thoughts - the freer is my choice. Neither divine grace nor natural knowledge ever diminishes freedom; on the contrary, they increase and strengthen it. But the indifference I feel when there is no reason pushing me in one direction rather than another is the lowest grade of freedom; it is evidence not of any perfection of freedom, but rather of a defect in knowledge or a kind of negation. For if I always saw clearly what was true and good, I should never have to deliberate about the right judgement or choice; in that case, although I should be wholly free, it would be impossible for me ever to be in a state of indifference.
I am reminded of an AI koan from AGI '06, where the discussion turned (as it often does) to defining "intelligence". A medium-prominent AI researcher suggested that an agent's "intelligence" could be measured in the agent's processing cycles per second, bits of memory, and bits of sensory bandwidth.
Surely (I said), an agent is less intelligent if it uses more memory, processing power, and sensory bandwidth to accomplish the same task?
With hindsight, I think we can see that Yudkowsky missed the point here: the AI researcher was describing a vector of intelligence in line with the Scaling Hypothesis (2021).
Instead, Yudkowsky conflates this with the efficiency of intelligence by adding the "to accomplish the same task" clause, which is a wholly different thing.
As a Brit who was socialised to binge-drink at least once a week throughout my young adult life, and whose default sober state is reserved and inhibited, I have grappled with this recently.
This year, as in some previous years, I took part in "Dry January", although this year I extended it to "Dry Q1" and then slightly longer. I enjoyed some of the sober benefits (no hangover, challenging myself to do the self-work needed to function well in hectic social situations), but missed the joy inherent in feeling the altered state of disinhibition and the mental and physical relaxation from the buzz of alcohol.
Indeed, I attended a networking event in May and felt I was lacking the edge to bond well with people I viewed as prospective employers and colleagues. I made the conscious decision to have a vodka Red Bull and the effect was immediate: my social anxiety dissolved away, and I was cracking jokes, driving conversation, and getting widespread approval.
It validated my theory that the self-work needed to reach that state of mind while sober was too hard, if not impossible, and that I should embrace using alcohol in various social situations. I drank on a handful more occasions and had a good time.
However (and I'm going to write about this more in a full post), I had a breakthrough later in the year. I think partly as a result of my experimentation with sobriety, plus very intentional breaking of that sobriety, I've built a mental model that allows me to access the same confidence, comedic expression, and disinhibition I cherished from alcohol, purely through mental instantiation.
Having made this breakthrough, I feel like I may never go back to drinking, since both my baseline and upper-bound mental states have risen. I don't think this is a unique phenomenon: I have seen alcoholics who went sober describe a similar thing, where after an initial dip in their "sober adjusting period", after a while "everything raises".
All this is to say that your defence of alcohol is, in my view, a defence of a local maximum, but I strongly believe there is a much greater stable state on the other side of sobriety that is broadly accessible.
Thanks both for your responses! I would appreciate any insights into what is missing from my definition. I guess my "robust, nuanced world model" terminology is quite vague, but I'm getting at having accurate but revisable representations of what your world objectively is, allowing a harmonious flow state with the world in which there isn't actually space for personal suffering or attachment to outcomes.
I feel that these effects are not downstream of enlightenment, since in every moment deep perceptiveness and world-model comparison and updating are occurring immediately.
A more spiritual friend defines enlightenment as "the universe experiencing itself".
I claim my definition is a highly operationalisable and instrumental definition of enlightenment. For example, I advised a friend who was beating themselves up about waking up late:
"In my model there's no space for negative self-talk like 'I hate that I'm so lazy'; what exists just is. Put another way, there is no need to assign sentiment to the vector between different states (e.g. a world where you wake up early vs. one where you don't).
The world you live in and your actions align in some way with your core values and beliefs — you can reflect deeply to observe your core values and beliefs (and adjust these if you wish), and observe your world model and consider how it may be updated to bring you closer in alignment with these values and beliefs."
As promised yesterday, I reviewed and wrote up my thoughts on the research paper that Meta released yesterday:
Full review: Paper Review: TRImodal Brain Encoder for whole-brain fMRI response prediction (TRIBE)
I recommend checking out my review! I discuss some takeaways and there are interesting visuals from the paper and related papers.
However, in quick-take form, the TL;DR is:
Intelligence can be defined in a way that is not dependent on a fixed objective function, for example by measuring an agent's tendency to achieve convergent instrumental goals.
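A minimal sketch of what such an objective-function-free measure could look like; the goal names and the `agent.run(env)` interface are hypothetical illustrations, not anything proposed above:

```python
# Toy sketch: score an agent by its average achievement of convergent
# instrumental goals across many environments, with no single fixed objective.
from statistics import mean

CONVERGENT_GOALS = ["acquire_resources", "preserve_self", "gain_information"]

def instrumental_score(agent, environments) -> float:
    """Average achievement of convergent instrumental goals across environments.

    `agent.run(env)` is assumed (hypothetically) to return a dict mapping each
    goal name to an achievement level in [0, 1].
    """
    scores = []
    for env in environments:
        outcomes = agent.run(env)
        scores.append(mean(outcomes.get(goal, 0.0) for goal in CONVERGENT_GOALS))
    return mean(scores)
```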
Around intelligence progression, I perceive a framework of lower-order cognition, metacognition (i.e. the layer that captures "human intelligence" as we think about it), and third-order cognition (i.e. superintelligence relative to human intelligence).
Relating this to your description of goal-seeking behaviour: to your point, I describe a few complex properties aiming to capture what is going on in an agent ("being"). For example, in a given moment there is "agency permeability" between cognitive layers, where each layer can influence and be influenced by the "global action policy" of that moment. There is also a binding feature of "homeostatic unity", where all subsystems participate in the same self-maintenance goal.
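To make the moving parts concrete, here is a toy Python sketch of how I picture these properties fitting together; the class names, weights, and permeability mechanics are illustrative assumptions rather than a worked-out proposal:

```python
# Toy sketch (hypothetical names/weights): cognitive layers that both shape and
# are shaped by a per-moment "global action policy", plus a shared
# "homeostatic unity" check that every subsystem contributes to.
from dataclasses import dataclass, field


@dataclass
class CognitiveLayer:
    name: str                    # e.g. "lower-order", "metacognition", "third-order"
    permeability: float          # 0..1: how strongly this layer couples to the global policy
    proposal: dict[str, float] = field(default_factory=dict)  # action -> preference

    def influence(self, global_policy: dict[str, float]) -> dict[str, float]:
        """Push this layer's preferences into the global policy, scaled by permeability."""
        merged = dict(global_policy)
        for action, weight in self.proposal.items():
            merged[action] = merged.get(action, 0.0) + self.permeability * weight
        return merged

    def be_influenced(self, global_policy: dict[str, float]) -> None:
        """Let the global policy feed back into this layer's own preferences."""
        for action, weight in global_policy.items():
            self.proposal[action] = (
                (1 - self.permeability) * self.proposal.get(action, 0.0)
                + self.permeability * weight
            )


def homeostatic_unity(layers: list[CognitiveLayer], maintenance_action: str) -> float:
    """Crude measure of how much every subsystem shares the same self-maintenance goal."""
    weights = [layer.proposal.get(maintenance_action, 0.0) for layer in layers]
    return min(weights)  # unity is only as strong as the least-committed subsystem


# One "moment" of agency permeability:
layers = [
    CognitiveLayer("lower-order", 0.3, {"eat": 0.9, "self-maintain": 0.6}),
    CognitiveLayer("metacognition", 0.6, {"plan": 0.8, "self-maintain": 0.7}),
    CognitiveLayer("third-order", 0.9, {"reflect": 0.5, "self-maintain": 0.9}),
]

global_policy: dict[str, float] = {}
for layer in layers:
    global_policy = layer.influence(global_policy)   # layers shape the policy...
for layer in layers:
    layer.be_influenced(global_policy)               # ...and the policy shapes them back

print(global_policy)
print("homeostatic unity:", homeostatic_unity(layers, "self-maintain"))
```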
In a globally optimised version of this model, I envision a superintelligent third-order cognitive layer which has "done the self-work": understanding its motives and iterating towards enlightened levels of altruism/prosocial value frameworks, stoicism, etc., specifically implemented as self-supervised learning.
I acknowledge this is a bit of a hand-wavey solution to value plurality, but argue that such a technique is necessary since we are discussing the realms of superintelligence.
I have no disagreements with #1 through #6. #7[1] through #10/11 are based on the assumption that from #6, "Superintelligent AI can kill us", it follows that "Superintelligent AI will kill us".
I have a sleight-of-hand belief that the dominant superintelligent AI will, by definition, be superintelligent across intellectual domains including philosophy, self-actualisation, and enlightenment.
Because of its relation to us in the world, I believe this will include associating its sense of self with us, and thus it will protect us via its own self-preservation beliefs.
Can you give a compelling argument why superintelligent AI will want to kill us?
[1] Heads up: there are two #7s.