I came to ask something similar.
Could you (@Linch) provide an example or two of woo? Could you score the following examples:
How much woo/10 is the Beneath Psychology sequence? Or 'Religion for Rationalists'? Or Symmetry Theory of Valence?
Not an economist; have a confusion.
Strictly relating to pre-singularity AI; everything after that is a different paradigm.
The strongest economic trend I'm aware of is growing inequality.
AI would seem to be an accelerant of this trend, i.e. I think most AI returns are captured by capital, not labour. (AIs are best modelled as slaves in this context).
And inequality would seem to be demand-destroying - there are fewer consumers, and most consumers are poorer.
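To make the demand-destruction claim concrete, here's a toy sketch (assuming, as in the standard textbook story, that labour income gets spent at a higher rate than capital income; the function and numbers are mine, purely for illustration):

```python
# Toy model: same total income, shifted from labour to capital.
# Assumes workers consume a larger share of marginal income than capital owners
# (the marginal propensities to consume are invented for illustration).

def aggregate_consumption(labour_income, capital_income,
                          mpc_labour=0.9, mpc_capital=0.3):
    """Total consumer demand given how income splits between labour and capital."""
    return mpc_labour * labour_income + mpc_capital * capital_income

before = aggregate_consumption(labour_income=70, capital_income=30)  # 72.0
after = aggregate_consumption(labour_income=30, capital_income=70)   # 48.0
print(before, after)  # same 100 units of income, a third less consumer demand
```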
Thus my near-term (pre-singularity) expectations are something like: massive runaway financialization; a divergence between the paper economy (roughly, claims on resources) and the real economy (roughly, resources). And yes, we get something like a gilded age, where a very small section of the planet lives very well until the singularity, and then we find out whether humanity graduates to the next round.
But like, fundamentally this isn't a picture of runaway economic growth - which is what everyone else talking about this seems to be describing.
Would appreciate clarity/correction/insight here.
Just a short note to say I appreciate the resource you created; I've learned valuable things from it. I was impressed by the level of journalism involved in the Habryka/Open Phil piece - the little editor's notes were excellent, I wish that practice were the norm, and I have some inkling of the amount of work that must have gone into them.
I think tabooing 'empathy' would be productive - it's a vague label attached to a bucket of emotionally charged things, which is a recipe for ugly misunderstandings.
So, emotions/feelings are internal bids for salience/attention.
But there's a thing whereby we sometimes need others to pay attention to our emotions/feelings - maybe to validate them ('you're not crazy / that's a totally reasonable way to feel'), or to ease our insecurity / social anxiety ('you're cared about / you're not alone').
And there's (at least according to Steven Byrnes) an autonomic version of this paying attention to others' emotions/feelings - reflexive, subconscious behaviour that tends to be absent/underdeveloped in autistic people.
I'd weakly claim it also malfunctions between neurotypicals of sufficiently different cultures.
And this is the thing I think most neurotypicals mean when they use the word 'empathy'.
Right, now we get to the Nail in the Head - and it's plausible that in the mix of feelings, what feels most salient isn't the nail. Instead it's some mix of social isolation, disconnection, and frustration at not being able to process/resolve these feelings because no one will take her seriously.
And then just contextualize that in terms of the 'median western cultural memetic inheritance' - the package of beliefs, epistemics, models etc that most normies are walking around with - and, yeah, of course it's not about the nail.
Whether or not we should be kind/gentle/compassionate to others seems separate from how/if we empathize with them; to me that boils down mostly to 'in general, with exceptions, it seems like a good deal in terms of cost:benefit'; it seems like an optimal default, kinda like 'tit for tat with forgiveness' in the prisoner's dilemma (sketched below).
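For concreteness, 'tit for tat with forgiveness' is roughly this (a minimal sketch; the move encoding and the forgiveness probability are my own choices, nothing canonical):

```python
import random

def tit_for_tat_with_forgiveness(opponent_history, forgiveness=0.1):
    """Cooperate ('C') by default; copy the opponent's last move, but
    occasionally forgive a defection ('D') to break retaliation spirals."""
    if not opponent_history:                 # first round: open kindly
        return "C"
    if opponent_history[-1] == "D":          # they defected last round
        if random.random() < forgiveness:    # sometimes be gentle anyway
            return "C"
        return "D"                           # otherwise retaliate
    return "C"                               # they cooperated: reciprocate
```

Default cooperation, proportionate pushback, and just enough forgiveness to escape grudge loops.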
What's the crux? Or what's the most significant piece of evidence you could imagine coming across that would update you against these predictions?
I'm curious about this pitch :)
Checked the replies so far; no one has given you the right answer.
Whenever you don't do something, you have a reason for not doing it.
If you find yourself stuck in a cycle of intending to do, and not doing, it's always because you're not taking your reason for NOT doing it seriously; you're often habitually ignoring it.
When you successfully take your reasons for not doing something seriously, either you stop wanting to do it, or you change how you're doing it, or your reason for not doing it simply goes away.
So, what does it mean/look like to take your reason for not doing something seriously?
It doesn't look like overanalyzing it in your head - if you find yourself having an internal argument, notice that you've tried this a million times before and it hasn't improved things.
It looks like, and indeed just basically is, Focusing (I linked to a LessWrong explainer, but honestly I think Eugene Gendlin does a much better job).
It feels like listening. It feels like insight, like realizing something important that you hadn't noticed before, or had forgotten about.
If you keep pursuing strategies of forcing yourself, of the part of you that wants to do the thing coercing the part(s) that don't, then you'll burn out. You're literally fighting yourself; so much of therapy boils down to 'just stop hitting yourself bro'.
it is possible to do complex general cognition without being able to think about one's self and one's cognition. It is much easier to do complex general cognition if the system is able to think about itself and its own thoughts.
I can see this making sense in one frame, but not in another. The frame which seems most strongly to support the 'Blindsight' idea is Friston's stuff - specifically how the more successful we are at minimizing predictive error, the less conscious we are.[1]
My general intuition, in this frame, is that as intelligence increases more behaviour becomes automatic/subconscious. It seems compatible with your view that a superintelligent system would possess consciousness, but that most/all of its interactions with us would be subconscious.
Would like to hear more about this point, could update my views significantly. Happy for you to just state 'this because that, read X, Y, Z etc' without further elaboration - I'm not asking you to defend your position, so much as I'm looking for more to read on it.
[1] This is my potentially garbled synthesis of his stuff, anyway.
I don't like the thing you're doing where you're eliding all mention of the actual danger AI Safety/Alignment was founded to tackle - AGI having a mind of its own, goals of its own, that seem more likely to be incompatible with/indifferent to our continued existence than not.
Everything else you're saying is agreeable in the context you're discussing it, that of a dangerous new technology - I'd feel much more confident if the Naval Nuclear Propulsion Program (Rickover's people) was the dominant culture in AI development.
Albeit I have strong doubts about the feasibility of the 'Oughts[1]' you're proposing, and more critically - I reject the framing...
Any sufficiently advanced technology is indistinguishable from ~~magic~~ ~~biology~~ life.
To assume AGI is transformative and important is to assume it has a mind[2] of its own: the mind is what makes it transformative.
At the very least - assuming no superintelligence - we are dealing with a profound philosophical/ethical/social crisis, for which control based solutions are no solution. Slavery's problem wasn't a lack of better chains, whether institutional or technical.
Please entertain another framing of the 'technical' alignment problem: midwifery - the technical problem of striving for optimal conditions during pregnancy/birth. Alignment originated as the study of how to bring into being minds that are compatible with our own.
Whether humans continue to be relevant/dominant decision makers post-Birth is up for debate, but what I claim is not up for debate is that we will no longer be the only decision makers.
Totally. Asked only to get a better model of what you were pointing at.
And now my understanding is that we're mostly aligned and this isn't a deep disagreement about what's valuable, just a labeling and/or style/standard of effort issue.
E.g. the Symmetry Theory of Valence seems like the most cruxy example, because it combines an above-average standard of effort and clarity of reasoning (I believe X, because Y, which could be tested through Z) with a whole bunch of things that I'd agree pass the duck-test standard as red flags.