Comments

stavros · 20d

It's an interesting framework, I can see it being useful.


I think it's more useful when you consider both high decoupling and low decoupling to be failure modes. More specifically: when one is dominant and the other is neglected, you reliably end up with inaccurate beliefs.

You went over the mistakes of low-decouplers in your post, and provided a wonderful example of a high-decoupler mistake too!

High decouplers will notice that, holding preferences constant, offering people an additional choice cannot make them worse off. People will only take the choice if it's better than any of their current options.

Aside from https://thezvi.wordpress.com/2017/08/12/choices-are-really-bad/ there's also the consideration of what choice I offer you, or how I frame the choice (see Kahneman's stuff). 
And that's just considering it from the individual psychological level, but there are social/cultural levers and threads to pull here too. 
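
To make this concrete, here's a toy numeric sketch of one way an extra option can hurt, contra the quoted claim. The per-option evaluation cost c is my own illustrative assumption (in the spirit of the linked Zvi post), not something from the original post:

```python
# Toy model: each option costs something to evaluate, so an added option
# can lower net utility even with preferences held constant.
# The cost parameter `c` is an illustrative assumption.

def net_utility(option_values, c=0.5):
    """Value of the chosen (best) option minus the cost of evaluating all of them."""
    return max(option_values) - c * len(option_values)

current = [3.0, 2.0]        # best option worth 3.0; evaluation cost 2 * 0.5 = 1.0
expanded = current + [2.5]  # the new option is worse than the current best

print(net_utility(current))   # 3.0 - 1.0 = 2.0
print(net_utility(expanded))  # 3.0 - 1.5 = 1.5 -> worse off, despite "more choice"
```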

I think the optimal functioning of this process is cyclical with both high decoupling phases and highly integrated phases, and the measure of balance is something like 'this isn't obviously wrong in either context'.

stavros1mo10

I think all future technology has AI as a prerequisite?

 

My high-conviction hot take goes further: I think all positive future timelines have AI as a prerequisite. I expect that, sans AI, our future (our immediate future: decades, not centuries) is going to be the ugliest, and last, chapter in our civilization's history.

stavros · 1mo

I have been in the position of trying to moderate a large and growing community (it was at 500k users last I checked, although I threw in the towel around 300k), and I know what a thankless, Sisyphean task it is.

I know what it is to have to explain the same - perfectly reasonable - rule/norm again and again and again.

I know what it is to try to cultivate and nurture a garden while hordes of barbarians trample all over the place.

But...

If it ain't broke, don't fix it.

I would argue that the majority of the people listed as penalized are net contributors to LessWrong, including some who are strongly net positive.

I've noticed y'all have been tinkering in this space for a while. I think you're trying super hard to protect LessWrong from the Eternal September, and you actually seem to be succeeding, which is no small feat, buuut...

I do wonder if the team needs a break.

I think there's a thing that happens to gardeners (and here I'm using that as a very broad archetype), where we become attached to and identify with the work of weeding - of maintaining, of day after day holding back entropy - and cease to take pleasure in the garden itself.

As that sets in, even new growth begins to seem like a weed.

stavros · 1mo

Fine. You win. Take your upvote.

stavros · 1mo

Big fan of both of your writings; this dialogue was a real treat for me.

I've been trying to find a satisfying answer to the seeming inverse correlation between 'wellbeing' and 'agency' (these are very loose labels).

You briefly allude to a potential mechanism for this[1].

You also briefly allude to another mechanism with explanatory power for the inverse[2], i.e. that while it might seem an individual is highly agentic, they are in fact little more than a host for a highly agentic egregore.

I'm engaged in that most quixotic endeavour of actually trying to save the world[3][4], and thus I'm constantly playing with my world model and looking for levers to pull, dominoes to push over, that might plausibly (and quickly) shift probability mass towards pleasant timelines.

I think germ theory is exactly the kind of intervention that works here - it's a simple map that even a child can understand, yet it's a 100x impact.

I think there's some kind of 'germ theory for minds', and I think we already have all the pieces - we just need to put them together in the right way. I think it's plausible that this is easy, rapidly scalable, and instrumentally valuable to other efforts in the 'save the world' space.

But... I don't want to end up net negative on agency. In fact my primary objective is to end up strongly net positive. I need more people trying to change the world, not fewer.
Yet... that scale of ambition seems largely the preserve of people you'd be highly unlikely to describe as 'enlightened', 'balanced' or 'well adjusted'; it seems to require a certain amount of delusion to even (want to) try, and to benefit from unbalanced schema that are willing to sacrifice everything on the altar of success.

Most of the people who seem to successfully change the world are the people I least want to; whereas the people I most want to change the world seem the least likely to.

  1. ^

    Since the schools that removed social conditioning and also empowered practitioners to upend the social order, tended to get targeted for destruction. (Or at least so I suspect and some people on Twitter said "yes this did happen" when I speculated this out loud.)

  2. ^

    In the Buddhist model of human psychology, we are by default colonized by parasitic thought patterns, though I guess in some cases, like the aforementioned fertility increasing religious memes, they should be thought of as symbiotes with a tradeoff, such as degrading the hosts' episteme.

  3. ^

    I don't expect to succeed, I don't expect to even matter, but it's a fun hobby.

  4. ^

    Also the world does actually seem to be in rather urgent need of saving; short of a miracle or two it seems like I'm unlikely to live to enjoy my midlife crisis.

stavros · 1mo

I don't think there's anything wrong with cultivating a warrior archetype; I strive to cultivate one myself.

 

Would love to read more on this.

stavros · 3mo

Hmmm, where to start. Something of a mishmash of thoughts here.

I'm actually a manager; it's not yet clear if I'm particularly successful at it. I certainly enjoy it, and I've learned a lot in the past year.

Noticing Panic is a great Step 0, and I really like how you contrast it to noticing confusion.

I used to experience 'Analysis Paralysis' - too much planning, overthinking, and zero doing. This is a form of perfectionism, and is usually rooted in fear of failure.

I expect most academics have been taught entirely the wrong heuristics around failure (wrong in the sense of https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences).

My life rapidly became more agentic the more I updated toward the following beliefs:

  • Failure is cheap
  • You have an abundance of chances to get it right
  • Plans are maps, reality is terrain. Doing happens in reality. Thus you can offload a bunch of cognitive work to reality simply by trying stuff; failing is one of the most efficient ways of updating your map, and can sometimes reward you with unexpected success (take a look around at all the 'stupid' stuff that actually works/succeeded).

Thus my management strategy is something like:

  • Have a goal
    • Build a model of the factors that contribute to that goal
  • Determine my constraints (e.g. do I have a rigid deadline?)
  • Notice my affordances (most people underestimate these)
    • What resources do I have, e.g.
      • Who can I ask for help?
      • What work has already been done that I can use (don't reinvent the wheel)
    • What actions are available to me
      • What is the smallest meaningful step I can take toward my goal?
      • What is the dumbest thing I can do that might actually work?
  • Prioritize my time
    • What needs to be done today vs this week vs this month vs actually doesn't need to be done

So I've reduced a combinatorially explosive long-term goal to a decent heuristic for prioritizing actions, and then I apply it to the actions I can actually take at different timescales... which is usually an easy choice between a handful of options.

Then I do stuff, and then I update/iterate based on the results.

And sometimes stuff just happens that moves me towards my goal (or my goal towards me) - life is chaotic, and if you're rigidly following a plan then that chaos is always working against you. Whereas if you're adaptable and opportunistic - that chaos can work for you.

I guess all of this boils down to: invest in your world model, not your plan.
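
If it helps to see that loop written down, here's a minimal sketch; the action names and payoff numbers are my own toy illustration, not a real framework:

```python
import random

# Toy world model: guessed payoff (chance of progress) for a few small,
# currently-available actions. All names and numbers are illustrative.
world_model = {"write draft": 0.5, "ask for help": 0.3, "reread docs": 0.2}

def next_action(model):
    # The "plan" is just: take the smallest step the current map rates best.
    return max(model, key=model.get)

random.seed(0)  # reproducible toy run
for step in range(5):
    action = next_action(world_model)
    succeeded = random.random() < world_model[action]  # doing happens in reality
    # Update the map from the result; failure is cheap information.
    delta = 0.1 if succeeded else -0.1
    world_model[action] = min(1.0, max(0.0, world_model[action] + delta))
    print(step, action, "ok" if succeeded else "failed", round(world_model[action], 2))
```

The point of the sketch is that the plan is regenerated from the world model on every pass, so updating the model is the only work that compounds.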

stavros · 3mo

Re: average age of authors/laureates and average team size

Are these data adjusted for demographic changes, i.e. aging populations in most Western countries and general population growth?

stavros · 4mo

I think it is a mistake to import "democracy" at the vision level. Vision is essentially a very high-level plan, a creative engineering task. These are not decided by averaging opinions. "If you want to kill any idea in the world, get a committee working on it." Deutsch also wrote about this in "The Beginning of Infinity", in the chapter about democracy.

We should aggregate desiderata and preferences (see "Preference Aggregation as Bayesian Inference"), but not decisions (plans, engineering designs, visions). These should be created by a coherent creative entity. The same idea is evident in the design of Open Agency Architecture.

 

Democracy is a mistake, for all of the obvious reasons.
As is the belief amongst engineers that every problem is an engineering problem :P

We have a whole bunch of tools going mostly unused and unnoticed that could, plausibly, enable a great deal more trust and collaboration than is currently possible. 

We have a whole bunch of people both thinking about and working on the polycrisis already. 

My proposal is that we're far more likely to achieve our ultimate goal - a future we'd like to live in - if we simply do our best to empower, rather than direct, others.

I expect attempts to direct, no matter how brilliant the plan or the mind(s) behind it, are likely to fail. For all the obvious reasons.

(caveat: yes AGI changes this, but it changes everything. My whole point is that we need to keep the ship from sinking long enough for AGI to take the wheel)
