The Power of Agency

by lukeprog · 1 min read · 7th May 2011 · 78 comments


Rationalization · Robust Agents · Motivations · Adaptation Executors

You are not a Bayesian homunculus whose reasoning is 'corrupted' by cognitive biases.

You just are cognitive biases.

You just are attribute substitution heuristics, evolved intuitions, and unconscious learning. These make up the 'elephant' of your mind, and atop them rides a tiny 'deliberative thinking' module that only rarely exerts itself, and almost never according to normatively correct reasoning.

You do not have the robust character you think you have, but instead are blown about by the winds of circumstance.

You do not have much cognitive access to your motivations. You are not Aristotle's 'rational animal.' You are Gazzaniga's rationalizing animal. Most of the time, your unconscious makes a decision, and then you become consciously aware of an intention to act, and then your brain invents a rationalization for the motivations behind your actions.

If an 'agent' is something that makes choices so as to maximize the fulfillment of explicit desires, given explicit beliefs, then few humans are very 'agenty' at all. You may be agenty when you guide a piece of chocolate into your mouth, but you are not very agenty when you navigate the world on a broader scale. On the scale of days or weeks, your actions result from a kludge of evolved mechanisms that are often function-specific and maladapted to your current environment. You are an adaptation-executor, not a fitness-maximizer.

Agency is rare but powerful. Homo economicus is a myth, but imagine what one of them could do if such a thing existed: a real agent with the power to reliably do things it believed would fulfill its desires. It could change its diet, work out each morning, and maximize its health and physical attractiveness. It could learn and practice body language, fashion, salesmanship, seduction, the laws of money, and domain-specific skills and win in every sphere of life without constant defeat by human hangups. It could learn networking and influence and persuasion and have large-scale effects on societies, cultures, and nations.

Even a little bit of agenty-ness will have some lasting historical impact. Think of Benjamin Franklin, Teddy Roosevelt, Bill Clinton, or Tim Ferris. Imagine what you could do if you were just a bit more agenty. That's what training in instrumental rationality is all about: transcending your kludginess to attain a bit more agenty-ness.

And, imagine what an agent could do without the limits of human hardware or software. Now that would really be something.

(This post was inspired by some conversations with Michael Vassar.)



Even a little bit of agenty-ness will have some lasting historical impact. Think of Benjamin Franklin, Teddy Roosevelt, Bill Clinton, or Tim Ferris.

Can you go into more detail about how you believe these particular people behaved in a more agenty way than normal?

I radically distrust the message of this short piece. It's a positive affirmation for "rationalists" of the contemporary sort who want to use brain science to become super-achievers. The paragraph itemizing the powers of agency especially reads like wishful thinking: just pay a little more attention to small matters like fixity of purpose and actually acting in your own interest, and you'll get to be famous, rich, and a historical figure! Sorry, that is entirely not ruthless enough. You also need to be willing to lie, cheat, steal, kill, use people, betray them. (Wishes can come true, but they usually exact a price.) It also helps to be chronically unhappy, if it will serve to motivate your extreme and unrelenting efforts. And finally, most forms of achievement do require domain-specific expertise; you don't get to the top just by looking pretty and statusful.

The messy, inconsistent, and equivocating aspects of the mind can also be adaptive. They can save you from fanaticism, lack of perspective, and self-deception. How often do situations really permit a calculation of expected utility? All these rationalist techniques themselves are fuel for rationalization: I'm emplo...

Tyrrell_McAllister: How could you reliably know these things, and how could you make intentional use of that knowledge, if not with agentful rationality?

Mitchell_Porter: You can't. I won't deny the appeal of Luke's writing; it reminds me of Gurdjieff, telling everyone to wake up. But I believe real success is obtained by Homo machiavelliensis, not Homo economicus.

NancyLebovitz: This is reminding me of Steve Pavlina's [http://www.stevepavlina.com/blog/] material about light-workers and dark-workers. He claims that working to make the world a better place for everyone can work, and will eventually lead you to realizing that you need to take care of yourself; that working to make your life better exclusive of concern for others can work, and will eventually convince you of the benefits of cooperation; but that slopping around without being clear about who you're benefiting won't work as well as either of those.

NancyLebovitz: How can you tell the ratio between Homo machiavelliensis and Homo economicus, considering that HM is strongly motivated to conceal what they're doing, and HM and HE are probably both underestimating the amount of luck required for their success?

Mitchell_Porter: fMRI? Also, some HE would be failed HM. The model I'm developing is that in any field of endeavor, there are one or two HMs at the top, and then an order of magnitude more HE also-rans. The intuitive distinction: HE plays by the rules, HM doesn't; victorious HM sets the rules to its advantage, HE submits and gets the left-over payoffs it can accrue by working within a system built by and for HMs.

NancyLebovitz: My point was that both the "honesty is the best policy" and the "never give a sucker an even break" crews are guessing, because the information isn't out there. My guess is that different systems reward different amounts of cheating, and aside from luck, one of the factors contributing to success may be a finely tuned sense of when to cheat and when not.

cousin_it: Yeah, and the people who have the finest-tuned sense of when to cheat are the people who spent the most effort on tuning it!

NancyLebovitz: I suspect some degree of sarcasm, but that's actually an interesting topic. After all, a successful cheater can't afford to get caught very much in the process of learning how much to cheat.

wedrifid: Love the expression. :)
lukeprog: This is false. Yes. And domain-specific expertise is something that can be learned and practiced, by applying agency to one's life. I'll add it to the list.

You also need to be willing to lie, cheat, steal, kill, use people, betray them.

This is false.

If we are talking about how to become rich, famous, and a historically significant person, I suspect that neither of us speaks with real authority. And of course, just being evil is not by itself a guaranteed path to the top! But I'm sure it helps to clear the way.

lukeprog: Sure. I'm only disagreeing with what you said in your original comment.

wedrifid: I would say 'overstated'. I assert that most people who became famous, rich, and a historical figure used those tactics. More so the 'use people', 'betray them' and 'lie' than the more banal 'evils'. You don't even get to have a solid reputation for being nice and ethical without using dubiously ethical tactics to enforce the desired reputation.

katydee: Personally, I find that being nice and ethical is the best way to get a reputation for being nice and ethical, though your mileage may vary.

Personally, I find that being nice and ethical is the best way to get a reputation for being nice and ethical, though your mileage may vary.

I don't have a personal statement to make about my strategy for gaining a reputation for niceness. Partly because that is a reputation I would prefer to avoid.

I do make the general, objective-level claim that actually being nice and ethical is not the most effective way to gain that reputation. It is a good default, and for many, particularly those who are not very good at well-calibrated hypocrisy and deception, it is the best they could do without putting in a lot of effort. But it should be obvious that the task of creating an appearance of a thing is different from that of actually doing a thing.

gwern: Agency is still pretty absent there too. As it happens, I have something of an essay on just that topic: http://www.gwern.net/on-really-trying#on-the-absence-of-true-fanatics

Kaj_Sotala: Interesting. Personally I read it as a kind of "get back to Earth" message: "Stop pretending you're basically a rational thinker who only needs to correct some biases to truly achieve that. You're this horrible jury-rig of biases and ancient heuristics, and yes, while steps toward rationality can make you perform much better, you're still fundamentally and irreparably broken. Deal with it." But re-reading it, your interpretation is probably closer to the mark.

MinibearRex: I don't think anyone's arguing that "reason" is synonymous with winning. There are a lot of people, however, arguing that "rationality" is systematized winning [http://lesswrong.com/lw/7i/rationality_is_systematized_winning/]. I'm not particularly interested in detaching from life and moderating my emotional response to failure. I have important goals that I want to achieve, and failing is not an acceptable option to me. So I study rationality. Honestly, EY said it best:

In case anybody asks how I was able to research and write two posts from scratch today:

It's largely because I've had Ray Lynch's 'The Oh of Pleasure' on continuous repeat ever since 7am, without a break.

(I'm so not kidding. Ask Jasen Murray.)

If this actually works reliably, I think it is much more important than anything in either of the posts you used it to write - why bury it in a comment?

Larks: I don't know if it's the song or the placebo effect, but it's just written my thesis proposal for me.

lukeprog: Congrats!

Miller: Well, there's a piece of music that's easy to date to circa Blade Runner.

lukeprog: Maybe tomorrow I will try the Chariots of Fire theme [http://grooveshark.com/#/s/Chariots+Of+Fire/FCqzy?src=5], see what it does for me. :) Hmmm. I wonder what else I've spent an entire day listening to over and over again while writing. Maybe Music for 18 Musicians [http://grooveshark.com/#/album/Music+For+18+Musicians/1416531?src=5], Tarot Sport [http://grooveshark.com/#/album/Tarot+Sport/3467715?src=5], Masses [http://www.amazon.com/Masses-Spring-Heel-Jack-Continuum/dp/B00005K2YV], and Third Ear Band [http://grooveshark.com/#/album/Third+Ear+Band/910249?src=5].

curiousepic: I just came across Tarot Sport; it's the most insomnia-inducing trance I've ever heard.

Louie: I liked that song but then ended up listening to the #2 most popular song [http://grooveshark.com/#/s/The+Lazy+Song/3dwZ4K] on that site instead. It provided me with decidedly less motivation. ;)

lukeprog: I just listened to four seconds of that song and then hit 'back' in my browser to write this comment. 'Ugh' to that song.

wedrifid: Just listened to it. The first minute or so especially gave an effect about as strong as a strong coffee. A little agitating but motivating.

Cayenne: How do we know that it's you writing, and not the music? (Just kidding, really.) Edit: please disregard this post.

[anonymous]: That is strange. I like the song, though; thanks for passing it along. Like one of the other commenters, I will be testing out its effects. Do you think if you listened to the song every day, or 3 days a week, or something, the effect on your productivity or peace of mind would dwindle? If not, do you plan to continue listening to it a disproportionate amount relative to other music? ETA random comment: Something about it reminds me of the movie Legend.

jimrandomh: I don't believe this is really the cause, but I'm going to listen to it at work tomorrow just in case.

real agent with the power to reliably do things it believed would fulfill its desires. It could change its diet, work out each morning, and maximize its health and physical attractiveness. It could learn body language and fashion and salesmanship and seduction and the laws of money and win in every sphere of life without constant defeat by human hangups. It could learn networking and influence and persuasion and have large-scale effects on societies, cultures, and nations.

A lot of body language, fashion, salesmanship, seduction, networking, influence, a...

lukeprog: Sure. But are you denying these skills can be vastly improved by applying agency? You mention severe autistics. I'm not sure how much an extra dose of agency could help a severe autistic. Surely there are people for whom an extra dose of agency won't help much. I wasn't trying to claim that agency would radically improve the capabilities of every single human ever born. Perhaps you are reacting to the idea that heuristics are universally bad things? But of course I don't believe that. In fact, the next post in my Intuitions and Philosophy [http://wiki.lesswrong.com/wiki/Intuitions_and_Philosophy] sequence is entitled 'When Intuitions are Useful.'

JohnH: This is what I am reacting to, especially when combined with what I previously quoted.

lukeprog: Oh. So... are you suggesting that a software agent can't learn body language, fashion, seduction, networking, etc.? I'm not sure what you're saying.

JohnH: I am saying that without heuristics or intuitions, what is the basis for any desires? If an agent is a software agent without built-in heuristics and intuitions, then what are its desires, what are its needs, and why would it desire to survive, to find out more about the world, to do anything? Where do the axioms it uses to think that it can modify the world or conclude anything come from? Our built-in heuristics and intuitions are what allow us to start building models of the world on which to reason in the first place, and removing any of them demonstrably makes it harder to function in normal society or to act normally. Things that appear reasonable to almost everyone are utter nonsense and seem pointless to those who are missing some of the basic heuristics and intuitions. If all such heuristics (e.g. no limits of human hardware or software) are taken away, then what is left to build on?

byrnema: I'll jump into this conversation here, because I was going to respond with something very similar. (I thought about my response, and then was reading through the comments to see if it had already been said.) I sometimes imagine this, and what I imagine is that without the limits (constraints) of our hardware and software, we wouldn't have any goals or desires. Here on Less Wrong, when I assimilated the idea that there is no objective value, I expected I would spiral into a depression in which I realized nothing mattered, since all my goals and desires were finally arbitrary with no currency behind them. But that's not what happened: I continued to care about my immediate physical comfort, interacting with people, and the well-being of the people I loved. I consider that my built-in biological hardware and software came to the rescue. There is no reason to value the things I do, but they are built into my organism. Since I believe that it was being an organism that saved me (and by this I mean the product of evolution), I do not believe the organism (and her messy goals) can be separated from me. I feel like this experiment helped me identify which goals are built in and which are abstract and more fully 'chosen'. For example, I believe I did lose some of my values, I guess the ones that are most cerebral. (I only doubt this because, with a spiteful streak and some lingering anger about the nonexistence of objective values, I could be expressing this anger by rejecting values that seem least immediate.) I imagine with a heightened ability to edit my own values, I would attenuate them all, especially wherever there were inconsistencies. These thoughts apply to humans only (that is, me), but I also imagine (entirely baselessly) that any creature without hardware and software constraints would have a tough time valuing anything. For this, I am mainly drawing on intuition I developed that if a species was truly immortal, they would be hard pressed to think of anything...

lukeprog: Depends what kind of agent you have in mind. An advanced type of artificial agent has its goals encoded in a utility function. It desires to survive because surviving helps it achieve utility. Read chapter 2 of AIMA [http://www.amazon.com/Artificial-Intelligence-Modern-Approach-3rd/dp/0136042597/] for an intro to artificial agents.
[anonymous]:

You do not have the robust character you think you have, but instead are blown about by the winds of circumstance.

Am I wrong in taking this to be a one-liner critique of all virtue ethical theories?

I've been thinking about this with regards to Less Wrong culture. I had pictured your "deliberative thinking" module as more of an "excuse generator" - the rest of your mind would make its decisions, and then the excuse generator comes up with an explanation for them.

The excuse generator is primarily social - it will build excuses which are appropriate to the culture it is in. So in a rationalist culture, it will come up with rationalizing excuses. It can be exposed to a lot of memes, parrot them back and reason using them without actua...

lessdazed: ...act as predicted by the model.

Error: I know this is a year or two late, but: I've noticed this and find it incredibly frustrating. Turning introspection (yes, I know) on my own internally-stated motivations more often than not reveals them to be either excuses or just plain bullshit. The most frequent failure mode is finding that I did, not because it was good, but because I wanted to be seen as the sort of person who would do it. Try though I might, it seems incredibly difficult to get my brain to not output Frankfurtian Bullshit. I sort-of-intend to write a post about it one of these days.

I loved this, but I'm not here to contribute bland praise. I'm here to point out somebody who does, in fact, behave as an agent as defined by the italicized statement: "reliably do things it believed would fulfill its desires" that continues with "It could change its diet, work out each morning, and maximize its health and physical attractiveness." I couldn't help but think of Scott H Young, a blogger I've been following for months. I really look up to that guy. He is effectively a paragon of the model that you can shape your life to l...

arundelo: http://lesswrong.com/user/ScottHYoung

adamisom: Thanks, I should've known.

Coming back to this post, I feel like it's selling a dream that promises too much. I've come to think of such dreams as Marlboro Country ads. For every person who gets inspired to change, ten others will be slightly harmed because it's another standard they can't achieve, even if they buy what you're selling. Figuring out more realistic promises would do us all a lot of good.

Excellent clarion call to raise our expectation of what agency is and can do in our lives, as well as to have sensible expectations of our and others' humble default states. Well done.

[anonymous]:

One way of thinking about this:

There is behavior, which is anything an animal with a nervous system does with its voluntary musculature. Everything you do all day is behavior.

Then there are choices, which are behaviors you take because you think they will bring about an outcome you desire. (Forget about utility functions -- I'm not sure all human desires can be described by one twice-differentiable convex function. Just think about actions taken to fulfill desires or values.) Not all behaviors are choices. In fact it's easy to go through a day without...

Swimmer963: There is a reason for this. Making choices constantly is exhausting, especially if you consider all of the possible behaviours. For me, the way to go is to choose your habits. For example: I choose not to spend money on eating out. This a) saves me money, and b) saves me from extra calories in fast food. When pictures of food on a store window tempt me, I only have to appeal to my habit of not eating out. It's barely conscious now. If I forget to pack enough food from home and I find myself hungry, and the ads unusually tempting, I make a choice to reinforce my habit by not buying food, although I am hungry and there is a cost to myself. The same goes for exercising: I maintain a habit of swimming for an hour 3 to 5 times a week, so the question "should I swim after work?" is no longer a willpower-draining conscious decision, but an automatic response. If I were willing to put in the initial energy of choosing to start a new arbitrary habit, I'm pretty sure I could. As my mother has pointed out, in the past I've been able to accomplish pretty much everything I set my mind on (with the exception of becoming the youngest person to swim across Lake Ontario and getting into the military, but both of those plans failed for reasons pretty much outside my control).

wedrifid: Part of the modelling of everything as choices is that for their purposes they don't care whether the choice happens to be conscious or not. That is an arbitrary distinction that matters more to us for the purpose of personal development, and so we can flatter each other's conscious selves by pretending they are especially important.

I want to upvote this again.

Elo: done for you
[anonymous]:

It might be simply structural that the LessWrong community tends to be about armchair philosophy, science, and math. If there are people who have read through Less Wrong, absorbed its worldview, and gone out to "just do something", then they probably aren't spending their time bragging about it here. If it looks like no one here is doing any useful work, that could really just be sampling bias.

Even still, I expect that most posters here are more interested to read, learn, and chat than to thoroughly change who they are and what they do. Reading, ...

[This comment is no longer endorsed by its author]

The real question is: how big of an impact can this stuff make, anyway? And how much are people able to actually implement it into their lives?

Are there any good sources of data on that? Beyond PUA, The Game, etc?

calcsam: Besides, in theory we want to discuss non-Dark Arts topics...

wedrifid: There are many topics that are relevant here that some have labelled 'Dark Arts'.

Think of Benjamin Franklin, Teddy Roosevelt, Bill Clinton, or Tim Ferris.

It's Tim Ferriss.

sanddbox: Either way, the guy's a moron. He's basically a much better packaged snake oil salesman.

CWG: He's a very effective snake oil salesman.
[anonymous]:

People don't change their sense of agency because they read a blog post.

"In alien hand syndrome, the afflicted individual’s limb will produce meaningful behaviors without the intention of the subject. The affected limb effectively demonstrates ‘a will of its own.’ The sense of agency does not emerge in conjunction with the overt appearance of the purposeful act even though the sense of ownership in relationship to the body part is maintained. This phenomenon corresponds with an impairment in the premotor mechanism manifested temporally by the appearan...