You are not a Bayesian homunculus whose reasoning is 'corrupted' by cognitive biases.

You just are cognitive biases.

You just are attribute substitution heuristics, evolved intuitions, and unconscious learning. These make up the 'elephant' of your mind, and atop them rides a tiny 'deliberative thinking' module that only rarely exerts itself, and almost never according to normatively correct reasoning.

You do not have the robust character you think you have, but instead are blown about by the winds of circumstance.

You do not have much cognitive access to your motivations. You are not Aristotle's 'rational animal.' You are Gazzaniga's rationalizing animal. Most of the time, your unconscious makes a decision, and then you become consciously aware of an intention to act, and then your brain invents a rationalization for the motivations behind your actions.

If an 'agent' is something that makes choices so as to maximize the fulfillment of explicit desires, given explicit beliefs, then few humans are very 'agenty' at all. You may be agenty when you guide a piece of chocolate into your mouth, but you are not very agenty when you navigate the world on a broader scale. On the scale of days or weeks, your actions result from a kludge of evolved mechanisms that are often function-specific and maladapted to your current environment. You are an adaptation-executor, not a fitness-maximizer.

Agency is rare but powerful. Homo economicus is a myth, but imagine what one of them could do if such a thing existed: a real agent with the power to reliably do things it believed would fulfill its desires. It could change its diet, work out each morning, and maximize its health and physical attractiveness. It could learn and practice body language, fashion, salesmanship, seduction, the laws of money, and domain-specific skills and win in every sphere of life without constant defeat by human hangups. It could learn networking and influence and persuasion and have large-scale effects on societies, cultures, and nations.

Even a little bit of agenty-ness will have some lasting historical impact. Think of Benjamin Franklin, Teddy Roosevelt, Bill Clinton, or Tim Ferriss. Imagine what you could do if you were just a bit more agenty. That's what training in instrumental rationality is all about: transcending your kludginess to attain a bit more agenty-ness.

And, imagine what an agent could do without the limits of human hardware or software. Now that would really be something.

(This post was inspired by some conversations with Michael Vassar.)

76 comments

Even a little bit of agenty-ness will have some lasting historical impact. Think of Benjamin Franklin, Teddy Roosevelt, Bill Clinton, or Tim Ferriss.

Can you go into more detail about how you believe these particular people behaved in a more agenty way than normal?

I radically distrust the message of this short piece. It's a positive affirmation for "rationalists" of the contemporary sort who want to use brain science to become super-achievers. The paragraph itemizing the powers of agency especially reads like wishful thinking: Just pay a little more attention to small matters like fixity of purpose and actually acting in your own interest, and you'll get to be famous, rich, and a historical figure! Sorry, that is entirely not ruthless enough. You also need to be willing to lie, cheat, steal, kill, use people, betray them. (Wishes can come true, but they usually exact a price.) It also helps to be chronically unhappy, if it will serve to motivate your extreme and unrelenting efforts. And finally, most forms of achievement do require domain-specific expertise; you don't get to the top just by looking pretty and statusful.

The messy, inconsistent, and equivocating aspects of the mind can also be adaptive. They can save you from fanaticism, lack of perspective, and self-deception. How often do situations really permit a calculation of expected utility? All these rationalist techniques themselves are fuel for rationalization: I'm employing all the special heuristics and psychological tricks, so I must be doing the right thing. I've been so focused lately, my life breakthrough must be just around the corner.

It's funny that here, the use of reason has become synonymous with "winning" and the successful achievement of plans, when historically, the use of reason was thought to promote detachment from life and a moderation of emotional extremes, especially in the face of failure.

The paragraph itemizing the powers of agency especially reads like wishful thinking: Just pay a little more attention to small matters like fixity of purpose and actually acting in your own interest, and you'll get to be famous, rich, and a historical figure! Sorry, that is entirely not ruthless enough. You also need to be willing to lie, cheat, steal, kill, use people, betray them. (Wishes can come true, but they usually exact a price.) It also helps to be chronically unhappy, if it will serve to motivate your extreme and unrelenting efforts. And finally, most forms of achievement do require domain-specific expertise; you don't get to the top just by looking pretty and statusful.

How could you reliably know these things, and how could you make intentional use of that knowledge, if not with agentful rationality?

You can't. I won't deny the appeal of Luke's writing; it reminds me of Gurdjieff, telling everyone to wake up. But I believe real success is obtained by Homo machiavelliensis, not Homo economicus.

This is reminding me of Steve Pavlina's material about light-workers and dark-workers. He claims that working to make the world a better place for everyone can work and will eventually lead you to realize that you need to take care of yourself; that working to make your own life better, exclusive of concern for others, can also work and will eventually convince you of the benefits of cooperation; but that slopping around without being clear about whom you're benefiting won't work as well as either of those.

How can you tell the ratio between Homo machiavelliensis (HM) and Homo economicus (HE), considering that HM is strongly motivated to conceal what they're doing, and HM and HE are probably both underestimating the amount of luck required for their success?

How can you tell the ratio

fMRI? Also, some HE would be failed HM. The model I'm developing is that in any field of endeavor, there are one or two HMs at the top, and then an order-of-magnitude more HE also-rans. The intuitive distinction: HE plays by the rules, HM doesn't; victorious HM sets the rules to its advantage, HE submits and gets the left-over payoffs it can accrue by working within a system built by and for HMs.

My point was that both the "honesty is the best policy" and the "never give a sucker an even break" crews are guessing because the information isn't out there.

My guess is that different systems reward different amounts of cheating, and aside from luck, one of the factors contributing to success may be a finely tuned sense of when to cheat and when not.

Yeah, and the people who have the finest-tuned sense of when to cheat are the people who spent the most effort on tuning it!

I suspect some degree of sarcasm, but that's actually an interesting topic. After all, a successful cheater can't afford to get caught very much in the process of learning how much to cheat.

But I believe real success is obtained by Homo machiavelliensis, not Homo economicus.

Love the expression. :)

You also need to be willing to lie, cheat, steal, kill, use people, betray them.

This is false.

most forms of achievement do require domain-specific expertise; you don't get to the top just by looking pretty and statusful.

Yes. And domain-specific expertise is something that can be learned and practiced, by applying agency to one's life. I'll add it to the list.

You also need to be willing to lie, cheat, steal, kill, use people, betray them.

This is false.

If we are talking about how to become rich, famous, and a historically significant person, I suspect that neither of us speaks with real authority. And of course, just being evil is not by itself a guaranteed path to the top! But I'm sure it helps to clear the way.

I'm sure it helps to clear the way.

Sure. I'm only disagreeing with what you said in your original comment.

You also need to be willing to lie, cheat, steal, kill, use people, betray them.

This is false.

I would say 'overstated'. I assert that most people who became famous, rich, and a historical figure used those tactics. More so the 'use people', 'betray them', and 'lie' than the more banal 'evils'. You don't even get to have a solid reputation for being nice and ethical without using dubiously ethical tactics to enforce the desired reputation.

Personally, I find that being nice and ethical is the best way to get a reputation for being nice and ethical, though your mileage may vary.

Personally, I find that being nice and ethical is the best way to get a reputation for being nice and ethical, though your mileage may vary.

I don't have a personal statement to make about my strategy for gaining a reputation for niceness. Partly because that is a reputation I would prefer to avoid.

I do make the general, objective-level claim that actually being nice and ethical is not the most effective way to gain that reputation. It is a good default, and for many, particularly those who are not very good at well-calibrated hypocrisy and deception, it is the best they could do without putting in a lot of effort. But it should be obvious that the task of creating an appearance of a thing is different from that of actually doing a thing.

I radically distrust the message of this short piece. It's a positive affirmation for "rationalists" of the contemporary sort who want to use brain science to become super-achievers.

Interesting. Personally I read it as a kind of "get back to Earth" message. "Stop pretending you're basically a rational thinker and only need to correct some biases to truly achieve that. You're this horrible jury-rig of biases and ancient heuristics, and yes while steps towards rationality can make you perform much better, you're still fundamentally and irreparably broken. Deal with it."

But re-reading it, your interpretation is probably closer to the mark.

Sorry, that is entirely not ruthless enough. You also need to be willing to lie, cheat, steal, kill, use people, betray them.

Agency is still pretty absent there too. As it happens, I have something of an essay on just that topic: http://www.gwern.net/on-really-trying#on-the-absence-of-true-fanatics

It's funny that here, the use of reason has become synonymous with "winning"

I don't think anyone's arguing that "reason" is synonymous with winning. There are a lot of people, however, arguing that "rationality" is systematized winning. I'm not particularly interested in detaching from life and moderating my emotional response to failure. I have important goals that I want to achieve, and failing is not an acceptable option to me. So I study rationality. Honestly, EY said it best:

There is a meme which says that a certain ritual of cognition is the paragon of reasonableness and so defines what the reasonable people do. But alas, the reasonable people often get their butts handed to them by the unreasonable ones, because the universe isn't always reasonable. Reason is just a way of doing things, not necessarily the most formidable; it is how professors talk to each other in debate halls, which sometimes works, and sometimes doesn't. If a horde of barbarians attacks the debate hall, the truly prudent and flexible agent will abandon reasonableness.

No. If the "irrational" agent is outcompeting you on a systematic and predictable basis, then it is time to reconsider what you think is "rational".

In case anybody asks how I was able to research and write two posts from scratch today:

It's largely because I've had Ray Lynch's 'The Oh of Pleasure' on continuous repeat ever since 7am, without a break.

(I'm so not kidding. Ask Jasen Murray.)

If this actually works reliably, I think it is much more important than anything in either of the posts you used it to write - why bury it in a comment?

I don't know if it's the song or the placebo effect, but it's just written my thesis proposal for me.

Well, there's a piece of music easy to date to circa Blade Runner.

Maybe tomorrow I will try the Chariots of Fire theme, see what it does for me. :)

Hmmm. I wonder what else I've spent an entire day listening to over and over again while writing. Maybe Music for 18 Musicians, Tarot Sport, Masses, and Third Ear Band.

I just came across Tarot Sport; it's the most insomnia-inducing trance I've ever heard.

I liked that song but then ended up listening to the #2 most popular song on that site instead. It provided me with decidedly less motivation. ;)

I just listened to four seconds of that song and then hit 'back' in my browser to write this comment. 'Ugh' to that song.

It's largely because I've had Ray Lynch's 'The Oh of Pleasure' on continuous repeat ever since 7am, without a break.

Just listened to it. The first minute or so especially had an effect about as strong as a cup of coffee. A little agitating, but motivating.

How do we know that it's you writing, and not the music?

(Just kidding, really.)

Edit - please disregard this post

That is strange. I like the song though, thanks for passing it along. Like one of the other commenters, I will be testing out its effects.

Do you think if you listened to the song every day, or 3 days a week, or something, the effect on your productivity or peace of mind would dwindle? If not, do you plan to continue listening to it a disproportionate amount relative to other music?

ETA random comment: Something about it reminds me of the movie Legend.

In case anybody asks how I was able to research and write two posts from scratch today:

It's largely because I've had Ray Lynch's 'The Oh of Pleasure' on continuous repeat ever since 7am, without a break.

I don't believe this is really the cause, but I'm going to listen to it at work tomorrow just in case.

real agent with the power to reliably do things it believed would fulfill its desires. It could change its diet, work out each morning, and maximize its health and physical attractiveness. It could learn body language and fashion and salesmanship and seduction and the laws of money and win in every sphere of life without constant defeat by human hangups. It could learn networking and influence and persuasion and have large-scale effects on societies, cultures, and nations.

A lot of body language, fashion, salesmanship, seduction, networking, influence, and persuasion are dependent entirely on heuristics and intuition.

In the real world, those who have less access to these traits (people on the autistic spectrum, for example) tend to have a much harder time learning how to accomplish any of the named tasks. They also, for most of those tasks, have a much harder time seeing why one would wish to accomplish them.

Extrapolating to a being that has absolutely no such intuitions or heuristics, one is left with the question of what it actually wishes to do. Perhaps some of the severely autistic really are like this and never learn language, as it never occurs to them that language could be useful, and so they have no desire to learn it.

With no built-in programming to determine what is and is not to be desired, and no built-in programming for how the world works, how is one to determine what should be desirable, or how to accomplish what is desired? As far as I can determine, an agent without human hardware or software may be left spending its time attempting to figure out how anything works and figuring out what, if anything, it wants to do.

It may not even attempt to figure out anything at all, if curiosity is not rational but a built-in heuristic. Perhaps someone has managed to build a rational AI but neglected to give it built-in desires and/or built-in curiosity, and it did nothing, so it was assumed not to have worked.

Isn't even the desire to survive a heuristic?

A lot of body language, fashion, salesmanship, seduction, networking, influence, and persuasion are dependent entirely on heuristics and intuition.

Sure. But are you denying these skills can be vastly improved by applying agency?

You mention severe autistics. I'm not sure how much an extra dose of agency could help a severe autistic. Surely, there are people for whom an extra dose of agency won't help much. I wasn't trying to claim that agency would radically improve the capabilities of every single human ever born.

Perhaps you are reacting to the idea that heuristics are universally bad things? But of course I don't believe that. In fact, the next post in my Intuitions and Philosophy sequence is entitled 'When Intuitions are Useful.'

And, imagine what an agent could do without the limits of human hardware or software. Now that would really be something.

This is what I am reacting to, especially when combined with what I previously quoted.

Oh. So... are you suggesting that a software agent can't learn body language, fashion, seduction, networking, etc.? I'm not sure what you're saying.

I am saying that without heuristics or intuitions, what is the basis for any desires? If an agent is a software agent without built-in heuristics and intuitions, then what are its desires, what are its needs, and why would it desire to survive, to find out more about the world, to do anything? Where do the axioms it uses to conclude that it can modify the world, or to conclude anything at all, come from?

Our built-in heuristics and intuitions are what allow us to start building models of the world on which to reason in the first place, and removing any of them demonstrably makes it harder to function in normal society or to act normally. Things that appear reasonable to almost everyone are utter nonsense, and seem pointless, to those who are missing some of the basic heuristics and intuitions.

If all such heuristics are taken away (i.e., no limits of human hardware or software), then what is left to build on?

I'll jump in this conversation here, because I was going to respond with something very similar. (I thought about my response, and then was reading through the comments to see if it had already been said.)

And, imagine what an agent could do without the limits of human hardware or software.

I sometimes imagine this, and what I imagine is that without the limits (constraints) of our hardware and software, we wouldn't have any goals or desires.

Here on Less Wrong, when I assimilated the idea that there is no objective value, I expected I would spiral into a depression in which I realized nothing mattered, since all my goals and desires were finally arbitrary with no currency behind them. But that's not what happened -- I continued to care about my immediate physical comfort, interacting with people, and the well-being of the people I loved. I consider that my built-in biological hardware and software came to the rescue. There is no reason to value the things I do, but they are built into my organism. Since I believe that it was being an organism that saved me (and by this I mean the product of evolution), I do not believe the organism (and her messy goals) can be separated from me.

I feel like this experiment helped me identify which goals are built in and which are abstract and more fully 'chosen'. For example, I believe I did lose some of my values, I guess the ones that are most cerebral. (I only doubt this because, with a spiteful streak and some lingering anger about the nonexistence of objective values, I could be expressing this anger by rejecting the values that seem least immediate.) I imagine that with a heightened ability to edit my own values, I would attenuate them all, especially wherever there were inconsistencies.

These thoughts apply to humans only (that is, me), but I also imagine (entirely baselessly) that any creature without hardware and software constraints would have a tough time valuing anything. For this, I am mainly drawing on an intuition I developed that if a species were truly immortal, they would be hard-pressed to think of anything to do, or any reason to do it. Maybe some values of artistry or curiosity could be left over from an evolutionary past.

Depends what kind of agent you have in mind. An advanced type of artificial agent has its goals encoded in a utility function. It desires to survive because surviving helps it achieve utility. Read chapter 2 of AIMA for an intro to artificial agents.
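For concreteness, here is a minimal sketch (in Python) of the kind of utility-based agent AIMA's chapter 2 describes. The toy world model and all the names here are my own illustration, not anything from AIMA or this thread; the point is just that 'desiring to survive' can fall out of a hand-coded utility function rather than being a primitive drive.

    # A minimal utility-based agent, in the style of AIMA ch. 2.
    # The state dict, actions, and numbers are all illustrative.

    def utility(state):
        """The agent's only 'desires': a hand-coded utility function."""
        if not state["alive"]:
            return 0.0  # a dead agent accrues no further utility
        return float(state["resources"])

    def transition(state, action):
        """The agent's model of how each action changes the world."""
        new = dict(state)
        if action == "gather":
            new["resources"] += 1
        elif action == "risky_shortcut":
            new["resources"] += 3
            new["alive"] = False  # toy assumption: the shortcut is fatal
        return new

    def choose_action(state, actions):
        """Pick the action whose predicted outcome has maximal utility."""
        return max(actions, key=lambda a: utility(transition(state, a)))

    state = {"alive": True, "resources": 0}
    print(choose_action(state, ["gather", "risky_shortcut"]))  # -> gather

The agent 'wants' to survive only because the utility function it was handed pays nothing once it is dead; nothing in the loop itself chose that function.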

Precisely: that utility function is a heuristic or intuition. Further, survival can only be desired according to prior knowledge of the environment, so again a heuristic or intuition. It is also dependent on the actions the agent is aware it can perform (intuition or heuristic). One can only be an agent when placed in an environment, given some set of desires (heuristic), along with ways to measure the accomplishment of those desires, given a basic understanding of what actions are possible (intuition), and given whatever basic understanding of the environment is needed to be able to reason about it (intuition).

I assume chapter 2 of the 2nd edition is sufficiently close to chapter 2 of the 3rd edition?

I don't understand you. We must be using the terms 'heuristic' and 'intuition' to mean different things.

A pre-programmed set of assumptions or desires that are not chosen rationally by the agent in question.

edit: perhaps you should look up 37 ways that words can be wrong

Also, you appear to be familiar with some philosophy, so one could say they are a priori models and desires in the sense of Plato or Kant.

If this is where you're going, then I don't understand the connection to my original post.

Which sentence(s) of my original post do you disagree with, and why?

I have already gone over this.

And, imagine what an agent could do without the limits of human hardware or software. Now that would really be something.

Such an agent may not have the limits of human hardware or software, but it does require a similar set of restrictions and (from the agent's point of view) non-rational assumptions and desires; otherwise, in my opinion, the agent will not do anything.

It could learn and practice body language, fashion, salesmanship, seduction, the laws of money, and domain-specific skills and win in every sphere of life without constant defeat by human hangups.

The human hangups are what allow us to practice body language, fashion, etc., and what give us the desire to do so. If we don't have such hangups then, from experience, understanding such things is much harder, practicing them is harder, and desiring them requires convincing. It is easier to win if one is endowed with the required intuitions and heuristics to make practicing such things both desirable and natural.

a real agent with the power to reliably do things it believed would fulfill its desires

There is no accounting for preferences (or desires), meaning such things are not usually rationally chosen, and when they are, there is still a base of non-rational assumptions. Homo economicus is just as dependent on intuition and heuristics as anyone else. The only place it is different, at least as classically understood, is in its ability to access near-perfect information and to calculate exactly its preferences and probabilities.

edit: Also

You do not have much cognitive access to your motivations.

This is said as a bad thing when it is a necessary thing.

Such an agent may not have the limits of human hardware or software, but it does require a similar set of restrictions and (from the agent's point of view) non-rational assumptions and desires; otherwise, in my opinion, the agent will not do anything.

Desires/goals/utility functions are non-rational, but I don't know what you mean by saying that an artificial agent needs restrictions and assumptions in order to do something. Are you just saying that it will need heuristics rather than (say) AIXI in order to be computationally tractable? If so, I agree. But that doesn't mean it needs to operate under anything like the limits of human hardware and software, which is all I claimed.

The human hangups are what allow us to practice body language, fashion, etc., and what give us the desire to do so. If we don't have such hangups then, from experience, understanding such things is much harder, practicing them is harder, and desiring them requires convincing.

Sure, but I think a superintelligence could figure it out, the same way a superintelligence could figure out quantum computing or self-replicating probes.

There is no accounting for preferences (or desires), meaning such things are not usually rationally chosen, and when they are, there is still a base of non-rational assumptions.

Agreed. This is the Humean theory of motivation, which I agree with. I don't see how anything I said disagrees with the Humean theory of motivation.

This is said as a bad thing when it is a necessary thing.

I didn't say it as a bad thing, but as a correction. People think they have more access to their motivations than they really do. Also, it's not a necessary thing that we don't have much cognitive access to our motivations. In fact, as neuroscience progresses, I expect us to gain much more access to our motivations.

JohnH, I kept asking what you meant because the claims I interpreted from your posts were so obviously false that I kept assuming I was interpreting you incorrectly. I'm still mostly assuming that, actually.

an artificial agent needs restrictions and assumptions in order to do something.

You need to assume inductive priors. Otherwise you're pretty much screwed.
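A toy illustration of that point (my own sketch, not anything from the thread): Bayesian prediction is undefined until some prior is assumed, and different priors license different predictions from the very same data.

    # Beta-Bernoulli updating: predicted probability that the next flip
    # is heads, after seeing `heads` successes in `n` trials, under a
    # Beta(a, b) prior. The numbers are illustrative.

    def predict_next(heads, n, a, b):
        return (heads + a) / (n + a + b)

    heads, n = 7, 10  # observed: 7 heads in 10 flips
    print(predict_next(heads, n, a=1, b=1))      # uniform prior       -> ~0.667
    print(predict_next(heads, n, a=100, b=100))  # strong 'fair' prior -> ~0.510

    # With no prior at all (a = b = 0) and no data, the expression is 0/0:
    # there is no prediction to be had without assuming something first.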

The human hangups are what allow us to practice body language, fashion, etc., and what give us the desire to do so. If we don't have such hangups then, from experience, understanding such things is much harder, practicing them is harder, and desiring them requires convincing. It is easier to win if one is endowed with the required intuitions and heuristics to make practicing such things both desirable and natural.

Yeah, you probably do want to let the elephant be in charge of fighting or mating with other elephants, once the rider has decided it's a good idea to do so.

Intuitions are usually defined as being inexplicable. A priori claims are usually explicable in terms of axioms, although axioms may be chosen for their intuitive appeal.

although axioms may be chosen for their intuitive appeal.

Precisely.

You do not have the robust character you think you have, but instead are blown about by the winds of circumstance.

Am I wrong in taking this to be a one-liner critique of all virtue ethical theories?

I've been thinking about this with regards to Less Wrong culture. I had pictured your "deliberative thinking" module as more of an "excuse generator" - the rest of your mind would make its decisions, and then the excuse generator comes up with an explanation for them.

The excuse generator is primarily social - it will build excuses which are appropriate to the culture it is in. So in a rationalist culture, it will come up with rationalizing excuses. It can be exposed to a lot of memes, parrot them back and reason using them without actually affecting your behavior in any way at all.

Just sometimes though, the excuse generator will fail and send a signal back to the rest of the mind that it really needs to change something, else it will face social consequences.

The thing is, I don't feel that this stuff is new. But try and point it out to anyone, and they will generate excuses as to why it doesn't matter, or why everyone lacks the power of agency except them, or that it's an interesting question they'll get around to looking at sometime.

So currently I'm a bit stuck.

But try and point it out to anyone, and they will...

...act as predicted by the model.

I had pictured your "deliberative thinking" module as more of an "excuse generator" - the rest of your mind would make its decisions, and then the excuse generator comes up with an explanation for them.

I know this is a year or two late, but: I've noticed this and find it incredibly frustrating. Turning introspection (yes, I know) on my own internally-stated motivations more often than not reveals them to be either excuses or just plain bullshit. The most frequent failure mode is finding that I did [X], not because it was good, but because I wanted to be seen as the sort of person who would do it. Try though I might, it seems incredibly difficult to get my brain to not output Frankfurtian Bullshit.

I sort-of-intend to write a post about it one of these days.

I loved this, but I'm not here to contribute bland praise. I'm here to point out somebody who does, in fact, behave as an agent as defined by the italicized statement, "reliably do things it believed would fulfill its desires", which continues with "It could change its diet, work out each morning, and maximize its health and physical attractiveness." I couldn't help but think of Scott H. Young, a blogger I've been following for months. I really look up to that guy. He is effectively a paragon of the model that you can shape your life to live it as you like. (I'm sure he would never say that, though.) He actually referenced a Less Wrong article recently, and it's not the first time he's done it, which significantly increased my opinion of him. His current "thing" is trying to master the equivalent of a rigorous CS curriculum (using MIT's requirements) in 12 months. Only those in the Less Wrong community stand a good chance of not thinking that's pretty audacious.

Excellent clarion call to raise our expectation of what agency is and can do in our lives, as well as to have sensible expectations of our and others' humble default states. Well done.

The real question is: how big of an impact can this stuff make, anyway? And how much are people able to actually implement it into their lives?

Are there any good sources of data on that? Beyond PUA, The Game, etc?

Besides, in theory we want to discuss non-Dark Arts topics...

There are many topics that are relevant here that some have labelled 'Dark Arts'.

Think of Benjamin Franklin, Teddy Roosevelt, Bill Clinton, or Tim Ferris.

It's Tim Ferriss.

Either way, the guy's a moron. He's basically a much better-packaged snake-oil salesman.

People don't change their sense of agency because they read a blog post.

"In alien hand syndrome, the afflicted individual’s limb will produce meaningful behaviors without the intention of the subject. The affected limb effectively demonstrates ‘a will of its own.’ The sense of agency does not emerge in conjunction with the overt appearance of the purposeful act even though the sense of ownership in relationship to the body part is maintained. This phenomenon corresponds with an impairment in the premotor mechanism manifested temporally by the appearance of the readiness potential (see section on the Neuroscience of Free Will above) recordable on the scalp several hundred milliseconds before the overt appearance of a spontaneous willed movement. Using functional magnetic resonance imaging with specialized multivariate analyses to study the temporal dimension in the activation of the cortical network associated with voluntary movement in human subjects, an anterior-to-posterior sequential activation process beginning in the supplementary motor area on the medial surface of the frontal lobe and progressing to the primary motor cortex and then to parietal cortex has been observed.[167] The sense of agency thus appears to normally emerge in conjunction with this orderly sequential network activation incorporating premotor association cortices together with primary motor cortex. In particular, the supplementary motor complex on the medial surface of the frontal lobe appears to activate prior to primary motor cortex presumably in associated with a preparatory pre-movement process. In a recent study using functional magnetic resonance imaging, alien movements were characterized by a relatively isolated activation of the primary motor cortex contralateral to the alien hand, while voluntary movements of the same body part included the concomitant activation of motor association cortex associated with the premotor process.[168] The clinical definition requires “feeling that one limb is foreign or has a will of its own, together with observable involuntary motor activity” (emphasis in original).[169] This syndrome is often a result of damage to the corpus callosum, either when it is severed to treat intractable epilepsy or due to a stroke. The standard neurological explanation is that the felt will reported by the speaking left hemisphere does not correspond with the actions performed by the non-speaking right hemisphere, thus suggesting that the two hemispheres may have independent senses of will.[170][171]

Similarly, one of the most important (“first rank”) diagnostic symptoms of schizophrenia is the delusion of being controlled by an external force.[172] People with schizophrenia will sometimes report that, although they are acting in the world, they did not initiate, or will, the particular actions they performed. This is sometimes likened to being a robot controlled by someone else. Although the neural mechanisms of schizophrenia are not yet clear, one influential hypothesis is that there is a breakdown in brain systems that compare motor commands with the feedback received from the body (known as proprioception), leading to attendant hallucinations and delusions of control.[173]"

Coming back to this post, I feel like it's selling a dream that promises too much. I've come to think of such dreams as Marlboro Country ads. For every person who gets inspired to change, ten others will be slightly harmed because it's another standard they can't achieve, even if they buy what you're selling. Figuring out more realistic promises would do us all a lot of good.