But why should that be bad if you could justify any experiment? Let's say you had enough readership and enough 'active' readership that quite a few people did the same thing you did.
Then 1. You're doing a lot of good, and that sounds like a really cool blog and pursuit actually.
And 2. You will need to raise the $/hour figure in your VoI calculation in order to pick and choose only the very highest-returning experiments.
Both interesting outcomes.
Thanks for clarifying that.
I should note that I am very interested in techniques for self-improvement, too. I am currently learning how to read. (Apparently, I never knew :( ) I'm also getting everything organized, GTD-style. (It seems a far less daunting prospect now than when I first heard of the idea, because I'm pseudo-minimalist.)
I am still surprised at the average LWer's reaction here, probably because the nature of 'volition on the level of people' isn't clear to me. Not something I expect you to answer; clarifying the distinction was helpful enough.
But your environment includes people, dude.
This shouldn't be a puzzle. Reinforcement happens whether or not you're aware of it. Why in the name of the FSM would you relinquish the power to consciously control what would otherwise happen subconsciously anyway?
How is that not, on the face of it, a paragon, a prototype of optimization? Isn't that what optimizing is, more or less: consciously changing what would otherwise be unconscious?
I'm confused, not only by the beginning of this comment, but by several others as well.
I thought being a LessWronger meant you no longer thought in terms of free will, since it's a naive theory of human behavior, somewhat like naive physics.
I thought so, anyway. I guess I was wrong? (This comment still upvoted for its excellent analysis.)
.... And here begins the debate.
What do we do? What do we think about this piece of freaking powerful magic-science?
I vote we keep it a secret. Some secrets are too dangerous and powerful to be shared.
.... And what about helping other people without their knowing you helped them?