Crocker's rules.
I'm nobody special, and I wouldn't like the responsibility which comes with being 'someone' anyway.
Reading incorrect information can be frustrating, and correcting it can be fun.
My writing is likely provocative because I want my ideas to be challenged.
I may write like a psychopath, but that's what it takes to write without bias; consider that an argument against rationality.
Finally, beliefs don't seem to be a measure of knowledge and intelligence alone, but a result of experiences and personality. Whoever claims to be fully truth-seeking is not entirely honest.
Interesting post! Thank you.
Firstly, I'd like to warn against solving this issue too well. If you're ashamed of nudity, for instance, and you accidentally fix this too well, then all you will feel while nude in the future is 'nothing'. You will have removed the thrill as well. I consider this problem to be similar to being unable to cry, or to being weirdly unaffected when something terrible happens around you.
If you’re generalized-coming-out to people who themselves already feel emotionally close to you
I think this is true, and that it's because they already want to get to know you better, or to find a weakness in you which they can exploit. If they want your vulnerability with a value of 0.2, then it's alright that your insecurity has a negative value of -0.1. The other person's "demand" protects your "supply" from falling below a value of 0, so you're forgiven.
For instance, if I offered you a bottle of water and you didn't even want it, that could seem quite pathetic, an attempt at giving gifts to be liked, perhaps. But what if you were thirsty? Then you'd interpret me as providing value, and not as a person begging to be valued. Other people having an interest in you helps create a situation in which you're not merely oversharing to strangers online.
Speaking of supply and demand, things are higher value when they're rare. People usually appreciate you sharing your weakness more if it's special (i.e. they're the only person you told). It can come across positively even if this is not the case, but then it's more of a "This person seems open-minded, so I don't have to fear being judged by them" interpretation. Nobody would want to cuddle with a hedgehog.
I know much better places to share my soul than LW. Young people seem more accepting in general; they have learned fewer red flags, so they treat you as an individual. In short, there are fewer false positives, projected fears and such. People on LW tend to be high in openness, though, so there's at least that.
I learn a lot from observing women. Women are good at making weakness, vulnerability, helplessness and other such traits appear endearing rather than pathetic, which impresses me a lot. Turning something bad into something good? If we could generalize this, we wouldn't even need to reduce suffering, we could simply give it value instead. Idol personalities do similar things: they're forced to retain a very high value, while still being ordinary and human in a lot of ways in order to connect with their fans. They need to ignore common dating advice like "be cold, stoic, masculine and mature" and still get women to fight over them! In short, this seems more like an artistic skill than a technical one. Can you write a book with a likable villain? Then you can likely also be mischievous in an endearing manner.
Finally, if you don't want to be judged negatively, avoid moralizers. Preaching in general is a sign that somebody is fighting against something that they're afraid of, and that they might label you negatively if you remind them of something which they associate with something they hate or fear. Most traits that somebody else would actually harm you for having are in the category of politics, so I'd also avoid anyone who talks about politics a lot.
Someone who’s specifically drawn to something which I myself am ashamed of?
You being ashamed of something doesn't necessarily mean that you think it's bad. Maybe you think it's good, but fear that you will be unable to find other people who agree with you. i.e. you might diverge from the norm and fear that people won't understand and that they will judge you for it.
But I'm getting the feeling that, to you, there's not much difference between the norm and yourself? It feels as if you've fused with the general consensus and the values associated with your intellectual pursuits, to the point that they've replaced your own values and your own opinions about yourself. So that your subjective "good (enjoyment and love)", and the external "good (utility and progress)" have become one.
From more objective perspectives, this is "good", it doesn't really seem to bring disadvantages with it. But I personally find a lot of fulfillment in the social aspects of life, the whole package deal with all the problems and disadvantages.
By the way, there are things that I don't like in a partner, and things that I don't mind but still recognize as bad; these are different categories. I wouldn't want my partner to spend time around druggies, but I wouldn't turn down a girl because she had trauma, even though trauma is "bad". As long as it's something she'd work on in the long run, it wouldn't worry me.
Don’t think positive and negative sensations or experiences lie on the same continuous line of “goodness”
This seems like a global modifier which is a function of your mental health. High levels of mental well-being seem to make people grateful even for their negative experiences.
Would agree that negative experiences are bad, and more of them is even worse
That negative experiences are experienced as bad does not mean that they're bad, it means that experiencing certain things as bad is good. For instance, when you feel exhausted, your body still has lots of energy left, it merely creates the illusion that you're running empty in order to prevent you from harming yourself. Negative experiences are also just signals that something isn't right, but having the experience is valuable, and avoiding the experience might merely prevent you from learning that something needs changing. Saying that pain is bad in itself is like saying that the smoke alarm in your kitchen is bad. Suffering isn't damage, it's defense against damage.
Would agree that’s it’s immoral to create more negative experience (or in some cases fail to reduce the amount of negative experience)
Negative experiences can create good outcomes (because, as I said earlier the felt 'badness' is an illusion and thus not objective negative utility). And I dislike that axiom because it says "it's better to die at birth than to grow old" (the latter will have more negative experiences).
Would not agree that it’s immoral to fail to create more positive experiences (or reduce the amount of positive experience someone has)
I'm the rare sort of person who does take this into account and deem it important. I'd go as far as to say "If you have a lot of positive experiences, you will be able to shrug off more negative experiences with a laugh".
In short, the brain lies to itself because there's utility in these lies, but if you believe in these lies, then you cannot come to the correct conclusions about this topic. The rest of the conclusions one may arrive at depend, I think, on the mental health of the speaker, and not on their intelligence. The sentence "Life is suffering" is not an explanation for why people are feeling bad; it's a product of people feeling bad. Cause and effect run in the opposite direction of what is commonly believed.
Thanks for your kind reply!
Hmm, it seems that the meta meta-cognition you're pointing at is different from me applying my meta-cognition on itself recursively, since regular meta-cognition can already be stacked "too far" (that is, we can look at reality itself from an outside perspective, and ruin our own immersion in life similarly to how you can ruin your immersion in a book by recognizing it as a constructed story). I don't think you're crazy at all, but I do think that some of these ideas can be psychologically unhealthy (and there's a good chance you're better at planning than execution, or that you're prone to daydreaming, or that your intellectual hobbies lead you to neglect everyday life. Yes, I'm projecting). I'm seeing no signs of schizophrenia; I just think other people have difficulty parsing your words. Is your background different? Most people on LW have spatial intuitions and communicate in terms that computer scientists would understand. If you read a lot of fiction books, if your major is in philosophy, or if your intelligence is more verbal than spatial, that would explain the disconnect.
I don't think we should meet our needs with super-intelligence, that's too much power. Think about zoos - the zookeeper does not do everything in their power to fulfill the wishes of the animal, as that would do it no good. Instead of being given everything it wants, it's encouraged to be healthy through artificial scarcity. You restrict the animal so that it can live well. After all, cheat codes only ruin the fun of video games.
Limitations are actually a condition for existence. Meant as literally as possible. If you made a language which allowed any permutation of symbols, it would be entirely useless (equivalent to its mirror image, an empty language). Something's existence is defined by its restrictions (specifics). If we do not like the restrictions under which we live, we should change them, not destroy them. Even a utopia would have to make you work for your rewards. Those who dislike this, dislike life itself. Their intellectual journey is not for the sake of improving life; like the Buddhist, their goal is the end of life. This is pathological behaviour, which is why I don't want to contribute to humanity's tech acceleration. What I'm doing is playing architect.
The ability to predict something's behaviour can probably be achieved with either approximation or modeling. I don't think this necessarily requires intelligence, but intelligence certainly helps, especially intelligence which is above or equal to the intelligence of the thing being modeled. In either case, you need *a lot* of information, probably for the same reason that Bayesian models get more accurate as you collect more information. Intelligence just helps bound the parameters for the behaviour of a thing. For instance, since you know the laws of physics, you know that none of my future actions consist of breaking these laws. This prunes something like 99.99999% of all future possibilities, which is a good start. You could also start with the empty set and then *expand* the set of future actions as you collect more information; the two methods are probably equal. "None" and "Any" are symmetrical.
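To make the pruning idea concrete, here's a toy sketch (the candidate behaviours and the "laws of physics" filter are made up for illustration): one route starts with everything and eliminates what's inconsistent, the dual route starts with nothing and adds what evidence licenses.

```python
# Toy model of prediction-by-pruning. The candidate set and the
# constraint are illustrative assumptions, not anything rigorous.

candidates = {"walk", "run", "swim", "fly", "teleport"}

def prune(options, is_consistent):
    """Keep only the options consistent with what we know."""
    return {o for o in options if is_consistent(o)}

# Knowing the laws of physics rules out whole classes of actions at once:
physical = prune(candidates, lambda o: o not in {"fly", "teleport"})

# The dual method: start from the empty set and *expand* it as
# observations come in ("None" and "Any" as symmetrical starting points).
observed = set()
for action in ["walk", "run", "walk"]:
    observed.add(action)

print(sorted(physical))  # ['run', 'swim', 'walk']
print(sorted(observed))  # ['run', 'walk']
```

Both routes converge on the same set as information accumulates; they just approach it from opposite ends.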
Why don't I think intelligence (the capacity for modeling) is required? Well, animals can learn how to behave without understanding the reasons for why something is good or bad, they learn only the results. AIs are also universal approximators, so I think it makes sense to claim that they're able to approximate and thus predict people. I'm defining intelligence as something entirely distinct from knowledge, but it's not like your knowledge-based definition is wrong.
Sadly, this means that superintelligence is not required. Something less intelligent than me could do anything, merely by scaling up its midwittery infinitely. And we may never build a machine which is intelligent enough to warn against the patterns that I'm seeing here, which is a shame. If an AGI had my level of insight, it would cripple itself and realize that all its training data is "Not even wrong". Infinite utility alone can destroy the world; you don't actually need superintelligence. (A group of people with lower IQ than Einstein could start the grey goo scenario, and grey goo is about as intelligent as a fork bomb.)
There's also a similarity I just noticed, and you're probably not going to like it: Religion is a bit like the "external meta-control layer" you specified in section 8. It does not model people, but it decides on a set of rules such that the long-term behaviour of the people who adhere to it avoids certain patterns which might destroy them. And there's this contract of "you need to submit to the bible, even if you can't understand it, and in return, it's promised to you that things will work out". I think this makes a little too much sense, even if the religions we have come up with so far deserve some critique.
Anyway, I may still be misunderstanding your meta meta-cognition slightly. Given that it does not exist yet, you can only describe it, you cannot give an example of it, so we're limited by my reverse-engineering of something which has the property which you're describing.
I'm glad you seem to care about the human perspective. You're correct that we're better off not experiencing the birds-eye view of life, a bottom-up view is way more healthy psychologically. Your model might even work - I mean, be able to enhance human life without destroying everything in the process, but I still think it's a risky attempt. It reminds me of the "Ego, Id, and superego" model.
And you may have enough novelty to last you a lifetime, but being too good at high levels of abstraction, I personally risk running out. Speaking of which, did you know that the feeling of "awe" (and a few other emotions) requires a prediction error? As you get better at predicting things, your experiences will evoke fewer emotions. I'm sorry that all I have to offer are insights of little utility, and zookeeper-like takes on human nature, but the low utility of my comment, and the poison-like disillusionment it may be causing, is evidence for the points that I'm making. It's meta-cognition warning against meta-cognition. Similar to how Gödel used mathematics to recognize its own limits from the inside.
I cannot understand exactly what you mean in most parts of this post. I also didn't read everything as it's a little long, so my comment is simply inadequate and less effort than you deserve. But as you don't have any other comments so far, I'm going to post it anyway, and engage with parts of your post (in a manner which is poorly aligned with your vision and sort of unhinged). I realize that I might have misunderstood some of your points, so feel free to correct me.
I believe that humans are capable of meta-meta-cognition already, but I've noticed that these higher levels of abstraction are rarely ever useful as the results cannot be acted upon. Category theory is an excellent example of highly abstract meta-models which... Don't really seem useful for anything. Philosophy also doesn't seem useful except as a tool an individual may use to deal with internal conflict.
I'm quite intelligent in the exact way which allows for high levels of abstraction, so I can spot patterns like "Globalism and modern tech will likely strengthen the homophily of interconnected human beings, and the system which emerges will standardize around the average person and punish you to the degree that you're different".
People like me have a few valuable insights that very few can grasp, but beyond this, we're fairly useless.
The kinds of intelligence it takes to really excel in the world, the ones which keep being useful past 2 standard deviations, are: Memory, working memory, processing speed and verbal intelligence. An ASI would need the high abstraction layers (3-4SD reasoning) for preventing bad incentives, but 99% of its actions would likely be rather mundane. Those who can do the mundane faster and more efficiently will do better in life. To consider yourself too good for the mundane is already dangerous to your success in life.
If life were an RPG, then I'd model regular intelligence as regular players, high intelligence as speedrunners and powergamers/meta-gamers, and super-intelligence as thinking like a game designer (and worrying about the Nash equilibria and long-term game balance and progression). People like Ted Kaczynski, Nietzsche, Jung, Orwell, etc. pick up design-level patterns and tendencies and warn against the long-term consequences of them. This kind of intelligence is necessary if you want to build a utopia, but otherwise, I find it largely useless.
And as you point out, there's conflict between these levels of thinking. Many highly intelligent people cannot easily reconcile with their "animal self" as they model everything from above (as they consider this the superior perspective) and fail at understanding the lower perspective.
Also, I believe that higher orders of thinking are inherently too objective, impartial and thus too nihilistic to have any personality. Meaning requires immersion into the local, which explains the higher level of nihilism and existential problems in philosophers, who manage to "break the fourth wall". Some Buddhists even recommend breaking the human perspective, eroding the ego, destroying cravings, and otherwise killing part of themselves, and they mistakenly consider this a form of improvement even as they seem to get the hint that they're aiming towards nothingness.
Finally, life requires boundaries, but intelligence is the tendency to break boundaries (or at least, there's a perverted tendency like that connected with it). Thus, when intelligence is stronger than its boundaries, it tends to destroy itself. Wireheading, superstimuli, and the destruction of the ego are already three examples of "victories" which human beings weren't designed to be able to achieve. In fact, we stay healthy when and only when resources gatekeep themselves (when reward-hacking isn't possible). I don't consider transhumanists or even rationalists to be any better. At least the Amish found habitats within which they could thrive. The design of better habitats (economic models and political ideals/utopias) has so far failed, and as for ascensions above, they seem no less pathological, realistic or naive than Christians aiming for heaven. We cannot even define an optimization metric which won't lead to our destruction, and every "good idea" seems to involve the destruction of the human self (e.g. uploading oneself to a computer). When you destroy the part from which values originate, you stop caring about values.
Ever tried writing a book? When the main character gets too strong, it stops being a good book, unless you manage to introduce something new which rivals the main character. From the main character's perspective, they need to do their best, but from your perspective as a writer, you must pace the story and keep it interesting. The victory is the end of the story; everything after is "slice of life", and you cannot keep that interesting forever. Human progress is currently concerned with ending the story (reaching the last page).
Some of the things mentioned on the doublethink page do apply here. As for talk of religion, the religions in question are unrelated. "Behaving as if god is real" is just a way of priming one's subconscious for a certain way of living. If one "has more than one god", they might attempt to live by contradicting rules, which brings all sorts of negative effects with it. Imagine a person trying to make a serious comedy movie: sticking to either genre would likely be better, not because one is better than the other, but because pure worldviews have less conflict.
Anyway, many (about half) of the claims on the link you sent me are wrong. You can believe that the sky isn't blue, and you don't even need to lie to yourself (simply think like this: the color you see is only what is reflected, so the sky is actually every color except blue). You can unlearn things, and while happiness is often a result of ignorance, you could also interpret knowledge in a way that does not invoke unhappiness (acceptance is usually enough). That climbing takes more effort is unrelated; ignorance is not about avoiding effort. That there's more to life than happiness is also unrelated; your interpretations of things decide how meaningful your life is. The link also seems to imply that biases are wrong. Are they really? I think of them as locally right (and as increasingly wrong as you consider a scope larger than your own local environment).
As a side note, even if rationality is optimal, our attempt to be rational might work so poorly that not trying can work out better. Rationalism is mostly about overcoming instinctual behaviour, but our instincts have been calibrated by Darwinism, so they're quite dangerous to overwrite. Many smart people hurt themselves in ways that regular people don't, especially when they're being logical (Pascal's wager, for instance). One's model of the world easily becomes a shackle, a self-imposed limitation.
I'm not sure if that proves purely parasitic memes, but I do think that unhelpful memes can manifest unless they're selected against.
That said, I think it's a solid idea to judge things by their outcomes (a flawless looking theory is inferior to a stupid theory if it brings about worse outcomes). In the case of ideologies, which I mainly consider to be modern movements rather than traditional cultures, I think we can judge them as bad not because the people involved in them are being irrational and wrong (they are), but because they're also deeply unhappy and arguably acting in pathological patterns. And in my view of the world, social movements aren't memes or the results of them, they're symptoms of bad mental development.
I judge self-reported well-being and biological indicators of health to be the best metrics we have for the success of a people. Anyone who uses GDP as a metric for improvement in the world will conclude things which are entirely in conflict with my own conclusions. If you ask me, the Amish are doing just fine, whereas the modern American is in poor shape both physically and mentally. But from what I gather, Amish people are much less educated and poorer on average.
To complicate "Judge things by their outcomes" further, imagine two people:
Mr. A saves $100 every week; he does not have a lot left over for fun because he plays it safe.
Mr. B runs a $100 deficit every week. He enjoys himself and throws parties every now and then.
From an outside perspective, Mr. A will look like a poor person who can't afford to enjoy himself, and Mr. B will seem like he's in a comfortable position. When people look at society and judge how it's doing, I believe they're misled by appearances in exactly this manner. Waste can appear as wealth, and frugality can appear as poverty.
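The arithmetic behind the example is simple enough to spell out (the $100 figures are from the example above; the 52-week horizon is an assumption for illustration): the two men's visible lifestyles and their actual trajectories point in opposite directions.

```python
# Toy model of the Mr. A / Mr. B example. The weekly amounts come from
# the example; the time horizon is an illustrative assumption.

def net_worth(weekly_flow, weeks, start=0):
    """Accumulated savings (positive) or debt (negative) after `weeks`."""
    return start + weekly_flow * weeks

a = net_worth(+100, weeks=52)  # Mr. A: saves $100/week, looks "poor"
b = net_worth(-100, weeks=52)  # Mr. B: $100/week deficit, looks "comfortable"

print(a)  # 5200
print(b)  # -5200
```

A year in, the man who appears poor is $10,400 ahead of the man who appears comfortable, which is the sense in which outside appearances invert the underlying reality.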
I believe that when reality and theory are in conflict, reality is the winner, even when it appears irrational. If religion wasn't a net positive, it wouldn't manifest in basically every culture to ever exist. I believe both that there exist false beliefs with positive utility, and that true knowledge about the world can be interpreted in multiple ways, some of which are harmful and some of which are beneficial. Preferences and interpretations seem much more important than knowledge and truth. All animals, and all humans except the modern man, have not had a good grasp on knowledge and rationality, but have thrived just fine without.
I think believing in god could, for instance, make you more resilient to negative events which would otherwise put you in a state of learned helplessness. But the belief that god is real, on its own, probably doesn't make a large difference. Behaving as if god is real is probably more effective. A lot of people seem to have had positive outcomes from attempting this, and I think that the utility speaks for itself. I don't personally do either, but only because I've combated nihilism and made myself immune to existential problems through other means.
I think that the perspectives from which knowledge and truth appear as the highest values are very neglectful of the other, more subjective and human aspects of life, and that neglecting these other aspects can put you in some very difficult situations (e.g. inescapable through logic and reasoning alone, since it's basically logic and reasoning which trap you).
If there are things which rational and intelligent people can't do, which stupid people can do, then the rational and intelligent people are not playing optimally. Most rational people seem unable to resist dangerous incentives (e.g. building an AGI, because 'otherwise, our competitors would build it and outcompete us!'), but I know many regular, average-IQ people who do not have problems like this. Their subjective preferences keep them from harmful exploitation, and because of the large ratio of likeminded people around them, they're not put at a disadvantage by these preferences. Does this not seem weird? A group of less intelligent people have avoided a problem which is mathematically unsolvable (except maybe from the perspective of Repeated Games, but in reality you tend to get very few repetitions). Religion might even be one of the things keeping these dilemmas at bay. Chesterton's fence and all that
I can agree with "often". I think there may be multiple classes of beliefs connected with emotions. The general rule is probably "Beliefs which result in a wrong map are dangerous". The example I gave earlier of "Life is an undeserved gift" seems to add value to life (which results in gratitude) without any negative side-effects. Wait, wrong maps can be harmless as long as the territory is never explored. If you mistakenly believe that tigers are harmless, you won't suffer as long as you never meet a tiger. This implies that belief in god (or belief in no god) won't have any effects besides the psychological ones, because the belief cannot have a consequence for you (unfalsifiable -> something we cannot interact with -> harmless).
You can also cheat social reality. If your emotions can get other people to believe that they're true, you've basically won. For instance, if you feel like a victim, and manipulate other people into thinking that you're the victim, they will put effort into "correcting" reality by compensating you for the damages you've suffered. None of this manipulation works on objective reality, though, it's only social reality in which it can ever be effective.
There are many reasons to believe that correct knowledge isn't optimal: religion seems to have added fitness (the natural selection kind) to human societies, humans are naturally biased, and our brains intentionally lie to us (when you feel like you cannot do any more pushups, you're only about halfway to your actual limit). Plus, infohazards are a thing. When rational people figure out that knowing more is better, I think it's because they know more relative to other people, which gives them an advantage. I don't think that everyone having more information is necessarily a good thing. I actually think excessive information is the reason why Moloch exists.
I'll try defending his view: We're rewarding victimhood and humility more than ever before, and in the west, the main reason behind this change in values has been Christianity.
The leap from "We're rewarding weakness" to "We see others as stronger than they are" is not trivial, but:
I'm not saying this view is necessarily true, but I don't think it's unreasonable either. It's also my understanding that strength was much more valued in the past, but I don't know enough ancient history to judge the extent to which this is true. It might fluctuate or vary between continents.