I grew up in Russia, not in Silicon Valley, so I didn't know the other "people in our cluster", and unfortunately I didn't like to read, so I'm not familiar with many of the obvious background facts. Five years ago I read HPMoR, but unfortunately not the Sequences; I got to them only a couple of years ago, and then only the part that had been translated into Russian, since I couldn't read English fluently enough. But then I noticed that Google Translate had become much better than before and in most cases produces readable Russian text from English, so I could finally read the Sequences to the end and generally start reading and writing on LessWrong.
Now I post in Shortform any thoughts I haven't seen anyone else express. Since I haven't read many books, many of these concepts have probably been expressed by someone before me and I just haven't seen it; in that case I'd appreciate a link in the comments. Unfortunately, I have many thoughts that I wrote down even before LessWrong, but rather than rereading and editing them it's easier for me to write them afresh, so many such thoughts sit unpublished. And since I haven't been writing my thoughts down from birth, even more of them aren't recorded anywhere except in my head; here too, if I stumble upon them again, I will try to write them down and publish them.
Hmm. Judging from a quick look, it feels like I'm the only one who has enabled reactions on my shortform. I wonder why?
It occurred to me that LessWrong doesn't seem to separate post evaluations into two kinds: posts you want to promote as relevant right now, and posts you think will stay useful over the years. If there were such a rating, or such a reaction, then instead of a list sorted by karma, which includes posts that were only needed at a particular moment, you could get a list of posts that people find useful beyond their moment.
That is, a short-term post might be well written and genuinely worth discussing at the time, not just reporting news, so there would be no reason to downvote it, but it would be immediately obvious that it isn't something to keep forever. In some ways, introducing such a system would make Best Of easier. I also remember that when choosing which of the Sequences to include in the book, there were ratings on several scales besides karma. These could also be added as reactions, so that such scores could be given independently.
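A minimal sketch of how such a two-axis score might look as a data structure, with a "Best Of" selection that reads only the timeless axis (all names here are my own hypothetical illustration, not LessWrong's actual schema):

```typescript
// Hypothetical two-axis post score: one axis for "relevant right now",
// one for "still worth reading years from now". Illustrative only.
interface PostScore {
  topical: number;   // votes for current relevance
  timeless: number;  // votes for lasting usefulness
}

// A "Best Of" list would sort by the timeless axis alone, so a post
// that was only ever topical never crowds out enduring material.
function bestOf(scores: Map<string, PostScore>, limit: number): string[] {
  return [...scores.entries()]
    .sort(([, a], [, b]) => b.timeless - a.timeless)
    .slice(0, limit)
    .map(([postId]) => postId);
}
```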
Ah. I saw a post announcing that reactions have been added. I had just been thinking that this would be very helpful and might solve my problem. I've enabled them for my shortform. I hope people will no longer just downvote without saying why, and will use reactions instead.
On the one hand, I really like that on LessWrong, unlike other platforms, everything unproductive gets downvoted. But on the other hand, when you try to publish something yourself, it looks like a hellish black box that hands out positive and negative reinforcement for no discernible reason.
This completely chaotic reward system seems to be bad for my tendency to post anything at all on LessWrong. Just in the last few weeks that I've been using EverNote, it has counted 400 notes, and by a quick count I have about 1500 more sitting in Google Keep. Meanwhile, on LessWrong I have published only about 70 posts over the past year, that is, 6-20 times fewer, even though by EverNote's count ~97% of these notes belong to the "thoughts" category rather than something like shopping lists.
I tried literally following the one piece of advice given to me here and treating any score below ±5 as noise, but that didn't remove the effect. I don't know; if the ratings of the best posts here don't match my own ranking of my best posts, maybe I should publish a couple of genuinely terrible posts just to check whether they actually get rated extremely badly rather than well?
I must say, I wonder why I haven't seen speed reading and visual thinking mentioned here as among the most important tips for practical rationality. A visual image is 2+1 dimensional, while an auditory image is 0+1 dimensional; moreover, auditory images rely on sequential thinking, which people are very bad at, while visual thinking is parallel. And according to Wikipedia, the transition from subvocalized to visual reading should speed you up five (!) times, and likewise visual thinking should be five times faster than verbal thinking. If you can read and think five times more thoughts in a lifetime, that is an incredible difference in productivity.
Well, the same applies to using visual imagination instead of inner speech: there, too, you can work in pictures. (I don't know, maybe all of this was in Korzybski's books and my problem is that I didn't read them, although I definitely should have.)
Yudkowsky says that public morality should be derived from personal morality, and that personal morality is primary. But I don't think this is the right way to put it. In my view, morality is the domain of social relations that game theory talks about: how not to play negative-sum games, how to achieve the maximum sum for all participants.
And morality is independent of values. Or rather, each value system has its own morality; or, more accurately still, morality can work even between different value systems. Morality is primarily about questions of justice, though all sorts of extraneous things like god-worship sometimes get dragged under this human sentiment, so morality and justice may not be exactly equivalent.
And game theory answers the question of how to achieve justice. Justice may also matter to you directly, as one of your values, in which case you won't defect even in a one-shot prisoner's dilemma with no penalty. Or it may not matter to you, in which case you will defect whenever you don't expect to be punished for it.
In other words, morality is universal across value systems, but it cannot be independent of them. It makes no sense to forbid hurting someone who has absolutely nothing against being hurt.
What I mean is that following morality simply feels different from the inside than acting on your values: the former feels like an obligation and the latter like a desire; in one case you say "should" and in the other "want".
I've read "Sorting Pebbles into Different Piles" several times and never understood what it was about until it was explained to me. Certainly the sorters aren't arguing about morality, but that's because they're not arguing about game theory, they're arguing about fun theory... Or more accurately not really, they are pure consequentialists after all, they don't care about fun or their lives, only piles into external reality, so it's theory of value, but not theory of fun, but theory of prime.
But in any case, I think people might well argue with them about morality. If people can sell primes to the sorters, and the sorters can sell hedons to people, would it be moral to defect in a prisoner's dilemma, gaining 2 primes at the cost of -3 hedons to the other side? Most likely both sides would conclude that no, that would be wrong, even though it is "prime".
That you shouldn't kill people, even if it gets you the primeons you so desire, and they shouldn't destroy correct heaps, even if they enjoy watching the pebbles scatter.
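To make the arithmetic of that trade explicit, here is a toy calculation (the payoffs come from the numbers above; treating primes and hedons as comparable one-to-one units is my own simplifying assumption):

```typescript
// Toy model of the cross-species prisoner's dilemma above.
// Units: hedons for the human, primes for the sorter; comparing them
// one-to-one is an assumption made purely for illustration.
const gainToDefector = 2;  // the sorter gains 2 primes by defecting
const costToVictim = -3;   // the human loses 3 hedons

// The game-theoretic reading of morality asks about the total sum:
const totalSum = gainToDefector + costToVictim; // 2 + (-3) = -1

// A negative sum means defection destroys more value than it creates,
// which is why both sides would call it wrong even though it is "prime".
console.log(`total welfare change: ${totalSum}`); // -1: negative-sum move
```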
I added a link to this in a comment: https://www.lesswrong.com/posts/34Tu4SCK5r5Asdrn3/unteachable-excellence
In HPMOR, Draco Malfoy thinks that either Harry Potter was lucky enough to come up with a bunch of great ideas in a short period of time, or that for some unimaginable reason he had already spent a great deal of time thinking about how to do such things. The real answer to this false dilemma is that Harry simply read, as a child, a book whose author had invented all of this for the book's own needs.
In "How to Seem (and Be) Deep", Eliezer Yudkowsky says that the Japanese often portray Christians as bearers of incredible wisdom, while the West has the mirror-image archetype of the "Eastern sage". And the real answer is that the two cultures have large but very different stocks of meaningful ideas. So when one person meets another who immediately throws at him three meaningful and highly plausible thoughts he has never even heard of, and then does so again and again, the first person concludes that he is dealing with a genius.
I've also seen a number of books and fanfics whose authors seemed like incredible writing talents and whose characters seemed like geniuses, fountaining brilliant ideas. And each time it turned out that they really just came from a cultural background that was unfamiliar to me. I've generalized this: when you meet someone who spouts a string of brilliant ideas, you should conclude that it's almost certainly not that he's a genius, but that he's familiar with a memeplex you're unfamiliar with.
And hmm. It only just occurred to me, but this probably also explains that aura of power around characters who are familiar with a certain medium, and around people who are familiar with a certain profession (https://www.lesswrong.com/posts/zbSsSwEfdEuaqCRmz/eniscien-s-shortform?commentId=dMxfcMMteKqM33zSa). That is probably the point, and it means the feeling is not false: such a person really is elevated above mere mortals, because he has a whole stock of meaningful thoughts that mere mortals simply do not have.
When someone familiar with more memeplexes ponders something, he stands on a bigger pile of cached thoughts: not just on the shoulders of giants, but on the shoulders of a whole human pyramid of giants, so he can see much further than someone who looks out only from his own height, however big or small that may be.
I've read, including on LessWrong (https://www.lesswrong.com/posts/34Tu4SCK5r5Asdrn3/unteachable-excellence), that listening to those who failed is often more useful than listening to those who succeeded, but I seem to have missed any explanation of why. The reason is that there are 1000 ways to be wrong and only 1 way to do something right, so a story about success ought to be 1000 times longer than a story about failure: for the latter it is enough to describe one fatal mistake, while for the former you have to describe avoiding the whole thousand.
In practice, however, stories of failure and stories of success are likely to be about the same length, since people will note roughly the same number of factors in each. In the end you will still have to read about 1,000 stories either way; it's just that success happens 1,000 times less often, and the stories about it will be just as short.
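A rough back-of-the-envelope version of this argument (the independence of mistakes and the per-story information count are my simplifying assumptions, just to show the asymmetry):

```typescript
// Assume success requires avoiding N independent fatal mistakes, and
// each story, success or failure, mentions about k factors.
const N = 1000; // ways to be wrong (assumed)
const k = 1;    // factors you actually learn per story (assumed)

// A failure story pinpoints the mistake that killed that attempt, so
// covering all N mistakes takes about N / k failure stories.
const failureStoriesNeeded = N / k; // 1000

// A success story implicitly demonstrates avoiding all N mistakes, but
// only k of them get written down, so you still need about N / k of
// them, and successes are roughly N times rarer to begin with.
const successStoriesNeeded = N / k; // also 1000, but much harder to find

console.log({ failureStoriesNeeded, successStoriesNeeded });
```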
Does the LessWrong site check password strength the way Yudkowsky describes somewhere (I don't remember where)? And if not, why not? It doesn't seem particularly difficult to hook this up to a dictionary or something similar. Or is it not considered worth implementing because there's registration via Google?
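For what it's worth, a dictionary-based strength check is largely a solved problem. Here is a sketch of how a signup form could use Dropbox's open-source zxcvbn library; whether LessWrong's codebase does anything like this, and the threshold chosen here, are my assumptions:

```typescript
import zxcvbn from "zxcvbn"; // npm install zxcvbn @types/zxcvbn

// Returns an error message for weak passwords, or null if acceptable.
// The cutoff of 3 on zxcvbn's 0-4 scale is my arbitrary choice.
function validatePassword(password: string): string | null {
  const result = zxcvbn(password);
  if (result.score < 3) {
    // zxcvbn's feedback explains the weakness, e.g. that the password
    // appears in a common-passwords dictionary — the check suggested above.
    return result.feedback.warning || "This password is too easy to guess.";
  }
  return null;
}

console.log(validatePassword("password123"));                  // flagged as weak
console.log(validatePassword("correct horse battery staple")); // passes
```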