I've read HPMOR and all the sequences that have been translated into Russian. I recently found that Google Translate has started working very well, so now I'm reading the original English LessWrong with it, and commenting too.



As for the many attack vectors, I would also add "many places and stages where things can go wrong" once AI becomes a genius social and computer hacker. (By the way, I've heard that most hacks are carried out not through computer exploits but through social engineering, because a human is a far less reliable and far harder-to-patch system.) From my point of view, the main problem is not even that the first piece of uranium explodes hard enough to melt the Earth; the problem is that there are 8 billion people on Earth, each with several electronic devices, and the processors (or batteries, for a closer analogy) are made of californium. Now you have to hope that literally no one out of 8 billion people causes their device to explode (this is much worse than hoping that no one among a mere 1 million wizards will hit on the idea of transfiguring antimatter, botulinum toxin, thousands of infections, nuclear weapons, strangelets, or things like "only top quarks" that cannot even be visualized), or that literally none of these reactions propagates as a chain reaction through all the processors (which are also connected to a worldwide network running on radiation) in the form of direct explosions or neutron beams, or that you manage to stop literally every explosive or neutron chain reaction. We can loosely model this as three probabilities for each of the 8 billion people, the probabilities that they do not fail on each of the three points; even if each is very high on average, we raise each of them to the power of 8 billion. Worse, these are all probabilities over some period of time, say a year, and over time it is not even the probabilities that grow but the interval needed to create AI that shrinks, so we get the difference between a geometric and an exponential progression.
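The "raise it to the power of 8 billion" point can be made concrete with a minimal numerical sketch. The per-person failure rate here is a made-up assumption purely for illustration:

```python
import math

# Illustrative numbers only -- the per-person failure rate is an assumption.
N = 8_000_000_000        # people, each with at least one "californium" device
p_safe = 1 - 1e-9        # hypothetical chance one person causes no failure in a year

# Chance that literally everyone stays safe for one year:
# computed in log space to avoid underflow from the huge exponent.
p_all_safe = math.exp(N * math.log(p_safe))
print(f"{p_all_safe:.2%}")   # about 0.03%
```

Even a one-in-a-billion annual failure chance per person, compounded across 8 billion people, leaves only a fraction of a percent chance that nobody fails in a given year.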
Of course, one can object that we should not average over everyone, that the count should be reduced for everything except the number of processors; but then the number of people who could interfere shrinks too, and the likelihood that one of them creates AI rises. And again, the problem is not that any fixed person's chance of creating AI increases, but that the process keeps getting easier, so more people have a real chance of doing it, which is why I still count over all people. Finally, one can say that civilization will react when it sees not just smoke but fire. But civilization is not adequate, generally speaking. It took no fire-prevention measures and did not react to the smoke. It also showed how it reacts with the example of the coronavirus. Only here, on top of "it's no more dangerous than the flu; the graph is exponential? never mind", "it's all a conspiracy and not a real danger", and "I won't get vaccinated", we will get "it's all science fiction / a cult", "AI is good", and so on.

I think it could have been written better; I found it a little stretched, especially in the beginning and middle (the ending looks very powerful), and it would also benefit from more references to already-known concepts like "lost purposes". At the same time, it looks like a very important post for instrumental rationality: epistemic rationality is well covered by the Sequences, but instrumental rationality seems to be what most people lack for actually achieving their goals, a more significant node in the tree (at least considering how badly it tends to be implemented). And this looks like one of the significant nodes for my own instrumental rationality, one whose understanding I was missing, both toward the upper nodes and the lower ones. Though I suspect other nodes of equal importance are still missing. (I'm confused that no one has written a comment like mine, and that there are so few comments at all.)

It occurred to me that looking at things through first-order logic could be the answer to many questions. For example, take the choice between measuring complexity by the number of rules or by the number of objects: the formulas of quantum mechanics do not predict some specific huge combination of particles; like all hypotheses, they constrain your expectations relative to the space of all hypotheses/objects, so complexity-by-rules and complexity-by-objects give the same answer. At the same time, limiting the complexity of objects should be the solution to Pascal's Mugging (the original articles don't link to a solution, if one exists); this is where the leverage penalty comes from. When you postulate a hypothesis, you narrow the space of objects. Initially there are far more than a googolplex of possible people, but you specify one particular googolplex as the axioms of your hypothesis, and that takes a corresponding number of bits, because in logic an object with identical properties cannot be a different object (and, if I'm not mistaken, quantum mechanics says exactly that), so each person in the googolplex must differ in some way, and to prove/indicate/describe this you need at least a logarithmic number of bits. And as long as you're human, you can't even formulate that hypothesis exactly, defining all the axioms under which your hypothesis holds, let alone gather enough bits of evidence to establish that they really do hold. But also, the hypothesis about any number of people is the conjunction of the hypotheses "there are at least n−1 people" and "there is one more person", so increasing its probability by a factor of a billion is literally equivalent to believing at least the part of the hypothesis in which there are a billion people affected by the master of the Matrix.
This can also be expressed as follows: every very unlikely hypothesis is the conjunction of two less complex, less unlikely hypotheses, and so on down until the pieces fit in memory. In other words, you must start with the more likely hypotheses, test them, and only then add new axioms, new bits of complexity. Or it is a version of the leverage penalty, only applied not to the probability of being at such a significant node, but to choosing from the space of hypotheses, where the hypothesis about a googolplex of people has a googolplex of smaller-number analogues. That is, judged by first-order logic, our programs assign unreasonably high priors to regular hypotheses, in fact infinite ones, even though you actually have to choose between two options for each bit, so the longer a particular bit string is, the less likely it should be. Of course, we have evidence that things behave regularly, but not all evidence points there, much less an infinite amount of it, since we haven't even tested all 10^80 atoms in our Hubble sphere, so our prior in favor of regular hypotheses will not be strong enough to outweigh even a googolplex.
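The description-length argument above can be sketched numerically. The numbers are illustrative, and the prior shown is the standard "each extra bit halves the probability" rule, not a quote from any of the posts being discussed:

```python
import math

def log2_prior(description_bits: float) -> float:
    """Description-length prior: log2 of the prior is minus the bits needed."""
    return -description_bits

# Distinguishing one of N distinct objects takes at least log2(N) bits,
# since in first-order logic identical objects are the same object.
N = 10**9                      # a billion distinct people
bits_per_index = math.log2(N)  # about 30 bits just to point at one of them
print(bits_per_index)

# A hypothesis postulating 10**(10**100) (a googolplex of) *distinct* people
# needs on the order of 10**100 * log2(10) bits just to index one of them,
# so its prior, 2**log2_prior(bits), shrinks far faster than any
# googolplex-sized payoff can grow.
```

This is the sense in which the leverage penalty falls out of counting bits: the hypothesis's stakes and its description length grow together.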

It seems to me that the standard conjunction-fallacy question contrasting "the probability of a USSR attack on Poland as a result of conflict X" with "the probability of a USSR attack on Poland" is inaccurate for this experiment, since it carries an implicit connotation: because the first version names reason X and the second names no Y or Z, the second reads as evaluating an attack with no reason at all; and if we then show the person their own answers, hindsight bias kicks in, like in that experiment with the swapped chosen photo. A more correct pairing, to avoid planting such subconscious false premises, would be "the probability of a USSR attack on Poland as a result of conflict X" versus "the probability of a USSR attack on someone as a result of any conflict between them." Though I'm not sure this would change the overall results of the experiment, if only because even with an explicit "some" instead of an implicit "no reason", something so multivariate and vague will still not look like a plausible plot; in the end vague = not detailed = implausible = unlikely = low-probability.
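Whatever the framing, the underlying rule is the same: a conjunction can never be more probable than either conjunct. A tiny sketch with hypothetical numbers:

```python
# Conjunction rule: P(A and B) <= P(A), regardless of how the question is framed.
# All numbers below are hypothetical, just to make the inequality concrete.
p_attack = 0.10                  # USSR attacks Poland, for any reason at all
p_via_X_given_attack = 0.30      # given an attack, it happens via conflict X
p_attack_via_X = p_attack * p_via_X_given_attack

assert p_attack_via_X <= p_attack  # adding a vivid detail can only lower probability
print(round(p_attack_via_X, 3))    # 0.03
```

The experimental puzzle is precisely that subjects rate the detailed conjunction *higher*, reversing this inequality.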

I liked the writing style. But it seems no one in the comments noted the obvious point that it's not "you control the aliens" but "the aliens control you" (although that sounds even crazier, like a freak occurrence, in everyday terms); in other words, you are in any case the simulation, but one whose results can predict the decision of another agent, including a decision based on a prediction, and so on. This removes all the questions about magic (although magic can be fun to think about). It does, however, raise a problem like that of a calculator which, instead of determining the result of "5 + 7", tries to determine what it would output for the query "5 + 7", and never gets around to actually computing the sum.

Oh, I didn't expect to see a link to this channel here. I watched it some time ago, and unfortunately none of the explanations helped me. Then, recently, I suddenly discovered that I understand everything perfectly.

Personally, April Fool's jokes annoy me, because I keep forgetting what date it is. On the other hand, these post-factum posts often make sense on a meta level. Not to mention that I'm not against the gaslighting in dath ilan, because when it's done deliberately to optimize for beneficial effects (and not like with Santa), it looks like a good exercise for your distrust/doubt/critical thinking. In this particular case, I would like to know how many people on LessWrong took it seriously, but that isn't visible from the comments; I can't tell in what proportions people here noticed the date, figured it out, or didn't want to admit it. At least I'll record my own reaction. At the beginning I met a set of terms in a strange combination, but since the post was upvoted, I assumed that perhaps this combination was established jargon, and/or that the post was otherwise good enough for the drawback to pay off. At the mention of capitalism, I thought that perhaps I was mistaken and there was some sympathy for some of Marx's ideas, the successful ones; or that this was again a strange use of the term. A little further on, I remembered hearing that during one of the influxes of new readers to LessWrong, a bunch of people appeared who wrote and upvoted meaningless combinations of terms from the Sequences. Then I hit the mention of dialectics and my confusion jumped to enormous heights, because I had once checked its proponents' explanations out of curiosity and concluded that beneath a huge amount of filler lies a misunderstanding of the principles of science (crackpot index) and a completely counter-rational opinion that contradictions exist in the territory, not on the map. In general, before the second point I exploded and, without finishing reading, went to the comments, looking for answers to my extreme confusion and trying to find out why the post had so many upvotes.
But in fairness, in addition to Yudkowsky's opinion on meaningless wisdom, I have read an article on the reception of pseudo-profound bullshit, about how Hegel's texts look equally deep if you add or remove a negation, or delete and rearrange words. (This is another very negative sign for dialectics, which as far as I know comes from Hegel.) People cannot strictly explain what such a text means, but they conclude from this that the writer commands terms from a bunch of fields unknown to the reader, and therefore must be very smart. Before reading that, I had a couple of cases where I initially perceived such word-sets as something possibly meaningful.

I just now understood why Eliezer Yudkowsky chose Harry Potter as the character with such qualities: he's a typical Wiggin!

It's funny that to understand the "open-eyed look", people lacked exactly that open-eyed look. Coincidentally, I looked into the comments and saw this one only after, on this reading, I finally stopped taking the post as just a nicely written fable. The thing is, some time ago I noticed that, for the first time in a very long while, I had looked inside myself instead of choosing the most harmonious of other people's opinions. This is similar to one of the posts where someone realizes for the first time in their life that they didn't like the taste of a food, only the social sense of status that eating it gave them. I also had literal difficulty inventing original plot twists and generally writing non-fan-fiction in my attempts at fiction. Then I recently started posting my notes here on LessWrong, and while thinking it over I realized how few good thoughts I have that I created myself rather than took from someone else. In fact, I almost never got more than a step, two at most, away from someone else's ideas. It reminds me of Yudkowsky's post I recently read about crossing the Rubicon, where he says that he really does think the way he writes, reaching great depths of recursion in reflection: not just experiencing an emotion, but thinking about the fact that he is experiencing it, whether he wants to experience it, whether he wants to want to experience it, and then likewise thinking about his thoughts about his emotions, and about his thoughts about those thoughts. It seems that with original thinking, and with attempts to go further than one step from other people's thoughts, one should do roughly the same. And I lack both. Perhaps it was this post and that similarity that gave me the idea. In general, only recently did I realize that with other people's ideas, only pride in one's erudition is appropriate, not pride in one's mind, because you did not come up with those ideas; they are not your thoughts.
So, already trying to generate thoughts by looking inside myself, or at least to move many steps beyond other people's thoughts, I read this post, and at the phrase "she did not know other people's words that could be repeated", something "clicked", as they say. P.S. I remember that on first reading I took this moment to mean that the girl didn't understand that when you write an essay you have to invent something yourself rather than retell Wikipedia or other people's articles, and that this became obvious to her only when she realized that no one in the whole world had written an article about each specific brick.

I really don't like this idea, from personal experience. I once thought: given my snail's pace at learning English (I studied it at school and already know it at a level where there's no pleasant feeling of novelty), at the current pace I'll finish in 5 years, plus overhead... OK, I'll come back in ten years, when I have a decent level of English. Though learning German is much more interesting, and I'll be learning English for a very long time in any case, and knowing languages speeds up learning new ones, especially since German and English are similar... I'll come back in 15 years. (I wonder what the probability is that the end of the world wouldn't have arrived by then?) Fortunately, now, 10 years later, there are online translators, and I had the "impertinence" not only to read but also to try to write through them. The SL4 rules mentioned are things that can be done quickly, some in a minute, some in a week; but learning a new language to a level of good literacy takes several years. That is too long. If the other points let you spend a little of your time to preserve better material for centuries and to save time for thousands of people, so that a minute or even a week is clearly worth it, then years are clearly not.
