Apparently LW does a great job of refining rationality and dissolving confusions. But is it helpful for anything apart from designing Friendly AI, apart from a purely academic treatment of rationality? I'm currently unable to benefit from what I have read on LW so far; it has actually made me even more unproductive, to the extent that I get nothing done anymore. Let me explain...

You have to know that I'm still in the process of acquiring a basic education. And when I say basic, I mean basic. Since I got almost no formal education, what I do know (or know about) is largely at a very low level, yet I am plagued by problems that themselves require the intellect and education of the folks here on LW. The trouble is that I still lack most of the skills, tools, and requisite know-how, while the problems in question concern me all the same. This often causes me to get stuck: I can't decide what to do. It also doesn't help that I am the kind of person who is troubled by problems others probably don't even think about. An example from when I was much younger (around the age of 13) is when I was troubled by the fact that I could accidentally squash insects when walking over the grass in our garden. Since I have never been a prodigy, far from it, this was an effectively unsolvable problem at the time, especially since I am unable to concentrate for very long and similar problems accumulate in my mind all the time.

So what happened? After a period of paralysis and distress, as often happens, I simply became reluctant and unwilling, angry at the world. I decided that it was not my fault that the world is designed like that, and that I am not smart enough to solve the problem and do what is right. I finally managed to ignore it. But this happens all the time, and the result is never satisfactory. The process too often ends in my simply ignoring the problem or becoming unwilling to do anything at all. What I'm doing is not effective, it seems; it has already stolen years of my life in which I could have learnt mathematics or other important things, or done what I would have liked to do. You might wonder: shouldn't this insight cause me to ignore subsequent problems and just learn something, or do what I want to do, something more effective? Nope. It is exactly the kind of mantra that LW teaches that always makes me think about a problem rather than ignore it and pursue my goals: namely, that the low probability of a certain event might be outweighed by the possible positive or negative 'utility' the problem implies, especially ethical considerations. What could happen if I just ignore it and pursue another goal instead?

It's partly the choice that is killing me: do X or Y, or continue thinking about either doing X or Y, or maybe search for some superior unknown-unknown activity Z? For how long should I think about a decision, and how long should I think about how long I should think about it? Maybe the best analogy is browsing Wikipedia on a subject that is unknown to you and over your head: you click the first link to a page that explains a term you don't know, then repeat that process until you end up with ten additional problems on an entry that is only vaguely relevant to the original problem you tried to solve. The problem is still there, and you have to decide whether to ignore it, pursue it further, or think about what to do.

Recently a blood vessel cracked in my eye. Nothing to worry about, but I searched for it and subsequently became worried whether something like that could happen in my brain too. It turned out that about 6 out of 100 people are predisposed to such brain aneurysms, especially people with high blood pressure. Now, I might have somewhat abnormal blood pressure, and additional activity might make some blood vessel in my brain leak. Should I stop doing sports, should I even stop thinking too much because it increases the blood circulation in my brain (I noticed that I hear my blood flow when thinking too hard)? But how can I decide that without thinking? So I looked up how to check whether I was predisposed, and it turned out that all the tests are too risky. But maybe it would be rational to stop doing anything that could increase my blood pressure until there are less risky tests? And so I lost a few more days without accomplishing anything I wanted to accomplish.

How I feel about LW

LW makes me aware of various problems and tells me how important it is to do this or that, but it doesn't provide the tools to choose my instrumental goals. Thanks to LW I learnt about Solomonoff induction. Great...fascinating! But wait, I also learnt that there is a slight problem: "the only problem with Solomonoff induction is that it is incomputable". Phew, thanks for wasting my time! See what I mean? I'm not saying that there is something wrong with what LW is doing, but people like me are missing some mid-level decision procedures for dealing with all the implications. I wish LW would also teach usable rationality skills, exemplifying the application of rationality to, and the dissolving of, real-life problems via the breakdown of decision procedures.

Take for example some of the top-scoring posts. I intuitively understood them, agreed, and upvoted them. My initial reaction was something along the lines of "wow, great, those people think like me but are able to write down all I thought to be true." Yes, great, but that doesn't help me. I'm not a politician who's going to create a new policy for dealing with diseases. Even if I were, that post would be completely useless, because it is utopian and not implementable. The same could be said about most other posts: awesome, but almost completely useless when it comes to living your life. 'Confidence levels inside and outside an argument' was a really enlightening post, but it only made me even more uncertain. If there is often no reason to assume very low probabilities, then I'm still left with the very high risks of various possibilities - just that they suddenly became much more likely in some cases.

The problem with LW is that it tells me about those low-probability, high-risk events, but I don't know enough to trust myself to overpower my gut feeling and my urge to do other things. I'd like to learn math etc., but maybe I should just work as a baker or road builder to earn money and donate it to the SIAI? Maybe I should read the Sequences to become certain enough to be able to persuade myself? But maybe I should first learn some math to be able to read the Sequences? But maybe I don't need that, and would waste too much time learning math when I could be earning money? And how do I know what math is important without reading the Sequences? And what about what I really want to do, intuitively, should I just ignore that?

Re: blood vessels. If by "cracking a blood vessel in your eye" you mean you saw a red bloody spot in your eye when you looked in the mirror, it lasted a little while and then went away, and there was no loss of vision or pain or anything like that, then this is a subconjunctival haemorrhage. It's usually caused by minor trauma like sneezing really hard or rubbing your eyes too hard; it is perfectly normal, and I've had one myself. As far as I know it is not related at all to the pathological processes behind aneurysms in the brain. Sometimes SCHs can be caused by hypertension, which is generally bad, but you can easily check your BP with a BP cuff (there are usually free ones floating around pharmacies and places like that), and even if your BP was mildly elevated, it wouldn't be anything a third of the country doesn't also have. If your blood pressure is below 140/90 there is no great medical evidence for bothering too hard to bring it even lower (unless you have diabetes or something like that).

Re: insects - look up Jainism. It's a religion one of whose tenets is that you must not hurt any being in any way, including insects, and its adherents have developed a lot of methods for avoiding accidental harm. If your utility function really includes a term for this, drawing on the Jains' two millennia of expertise is your best bet.

As a more general solution to your problem, I would suggest reading the Sequences. If you have to, stop reading new LW posts and just read the Sequences. There is no reason not to read the Sequences. The Sequences are your friends. Everyone loves the Sequences. Do not taunt the Sequences.

...seriously, most new LW posts are either advanced extensions of Sequence material, or fluff. The Sequences are where you should really go if you're looking for a foundation for using probability in your life. Reading the Sequences looks daunting by sheer word count, but it's not like trying to read a calculus textbook. The Sequences are some of the most engaging, enjoyable things I have ever read. I think I finished them all within two weeks of finding the blog (to be fair, there were fewer of them at that point) and when I finished, I lay down and wept that there were no more sequences to read (not really). They're that good. People who keep complaining about having to read the Sequences don't realize how lucky they are that they have the opportunity to read them for the first time and get that much low-hanging fruit in a single go. They are that good.

Eliezer's Intro to Bayes' Theorem and (especially) his Technical Explanation of Technical Explanation should really be counted as part of the Sequences for your purposes. If it's math you're dreading, consider the fact that even I read this stuff, and I have the mathematical ability of a random rock. All the math in the Sequences and on yudkowsky.net can be skimmed once you have a good idea of the concepts behind it (someone will yell at me here and say no, it can't, but I think people who are good at math underestimate the ability of people who are not good at math to conceptualize the math to the point where they don't need every single equation -- as long as they honestly try to do this and don't just pretend it doesn't exist). Or, if math really is the only thing preventing you from reading the Sequences, go ahead and pretend it doesn't exist and you'll still get a treasure trove out of them.

consider the fact that even I read this stuff, and I have the mathematical ability of a random rock

Seriously, you are too smart to have any trouble acquiring an understanding of mathematics if you make a serious effort. Just read the textbooks, starting at an appropriate level. Given how you characterize your skill, there's probably some low-hanging fruit there for you. (But it's possible that you won't be able to enjoy the process. I know I would know less math if I didn't have something to protect.)

A fact I didn't appreciate before encountering this whole AI-related craze followed by overcomingbias followed by lesswrong is that it's possible to master an arbitrary field of expertise by systematically teaching yourself its skills, even if it's completely dissimilar to all you've ever known.

"Learn math" is kind of a broad imperative. I know the math that's common to many different applications, like arithmetic and algebra and a bit of calculus, but after that it becomes so fractured that even when I learn how to solve one specific problem in a specific field, I never encounter that problem or field again.

If there were a specific cause for which I needed math, I would force myself to learn the math relevant to that cause, but just "learn topology, who knows when you might need it?" has never been very convincing to me.

I've studied a little decision theory, since that seems to be the form of math most relevant to Friendly AI, but so far I've found it frustrating and hard to treat with suitable rigor. If anyone wants to recommend an unusually good textbook, preferably online, I suppose I'd take suggestions.

What I would say is that it's okay not to bother learning math if you don't need or want to, but for heaven's sake just don't go around saying you couldn't learn it if you tried because you lack some specific cognitive module. (When people say this, it almost always means simply that they weren't socialized into mastering it in childhood, and they haven't bothered to update this aspect of their identity since then.)

By the way, I totally agree on the Sequences: I remember when they were being written, I used to look forward to the next post the way a kid looks forward to the next episode of their favorite TV show.

When people say this, it almost always means simply that they weren't socialized into mastering it in childhood, and they haven't bothered to update this aspect of their identity since then.

I don't think you have more evidence for this hypothesis than I do for "some people just don't have a head for figures."

Lacking a "head for figures" myself, I am personally a counterexample to your hypothesis.

Although what I really dislike about it is not even that it's false but that it's a curiosity-stopping fake explanation.

I have a generalized sense of "head for figures" in mind, if you mean that you're good at math but not at calculation. Some people are bad at both, and it's pretty optimistic to say that what's holding them back (almost always!) is only their childhood experiences.

I don't consider myself "good at math" despite having credentials in the subject. As far as I can tell, in order to do math, I have to use thought processes that are quite different from those used by people who are stereotypically "good at math".

What holds people back is not their childhood experiences per se but the general lesson learned in childhood that having certain abilities and lacking others is an integral part of one's tribal uniform.

I've known many people who are good at math but what they have in common isn't substantial enough to gel into a "type" for me, so when you say "stereotypically good at math" I draw a blank.

Ahab and Billy are two 14-year-old kids in an algebra class this year, and at the end of it they're both going to get an 'A'. Ahab will achieve this effortlessly, spending less than an hour per week on his homework, while Billy will really struggle, spending more than an hour every night on homework and supplementary studying. And it's always been like this for Ahab and Billy. Maybe you'll object, but I think I'm describing something very plausible and common.

It sounds like you would reject any explanation for the difference between Ahab and Billy of the kind "Billy has less native math ability than Ahab," and favor an alternative explanation about childhood socialization. But can you spell out what this explanation is, or what some of its consequences are?

Several points to make in reply:

  • To get a sense of what I mean by "stereotypically good at math", think about the abilities involved in solving tricky puzzles or competition-style problems. Or, consider the comments section of Eliezer's Drawing Two Aces post, full of people who got the right answer (I didn't, which resulted in this post). The idea isn't exactly well-defined, but seems to involve some combination of powerful short-term memory and an ability to quickly identify the particular abstraction that the poser of a concrete problem is attempting to refer to.

  • The implication of the contrast between Ahab and Billy, on my account, isn't what you perhaps think. I don't necessarily deny that some kind of "native" difference could be responsible for Billy's greater difficulties relative to Ahab. The fact that Billy manages to get an "A", however, means that anyone with Billy's level of "native math ability" can't invoke that to explain why they didn't get an "A". Billy may have other native abilities that such an individual may lack, but they won't be specifically math-related, and instead will be general things like "the ability to overcome akrasia", etc.

  • Notwithstanding the above, "lack of native math ability" is still a fake explanation. Whatever "math ability" is, it is reducible. I want to know in detail what goes through Billy's mind as he attempts to solve an algebra problem, and how it differs from what goes on in Ahab's mind. Once we know this, we can try to determine what causes this difference: is Billy's IQ just lower than Ahab's (which would be a general problem, not a math-specific one), does he lack certain pieces of information that Ahab has (easily fixable), or is he executing particular cognitive habits that prevent him from processing the same information as efficiently as Ahab (fixable via training)?

Teasing this out a little more, bullet points of my own:

  1. If B learns math at a slower pace than A, then it can literally be the case that B will never understand math as well as A. At suitably slow (but common) learning paces, it can be impractical and unrewarding for B to study math. And I think there might even be large numbers of mentally normal human beings walking around for whom this pace is so slow that it's misleading to call it a "learning pace" at all, e.g. too slow for the progress they make one day to stick the next.

  2. I'm sure that "math ability," like anything else, is reducible, but in these kinds of brain-and-behavior cases it might "reduce" to thousands of different factors that don't have much to do with each other. In that case it wouldn't be very easy to give advice on how to be better at math, beyond "arrange each of those thousands of factors in a way favorable to math ability."

  3. Even if "math ability" has a more satisfying explanation than the kind in 2., so that it's possible to give good advice on how to improve it, I think that this is not a solved problem. Specifically I still think that your proposed advice ("you simply haven't bothered to update some aspect of your identity since childhood") is no good.

  4. In the meantime "lack of math ability" seems to me to be a perfectly good label for a real phenomenon, though I guess I agree with you that it is not an explanation for that phenomenon.

1. I disagree that a slower learning pace is less rewarding. On the contrary, learning is most rewarding when there is time to do it properly, and the frustration many people experience in school settings results from the pressure to (appear-to-) learn things more quickly than their natural pace.

(I owe to Michael Vassar the observation that there is something inherently contradictory and unrealistic about expecting people to learn calculus in a semester when they required five years to learn arithmetic.)

2. It might seem like it could be that complicated, but it turns out not to be. In practice (as revealed by teaching experience), "lacking math ability" usually reduces to something like "I flinch and run away when I realize that I will have to carry out more than two or three steps (especially if there is recursion involved), instead of just gritting my teeth and carrying them out."

3. Most people don't even try updating their identities; I, on the other hand, have updated my skill-related identities on a number of occasions, and it worked. (Math happens to be an example.)

4. It would be best to have a label that conveys more information about the cause(s) of the phenomenon.

I found that revisiting formal logic/set theory forced more careful intuitions about decision-making, and learning category theory made it less scary to work with more complicated ideas. Learning topology helped with studying set theory and gave some insight into the process of coming up with new mathematical concepts. You've probably seen my reading list (all the stuff on it can be downloaded from Kad).

I can't make a proper explicit argument for studying math being on a direct track to contributing to FAI research (particularly since UDT/ADT now look potentially less relevant than I thought before), but it looks like the best available option: it gives general enough reasoning skills that could conceivably help, and I'm not aware of other kinds of knowledge that look potentially useful to a similar extent.

(On the other hand, I probably don't pay enough attention to the skills I already had two years ago, which include good background in programming and basic background in machine learning.)

particularly since UDT/ADT now look potentially less relevant than I thought before

Expand?

I see no easy or convincing way of doing so right now. I'll write up my ideas when/if they sufficiently mature, or, as is often the case, I'll move on to a different line of investigation. Basically, morality is seen through a collection of many diverse heuristics, and while a few well-understood heuristics can form the backbone of a tool for boosting the power of an agent, they won't have foundational significance; so the selection of the heuristics that need to be explicitly understood should be based on the leverage they give, even where they are allowed to have some blind spots.

A fact I didn't appreciate before encountering this whole AI-related craze followed by overcomingbias followed by lesswrong is that it's possible to master an arbitrary field of expertise by systematically teaching yourself its skills, even if it's completely dissimilar to all you've ever known.

(If you are sufficiently intelligent and have a certain set of personality traits. This is not something everyone can realistically be considered capable of.)

When I have to learn math, I can learn it, but math is unpleasant to me, and no matter how much I learn, it keeps exactly the same level of unpleasantness. I realize the importance of math, and I try to know enough to understand all the fields that are important or interesting to me, but there are a lot of equations in the Sequences that I skimmed over because I trusted that Eliezer had derived them correctly and that they meant what he said they meant. While I totally admire people who love math and will work through every step of those equations, I'd recommend that anyone else who's holding off on reading the Sequences just because they're math-y not worry about skimming.

I'm going to keep this in mind as something to link to every time XiXiDu repeats a new permutation of his same old 'question'/objections. Well, that, and any time reading the Sequences is worth recommending without being trite about it.

First of all, read the sequences.

It's partly the choice that is killing me: do X or Y, or continue thinking about either doing X or Y, or maybe search for some superior unknown-unknown activity Z?

A sufficiently advanced optimization process can work towards any goal. One reason AI is so great is that it is strictly superior in terms of finding things we missed. It's possible that we could come up with something good sooner if we did not work on AI, but we have no evidence that that is the case.

Should I stop doing sports, should I even stop thinking too much because it increases the blood circulation in my brain (I noticed that I hear my blood flow when thinking too hard)?

Since you care greatly about existential risk, wouldn't the value of thinking outweigh the risk if there were any chance that you could help?

And what about what I really want to do, intuitively, should I just ignore that?

Intuitively as in "some stupid part of my brain keeps trying to trick the more rational part into doing X", or as in "I'm really good at X but too worried about how much it will really help to do it effectively"?

Don't be too worried that you won't be able to contribute. According to one SIAI visiting fellow, "an ability to do 'something that anyone could do' is an accomplishment in itself - that is, the ability to do something even though it isn't great and glorious and exciting. She mentioned, and my own experience agrees, that getting volunteers to do exciting things is easy, but getting them to do the less glamorous work (of which there is much more to do) is much harder." If others really are that much more skilled than you, you can still do a lot of good by assisting their efforts.

Well, I'll tell you one thing. You do daily link-blogs. Those link-blogs point me towards a lot of fascinating material I wouldn't find elsewhere, and I in turn have linked several things I've found from your blog on mine - some of them have also subtly affected the way I'm writing the pop-science book I'm slowly working on about the scientific method, Bayes' theorem, etc. If your link-blogs have been useful to me in that way, they will be useful to others - so even just in the process of educating yourself, you've been helping others almost as a byproduct. You cannot predict what effect your actions as an individual will have on low-probability catastrophic risk events, but if you try to learn as much as you can and to disseminate that knowledge as widely as you can - however you think best - you will undoubtedly do at least some good.

As for your blood pressure, try supplementing with magnesium. I've not read the studies, so take this as anecdata, but my uncle (doctorate in medical biophysics from a reputable university) tells me that most people are mildly deficient in magnesium and that this is a major cause of high blood pressure. My own blood pressure is lower since I started supplementing with magnesium (for an unrelated issue) though how much of that is placebo I of course can't say.

Well, I'll tell you one thing. You do daily link-blogs. Those link-blogs point me towards a lot of fascinating material I wouldn't find elsewhere, and I in turn have linked several things I've found from your blog on mine...

Thanks! Would you mind sending me a link to your blog? Also, how did you find out about my blog? I actually hope that I have been able to introduce some people to LW via my links.

No problem. My blog is http://andrewhickey.info , but probably of tangential-at-best interest - I mostly talk about music, comics and stuff, but I do occasional linkblogs. I presume I found your blog through you linking it here at some point - I don't remember precisely, but I do know I've been following it for a couple of months at least.

A lot of the mathematics is incomputable or impractical in some situations. There are a lot of cases where you simply don't know enough to actually apply Bayes' theorem, but knowing about it, and understanding the process behind it, makes your own thinking that much better. Solomonoff induction may be incomputable, but if you get the purpose behind it, or use an approximation, that's good enough. Many of the theoretical ideas are difficult to use in real-life situations. Approximate. Do the absolute best you can to solve the problem.
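To make "approximate" concrete, here's a toy Bayes update in Python. All the numbers are made up for illustration (they are not real medical statistics); the point is only that even a back-of-the-envelope calculation tells you more than raw dread does:

```python
# Toy Bayes update with rough, made-up numbers -- illustrative
# assumptions only, not real medical statistics.
prior = 0.06                 # rough base rate for some condition
p_sign_given_cond = 0.30     # guessed P(observed sign | condition)
p_sign_given_not = 0.25      # guessed P(observed sign | no condition)

# Bayes' theorem: P(condition | sign)
evidence = p_sign_given_cond * prior + p_sign_given_not * (1 - prior)
posterior = p_sign_given_cond * prior / evidence
print(f"posterior: {posterior:.3f}")  # ~0.071 -- barely above the prior
```

Even with guessed likelihoods, the shape of the answer is informative: a weakly diagnostic sign barely moves a 6% prior, which is often enough to decide that a worry doesn't deserve days of attention.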

Finally,

And what about what I really want to do, intuitively, should I just ignore that?

If what you intuitively want to do is going to lead you to not fulfill your goals, yes. If you've determined that acting on instinct is better in a given situation, by all means do it. Win.

On the irrelevance of some posts: the post on dealing with diseases has actually come in handy for me, both in reducing self-blame for ADHD and anxiety and in talking to my dad about them - he definitely did not embrace the idea at first of treating them as disorders rather than excuses. As is probably the case for many topical posts: your mileage may vary.

Finally, existential risk. Is thinking seriously about existential risk a memetic hazard? This is an interesting and troubling idea. But I don't think that it's LessWrong consensus that you should choose a life you don't enjoy.

If I could take a guess, I'd say that LessWrong has lowered your estimates that you can personally have a big effect on making your future brighter while fully enjoying the present. Is this accurate?

And what about what I really want to do, intuitively, should I just ignore that?

Taoism counsels going with that.

You've got to also beware of chocolate gateau, though.

Thanks to LW I learnt about Solomonoff induction. Great...fascinating! But wait, I also learnt that there is a slight problem: "the only problem with Solomonoff induction is that it is incomputable". Phew, thanks for wasting my time!

So: use a computable approximation. Not a huge deal, I figure.

Beware of the representativeness heuristic. Basing your computable approximation on AIXI does not necessarily maximize its accuracy, any more than naive Bayes is inherently superior to competing algorithms merely because it has "Bayes" in the name.

Using a computable approximation of Solomonoff induction (not AIXI, that's different!) is not some kind of option that can be avoided - modulo some comments about the true razor.

You can warn about its dangers - but we will plunge in anyway.

Ah, I have no idea why I said AIXI. Must have gotten my wires crossed. :|

This seems to leave open the question of what approximation to use, which is essentially the same question posed by the original post. In the real world, for practical purposes, what do you actually use?

Making a computable approximation of Solomonoff induction that can be used repeatedly is essentially the same problem as building a stream compressor.

There is quite a bit of existing work on that problem - and it is one of my current projects.

I don't understand the question. Can you explain what was wrong with the answer I just gave?

The question is: please recommend a model of rationality that a human can actually use in the real world. It's not clear to me in practice how I would use, say, gzip to help make predictions.

Right, well, the link between forecasting and compression was gone over in this previously-supplied link. See also, the other introductory material on that site:

http://matchingpennies.com/machine_forecasting/

http://matchingpennies.com/introduction/

http://matchingpennies.com/sequence_prediction/

If you want to hear something similar from someone else, perhaps try:

http://www.mattmahoney.net/dc/rationale.html

I understand the theoretical connection. I want a real-world example of how this theoretical result could be applied.

An example of prediction using compression?

E.g. see Dasher. It uses prediction by partial matching.

I also found this thesis, 'Statistical Inference through Data Compression', using gzip of all things, quite interesting. (Some half-related background.)
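If you want to see the mechanism in miniature, here is a toy sketch in Python using zlib as the stand-in compressor (a real predictor would use an adaptive model like PPM, as Dasher does; this is only to show the shape of the idea):

```python
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the DEFLATE encoding -- a crude stand-in for
    the 'shortest description length' of the data."""
    return len(zlib.compress(data, 9))

def predict_next_bit(history: bytes) -> str:
    """Guess the next symbol: whichever continuation compresses
    better is the one the compressor's implicit model considers
    more probable (ties go to '0')."""
    sizes = {c: compressed_size(history + c) for c in (b"0", b"1")}
    return min(sizes, key=sizes.get).decode()

history = b"01" * 500             # a strongly patterned sequence
print(predict_next_bit(history))  # expected: '0', continuing the pattern
```

A serious version would replace zlib with an adaptive model that emits actual probabilities rather than this argmin over continuations, but the underlying logic -- shorter encoding, higher probability -- is the same.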

That is indeed a correct answer to a reasonable interpretation of the question I asked. I thereby realize that I should have asked differently.

Where examples of rationality usage are given on LW, they tend to be of the straightforward decision-theoretic kind, such as solving the trolley problem; that is, rationality as studied and taught here is mostly about helping humans better make the kinds of decisions that humans tend to make.

Suppose I want to take an umbrella to work with me if and only if it will rain this afternoon. How might I go about deciding whether to take my umbrella? And, in particular, is running my own statistical analysis on the weather patterns in my local area over the past hundred years really a better choice than just turning on the weather channel?

And, in particular, is running my own statistical analysis on the weather patterns in my local area over the past hundred years really a better choice than just turning on the weather channel?

Perhaps I am missing something, but the answer is obviously no. This follows from the usual humility and outside view arguments, and from more detailed inside view considerations like the following: the weather station has access to far more data than you over that time period, and has detailed recent data you do not, and can hire a weather statistics expert (or draw on such expertise) who will crush your predictions because they specialize in such problems.

The answer was intended to be obviously no. I wished to refute the idea that esoteric mathematical models like prediction-as-data-compression translate directly into useful advice for the real world outside of a few highly technical cases.
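For contrast, the simple decision-theoretic calculation the umbrella example actually calls for is trivial to write out. A minimal sketch with made-up costs (the numbers are placeholders; in practice the rain probability would come from the weather channel, as above):

```python
# Expected-utility sketch for the umbrella decision.
# All costs are illustrative placeholders.
p_rain = 0.30        # forecast probability, e.g. from the weather channel
cost_wet = 10.0      # disutility of getting soaked
cost_carry = 1.0     # disutility of lugging an umbrella all day

eu_take = -cost_carry            # you pay the carrying cost, rain or shine
eu_leave = -p_rain * cost_wet    # you get soaked only if it rains

print("take umbrella" if eu_take > eu_leave else "leave it")
# With these numbers: -1.0 > -3.0, so take the umbrella.
```

The hard part is never this arithmetic; it's where the probability and the costs come from.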

On uncertainty: yes, LessWrong has made me more explicitly uncertain. But I feel like I've learned quite a bit about unmasking "unknown" unknowns as known unknowns, about quantifying my uncertainty. I began reading regularly shortly after my confidence took a dive, and the reason it took a dive was the effect of misplaced confidence, of mistaken beliefs. It's starting to climb again, and my restored confidence comes largely from recognizing the thought processes that brought me down to begin with.

Think of some common failure modes. Think of some that would undermine a person's confidence. Think of what wrong thought processes lead to them. If you had never read LW, could you see yourself falling into any such traps? Are the odds that you'd make those errors of thought really unchanged after reading a dozen LW posts?

ETA: I had the same deal with the insects in the grass at that age! I tried to walk around grassy areas, and eventually gave up. But I connected the knowledge of how soft dirt is, and how tough exoskeletons are, and felt much better. I felt pretty terrible though when I had to start mowing the lawn.
