Reflection from this particular experimental position:

> Why was it possible for me to assume an offensive tone? What features contribute to an offensive tone, and how can I avoid it?

I think HPMoR gave me the wrong idea about bringing awareness to something, and probably about a lot more social behavior.

- Conceptually speaking, my map correctly indicated to me that something more was left implicit for this example to be non-hyperbolic.
- I had more than enough information to rule out the possibility of it being hyperbolic, but I didn't even try looking.

1. You have been on LessWrong for 4 years, have quite a bit of karma, and have made over 20 posts with many comments. I didn't even have to click your name for most of that info.

> These metrics aren't pointless; they are very useful. I will figure out how to determine what they mean for experiences on LessWrong.

2. You made a post called "Being Productive With Chronic Health Conditions", where the mystery could have been dispelled. Though I got to this post from a search, the mentioned post is listed right next to this one in your profile.

> I should strive to always ask the right questions, to see people in a broader, more accurate light. People are not stupid, and are hardly ever without reason. So why was seeking out existing information to satisfy my curiosity not my first-nature reaction?

(P.S. I want to try again on another one of your posts. Based on my skim, I think you have quite a bit of value to offer.)
Thank you for the clarification. I am content. I congratulate you for running your errands on a bike with your condition; that's actually quite impressive. I sincerely apologize for that unnecessarily harsh critique of a minor detail. I concede to your recommendation. I think I have some explaining to do.

I am 19, and relatively new to rationality. I have been exposed to it for about two years, but have attained only hints of scattered progress. I am ashamed of this, but I also realize how difficult it is to change the underlying dispositional features of oneself; how difficult it is to get past a local optimum that the self uses for most of its stability. I quickly acknowledged how little I know, and have spent two years descending this macro Mount Stupid. In the first six months, it got so bad that I dissociated from the normal sources of social stability. Family, friends, school, religion--all of it. I had a few things that kept me alive, but life was mostly cold, confusing, and lonely.

After the strict perfectionism settled down and the emotional stability started to come back, pragmatism (localized perfectionism) began to face me as the true optimum. To this day, I'm trying to figure out what to do about my current limited state. I now just want to be less wrong and less dysfunctional, because that's the only improvement I could ever attain.

But the harsh season left a stain on my cognition--absolute perfectionism is a powerful tool, but crippling when facing concrete challenges (where concrete progress is born). My abstraction engine became strong, but the polarized forms of abstraction are... polarized still. I need to find a more systematic way to weigh them properly. I presume concrete challenges, with feedback from others, are the next step in the right direction.

On LessWrong, I almost never comment on a post. I almost never join conversations. I've been left to my own analysis, and a static and vague window into others' lines of thought.
I never thought I should even try those things, because it would just be wrong or dysfunctional, or worst of all, I would make the future worse (by doing things like wasting more intelligent people's time, or stimulating negative emotions). People on LessWrong aren't obviously wrong most of the time, so it's difficult for me to meaningfully contribute. It's in the subtlety where improvements can be made--the subtlety I have not yet learned. It's hard wanting to belong to a group from outside the window.

What I've come to is that it would be better if I just said or did something I was convinced of, even if it was disproportionate and radical, misled, wasted motion, or, in this case, rude--and that I could figure out what went wrong after I made those mistakes and do my best to repair the damage.

(P.S. I accept lower karma in exchange for a chance to mess up and learn.)
Think pragmatically here. How do you anticipate this list is going to change you?

While much of LessWrong is dedicated to hypothetical truths, CFAR is more about iterative, adaptive truth and improvement. Don't consider anything and everything; past a threshold, hypotheticals prevent you from acting and making progress (I wish they had expounded upon this in the prior post). Just consider the limitations you anticipate you'll actually be able to resolve, and actually want to resolve, at some point.

Hopefully this gives you some direction.
"Do you see how they flow into each other? Learning the order of the items helps me remember which virtues are connected to other ones, and how."

Sure, it may help you remember how some of the virtues are connected to others in an indirect way, but even if it were direct, it would be quite partial. The flow can only hint at how lightness is related to evenness, or how perfectionism is related to evenness.

Lightness doesn't just relate to evenness. It also relates to all the other virtues in a ton of different ways. In fact, they are all so heavily interrelated that "you will see how all techniques are one technique".

If your objective is to have a good understanding of how all the virtues of rationality relate, I would chunk them in a way that is most sensible to you, then ask how they may relate to each other, both in theory and in application.
I have done that here in the comments.

@Mikhail Samin, you are welcome to apply my transcript to this post, if you think that would be helpful to others.
Here is the Q+A section. [In the video, the timestamp is 5:42 onward.]

[The transcript is taken from YouTube's "Show transcript" feature, then cleaned by me for readability. If you think the transcription is functionally erroneous somewhere, let me know.]

Eliezer: Thank you for coming to my brief TED talk.
Host: So, Eliezer, thank you for coming and giving that. It seems like what you're raising the alarm about is that for an AI to basically destroy humanity, it has to break out, to escape its controls, get onto the internet, and start commanding real-world resources. You say you can't predict how that will happen, but just paint one or two possibilities.
Eliezer: Okay. First, why is this hard? Because you can't predict exactly where a smarter chess program will move. Imagine sending the design for an air conditioner back to the 11th century. Even if there is enough detail for them to build it, they will be surprised when cold air comes out. The air conditioner will use the temperature-pressure relation, and they don't know about that law of nature. If you want me to sketch what a superintelligence might do, I can go deeper and deeper into places where we think there are predictable technological advancements that we haven't figured out yet. But as I go deeper and deeper, it gets harder and harder to follow.
It could be super persuasive. We do not understand exactly how the brain works, so it's a great place to exploit--laws of nature that we do not know about, rules of the environment, new technologies beyond that. Can you build a synthetic virus that gives humans a cold, then a bit of neurological change such that they are easier to persuade? Can you build your own synthetic biology? Synthetic cyborgs? Can you blow straight past that to covalently bonded equivalents of biology, where instead of proteins that fold up and are held together by static cling, you've got things that go down much sharper potential energy gradients and are bonded together? People have done advanced design work on this sort of thing, for artificial red blood cells that could hold a hundred times as much oxygen if they were using tiny sapphire vessels to store the oxygen. There's lots and lots of room above biology, but it gets harder and harder to understand.
Host: So what I hear you saying is you know there are these terrifying possibilities, but your real guess is that AIs will work out something more devious than that. How is that really a likely pathway in your mind?
Eliezer: Which part? That they're smarter than I am? Absolutely. [Eliezer makes an exaggerated expression of stupidity, gazing upward; the audience laughs.]
Host: No, not that they're smarter, but that they would... Why would they want to go in that direction? The AIs don't have our feelings of envy, jealousy, anger, and so forth. So why might they go in that direction?
Eliezer: Because it is convergently implied by almost any of the strange and inscrutable things that they might end up wanting, as a result of gradient descent on these thumbs-up and thumbs-down internal controls. Say all you want is to make tiny molecular squiggles, or that's one component of what you want, but it's a component that never saturates--you just want more and more of it, the same way that we want and would want more and more galaxies filled with life and people living happily ever after. By wanting anything that just keeps going, you are wanting to use more and more material. That could kill everyone on Earth as a side effect. It could kill us because it doesn't want us making other superintelligences to compete with it. It could kill us because it's using up all the chemical energy on Earth.
Host: So, some people in the AI world worry that your views are strong enough that you're willing to advocate extreme responses to it. Therefore, they worry that you could be a very destructive figure. Do you draw the line yourself in terms of the measures that we should take to stop this happening? Or is anything justifiable to stop the scenarios you're talking about happening?
Eliezer: I don't think that "anything" works. I think this takes state actors and international agreements. International agreements, by their nature, tend to ultimately be backed by force on the signatory countries, and on the non-signatory countries, which is a more extreme measure. I have not proposed that individuals run out and use violence, and I think the killer argument there is that it would not work.
Host: Well, you are definitely not the only person to propose that what we need is some kind of international reckoning here on how to manage this going forward. Thank you so much for coming here to TED.
I think a logical response would manifest more or less as follows: if all techniques surround one center, there will be at least one relationship between each of them. The meaning of these virtues is non-linear. Anki is linear. Notes are linear. Mind mapping is better, but still limiting. For one to truly learn the virtues of rationality, he must exist through them. His life must become a set of their instantiations.

However, I think your purpose in putting it into Anki was to have a verbatim collection of words that represented something meaningful in your mind. Why? To have a clearer overarching schema on which to base your declarative mastery of rationality. As a result, two sub-goals are met: to better communicate rationality to others, and to have a source of stability to turn to when things become uncertain, undesirable, or menial.

You did not have a better idea of how to fulfill this purpose. Here is how I would personally do it: group the virtues as intuitively as possible on a mind map, then figure out how the groups are generally related (representing it with arrows). It should not just be left latent in the mind, but constantly rejuvenated through newly instantiated experiences. For the sentences, make smaller keywords (and maybe a doodle) and connect them to the respective element on the mind map. As you learn more, add, relate, and refine what you allow to show.

If you are confused about how to make the mind map, here are some tips: https://www.youtube.com/watch?v=5zT_2aBP6vM&t=217s&ab_channel=JustinSung
[edit: rewriting the sentences to reduce cognitive load for the reader.]
An exploration of the unknown through known first-principles seems to be a good balance between order and chaos.
Eliezer brilliantly wrote this in Twelve Virtues of Rationality:

"Do not be blinded by words. When words are subtracted, anticipation remains."

I think “rational” and “optimal” share similar anticipatory elements, but “optimal” is simpler and more abstract, whereas “rational” almost necessarily applies “optimal” to some bounded agent.

When I think of a “rational” decision versus an “optimal” decision, or a “rational” person versus an “optimal” person, the overlap I see is the degree of effectiveness of something.

What I anticipate with “rational” is the effectiveness of something as a result of the procedural decision-making of an agent with scarce knowledge and capability. Context reveals who this agent is; it’s often humans.

What I anticipate with “optimal” is the highest effectiveness of something, either inclusive or exclusive of an agent and scarcity. If the context reveals no agent, scarcity can be physical but not distributive; if the context reveals an agent, it will imply which agent and what level of scarcity.

I would imagine that using proper descriptors or clear context would alleviate a lot of the ambiguity.