All Posts


January 2020

Shortform [Beta]
21 bgold 23d

* Why do I not always have conscious access to my inner parts? Why, when speaking with authority figures, might I have a sudden sense of blankness?
* Recently I've been thinking about this reaction in the frame of 'legibility', à la Seeing Like a State. States would impose organizational structures on societies that were easy to see and control - they made the society more legible to the actors who ran the state - but these organizational structures were bad for the people in the society.
  * For example, census data, standardized weights and measures, and uniform languages make it easier to tax and control the population. [Wikipedia]
* I'm toying with applying this concept across the stack.
  * If you have an existing model of people being made up of parts [Kaj's articles], I think there's a similar thing happening. I notice I'm angry but can't quite tell why or get a conceptual handle on it - if it were fully legible and accessible to the conscious mind, then it would be much easier to apply pressure and control that 'part', regardless of whether the control I am exerting is good. So instead, it remains illegible.
  * A level up, in a small group conversation, I notice I feel missed, like I'm not being heard in fullness, but someone else directly asks me about my model and I draw a blank, like I can't access this model or share it. If my model were legible, someone else would get more access to it and be able to control it/point out its flaws. That might be good or it might be bad, but if it's illegible it can't be "coerced"/"mistaken" by others.
  * One more level up: I initially went down this track of thinking for a few reasons, one of which was wondering why prediction forecasting systems are so hard to adopt within organizations. Operationalization of terms is difficult and it's hard to get a precise enough question that everyone can agree on, but it's very 'unfun' to have uncertain terms (people are much more lik
20 tragedyofthecomments 14d

I often see people making statements that sound to me like . . . "The entity in charge of Bay Area rationality should enforce these norms." or "The entity in charge of Bay Area rationality is bad for allowing X to happen." There is no entity in charge of Bay Area rationality. There's a bunch of small groups of people that interact with each other sometimes. They even have quite a bit of shared culture. But no one is in charge of this thing, there is no entity making the set of norms for rationalists, and there is no one you can outsource the building of your desired group to.
17 bgold 19d

* Yes And is an improv technique where you keep the energy in a scene alive by going with the other person's suggestion and adding more to it. "A: Wow, is that your pet monkey? B: Yes, and he's also my doctor!"
* Yes And is generative (creates a lot of output), as opposed to Hmm No, which is critical (distills output).
* A lot of the Sequences is Hmm No.
* It's not that Hmm No is wrong, it's that it cuts off future paths down the Yes And thought-stream.
  * If there's a critical error at the beginning of a thought that will undermine everything else, then it makes sense to Hmm No (we don't want to spend a bunch of energy on something that will be fundamentally unsound). But if the later parts of the thought stream are not closely dependent on the beginning, or if it's only part of the stream that gets cut off, then you've lost a lot of potential value that could've been generated by the Yes And.
* In conversation, Yes And is much more fun, which might be why the Sequences are important as a corrective (yeah, look, it's not fun to remember about biases, but they exist and you should model/include them).
* Write drunk, edit sober. Yes And drunk, Hmm No in the morning.
17 Ben Pace 23d

There's a game for the Oculus Quest (that you can also buy on Steam) called "Keep Talking and Nobody Explodes". It's a two-player game. When playing with the VR headset, one of you wears the headset and has to defuse bombs in a limited amount of time (either 3, 4 or 5 mins), while the other person sits outside the headset with the bomb-defusal manual and tells you what to do. Whereas with other collaboration games you're all looking at the screen together, with this game the substrate of communication is solely conversation: the other person provides all of your inputs about how their half is going (i.e. nothing is shown on a screen).

The types of puzzles are fairly straightforward computational problems but with lots of fiddly instructions, and they require the outer person to figure out what information they need from the inner person. It often involves things like counting the number of wires of a certain colour, or remembering the previous digits that were being shown, or quickly describing symbols that are not any known letter or shape. So the game trains you and a partner in efficiently building a shared language for dealing with new problems.

More than that, as the game gets harder, some of the puzzles often require substantial independent computation from the player on the outside. At this point, it can make sense to play with more than two people, and to start practising methods for assigning computational work between the outer people (e.g. one of them works on defusing the first part of the bomb, and while they're computing in their head for ~40 seconds, the other works on defusing the second part of the bomb in dialogue with the person on the inside). This further creates a system which trains the ability to efficiently coordinate on informational work under pressure.

Overall I think it's a pretty great game for learning and practising a number of high-pressure communication skills with people you're close to.
15 TurnTrout 12d

While reading Focusing today, I thought about the book and wondered how many exercises it would have. I felt a twinge of aversion. In keeping with my goal of increasing internal transparency, I said to myself: "I explicitly and consciously notice that I felt averse to some aspect of this book."

I then Focused on the aversion. Turns out, I felt a little bit disgusted, because a part of me reasoned thusly:

(Transcription of a deeper Focusing on this reasoning)

I'm afraid of being slow. Part of it is surely the psychological remnants of the RSI I developed in the summer of 2018. That is, slowing down is now emotionally associated with disability and frustration. There was a period of meteoric progress as I started reading textbooks and doing great research, and then there was pain. That pain struck even when I was just trying to take care of myself, sleep, open doors. That pain then left me on the floor of my apartment, staring at the ceiling, desperately willing my hands to just get better. They didn't (for a long while), so I just lay there and cried. That was slow, and it hurt. No reviews, no posts, no typing, no coding. No writing, slow reading. That was slow, and it hurt.

Part of it used to be a sense of "I need to catch up and learn these other subjects which [Eliezer / Paul / Luke / Nate] already know". Through internal double crux, I've nearly eradicated this line of thinking, which is neither helpful nor relevant nor conducive to excitedly learning the beautiful settled science of humanity.

Although my most recent post [https://www.lesswrong.com/posts/eX2aobNp5uCdcpsiK/on-being-robust] touched on impostor syndrome, that isn't really a thing for me. I feel reasonably secure in who I am, now (although part of me worries that others wrongly view me as an impostor?). However, I mostly just want to feel fast, efficient, and swift again. I sometimes feel like I'm in a race with Alex_2018, and I feel like I'm losing.

December 2019

Shortform [Beta]
53 Buck 2mo

[I'm not sure how good this is; it was interesting to me to think about, idk if it's useful, I wrote it quickly.]

Over the last year, I internalized Bayes' Theorem much more than I previously had; this led me to notice that when I applied it in my life it tended to have counterintuitive results; after thinking about it for a while, I concluded that my intuitions were right and I was using Bayes wrong. (I'm going to call Bayes' Theorem "Bayes" from now on.)

Before I can tell you about that, I need to make sure you're thinking about Bayes in terms of ratios rather than fractions. Bayes is enormously easier to understand and use when described in terms of ratios. For example: Suppose that 1% of women have a particular type of breast cancer, and a mammogram is 20 times more likely to return a positive result if you do have breast cancer, and you want to know the probability that you have breast cancer given that positive result. The prior probability ratio is 1:99, and the likelihood ratio is 20:1, so the posterior probability ratio is 1×20 : 99×1 = 20:99, so you have a probability of 20/(20+99) of having breast cancer. I think that this is absurdly easier than using the fraction formulation. I think that teaching the fraction formulation is the single biggest didactic mistake that I am aware of in any field.

Anyway, a year or so ago I got into the habit of calculating things using Bayes whenever they came up in my life, and I quickly noticed that Bayes seemed surprisingly aggressive to me. For example, the first time I went to the Hot Tubs of Berkeley, a hot tub rental place near my house, I saw a friend of mine there. I wondered how regularly he went there. Consider the hypotheses of "he goes here three times a week" and "he goes here once a month". The likelihood ratio is about 12x in favor of the former hypothesis. So if I previously was ten to one against the three-times-a-week hyp
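(A minimal sketch of the odds-form update Buck describes, applied to his mammogram numbers; the `update_odds` helper and its name are illustrative assumptions, not from the post.)

```python
from fractions import Fraction

def update_odds(prior_for, prior_against, lr_for, lr_against):
    """Bayes in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_for * lr_for, prior_against * lr_against

# Mammogram example from the post: prior odds 1:99, and a positive
# result is 20x more likely given cancer, i.e. likelihood ratio 20:1.
post_for, post_against = update_odds(1, 99, 20, 1)
p = Fraction(post_for, post_for + post_against)

print(f"posterior odds {post_for}:{post_against}")  # 20:99
print(f"P(cancer | positive) = {float(p):.3f}")     # 0.168
```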
42 BrienneYudkowsky 2mo

Some advice to my past self about autism:

Learn about what life is like for people with a level 2 or 3 autism diagnosis. Use that reference class to predict the nature of your problems and the strategies that are likely to help. Only after making those predictions, adjust for your own capabilities and circumstances. Try this regardless of how you feel about calling yourself autistic or seeking a diagnosis. Just see what happens.

Many stereotypically autistic behaviors are less like symptoms of an illness, and more like excellent strategies for getting shit done and having a good life. It's just hard to get them all working together. Try leaning into those behaviors and see what's good about them. For example, you know how when you accidentally do something three times in a row, you then feel compelled to keep doing it the same way at the same time forever? Studying this phenomenon in yourself will lead you to build solid and carefully designed routines that allow you to be a lot more reliably vibrant.

You know how some autistic people have one-on-one aides, caretakers, and therapists who assist in their development and day-to-day wellbeing? Read a bit about what those aides do. You'll notice right away that the state of the art in this area is crap, but try to imagine what professional autism aides might do if they really had things figured out and were spectacular at their jobs. Then devote as many resources as you can spare for a whole year to figuring out how to perform those services for yourself.

It seems to me that most of what's written about autism by neurotypicals severely overemphasizes social stuff. You'll find almost none of it compelling. Try to understand what's really going on with autism, and your understanding will immediately start paying off in non-social quality-of-life improvements. Keep at it, and it'll eventually start paying off in deep and practical social insights as well (which I know you don't care about right now, but it's true). I
40 Kaj_Sotala 1mo

Occasionally I find myself nostalgic for the old, optimistic transhumanism of which e.g. this 2006 article [https://web.archive.org/web/20081008121438/http://www.acceleratingfuture.com/michael/blog/2006/09/overpopulation-no-problem/] is a good example. After some people argued that radical life extension would increase our population too much, the author countered that oh, that's not an issue - here are some calculations showing that our planet could support a population of 100 billion with ease!

In those days, the ethos seemed to be something like: first, let's apply a straightforward engineering approach to eliminating aging [https://en.wikipedia.org/wiki/Strategies_for_Engineered_Negligible_Senescence], so that nobody who's alive needs to worry about dying from old age. Then let's get nanotechnology and molecular manufacturing to eliminate scarcity and environmental problems. Then let's re-engineer the biosphere and human psychology for maximum well-being, such as by using genetic engineering to eliminate suffering [https://www.abolitionist.com/] and/or making it a violation of the laws of physics to try to harm or coerce someone [http://www.mitchellhowe.com/sysopfaq.htm]. So something like "let's fix the most urgent pressing problems and stabilize the world, then let's turn it into a utopia". X-risk was on the radar, but the prevailing mindset seemed to be something like "oh, x-risk? yeah, we need to get to that too".

That whole mindset used to feel really nice. Alas, these days it feels like it was mostly wishful thinking. I haven't really seen that spirit in a long time; the thing that passes for optimism these days is "Moloch hasn't entirely won (yet [https://www.lesswrong.com/posts/ham9i5wf4JCexXnkN/moloch-hasn-t-won])". If "overpopulation? no problem!" felt like a prototypical article to pick from the Old Optimistic Era, then Today's Era feels more described by Inadequate Equilibria [https://equilibriabook.com/] and a post saying "if you can afford it, c
40 BrienneYudkowsky 2mo

Suppose you wanted to improve your social relationships on the community level. (I think of this as "my ability to take refuge in the sangha".) What questions might you answer now, and then again in one year, to track your progress? Here's what's come to mind for me so far. I'm probably missing a lot and would really like your help mapping things out. I think it's a part of the territory I can only just barely perceive at my current level of development.

* If something tragic happened to you, such as a car crash that partially paralyzed you or the death of a loved one, how many people can you name whom you'd find it easy and natural to ask for help with figuring out your life afterward?
* For how many people is it the case that if they were hospitalized for at least a week you would visit them in the hospital?
* Over the past month, how lonely have you felt?
* In the past two weeks, how often have you collaborated with someone outside of work?
* To what degree do you feel like your friends have your back?
* Describe the role of community in your life.
* How do you feel as you try to describe the role of community in your life?
* When's the last time you got angry with someone and confronted them one on one as a result?
* When's the last time you apologized to someone?
* How strong is your sense that you're building something of personal value with the people around you?
* When's the last time you spent more than ten minutes on something that felt motivated by gratitude?
* When a big change happens in your life, such as losing your job or having a baby, how motivated do you feel to share the experience with others?
* When you feel motivated to share an experience with others, how satisfied do you tend to be with your attempts to do that?
* Do you know the love languages of your five closest friends? To what extent does that influence how you behave toward them?
* Does it seem to you that your friends know your love
34 BrienneYudkowsky 1mo

I wrote up my shame processing method. I think it comes from some combination of Max (inspired by NVC maybe?), Anna (mostly indirectly), and a lot of trial and error. I've been using it for a couple of years (in various forms), but I don't have much PCK on it yet. If you'd like to try it out, I'd love for you to report back on how it went! Please also ask me questions.

What's up with shame? According to me, shame is for keeping your actions in line with what you care about. It happens when you feel motivated to do something that you believe might damage what is valuable (whether or not you actually do the thing). Shame indicates a particular kind of internal conflict. There's something in favor of the motivation, and something else against it. Both parts are fighting for things that matter to you.

What is this shame processing method supposed to do? This shame processing method is supposed to aid in the goal of shame itself: staying in contact with what you care about as you act. It's also supposed to develop a clearer awareness of what is at stake in the conflict so you can use your full intelligence to solve the problem.

What is the method? The method is basically a series of statements with blanks to fill in. The statements guide you a little at a time toward a more direct way of seeing your conflict. Here's a template; it's meant to be filled out in order.

I notice that I feel ashamed.
I think I first started feeling it while ___.
I care about ___ (X).
I'm not allowed to want ___ (Y).
I worry that if I want Y, ___.
What's good about Y is ___ (Z).
I care about Z, and I also care about X.

Example (a real one, from this morning):

I notice that I feel ashamed.
I think I first started feeling it while reading the first paragraph of a LessWrong post.
I care about being creative.
I'm not allowed to want to move at a comfortable pace.
I worry that if I move at a comfortable pace, my thoughts will slow down more and more over time and I'll become a vegetable.

November 2019

Shortform [Beta]
54 orthonormal 3mo

DeepMind released their AlphaStar paper a few days ago [https://deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning], having reached Grandmaster level at the partial-information real-time strategy game StarCraft II over the summer. This is very impressive, and yet less impressive than it sounds. I used to watch a lot of StarCraft II (I stopped interacting with Blizzard recently because of how they rolled over for China), and over the summer there were many breakdowns of AlphaStar games once players figured out how to identify the accounts.

The impressive part is getting reinforcement learning to work at all in such a vast state space - that took breakthroughs beyond what was necessary to solve Go and beat Atari games. AlphaStar had to have a rich enough set of potential concepts (in the sense that e.g. a convolutional net ends up having concepts of different textures) that it could learn a concept like "construct building P" or "attack unit Q" or "stay out of the range of unit R" rather than just "select spot S and enter key T". This is new and worth celebrating.

The overhyped part is that AlphaStar doesn't really do the "strategy" part of real-time strategy. Each race has a few solid builds that it executes at GM level, and the unit control is fantastic, but the replays don't look creative or even especially reactive to opponent strategies. That's because there's no representation of causal thinking - "if I did X then they could do Y, so I'd better do X' instead". Instead there are many agents evolving together, and if there's an agent evolving to try Y, then the agents doing X will be replaced with agents that do X'.

(This lack of causal reasoning especially shows up in building placement, where the consequences of locating any one building here or there are minor, but the consequences of your overall SimCity are major for how your units and your opponents' units would fare if they attacked you. In one
21 Chris_Leong 2mo

Hegel - A Very Short Introduction by Peter Singer - Book Review Part 1: Freedom

Hegel is a philosopher who is notorious for being incomprehensible. In fact, for one of his books he signed a contract that assigned a massive financial penalty for missing the publishing deadline, so the book ended up being a little rushed. While there was a time when he was dominant in German philosophy, he now seems to be held in relatively poor regard, and his main importance is seen to be historical. So he's not a philosopher that I was really planning to spend much time on. Given this, I was quite pleased to discover this book promising to give me A Very Short Introduction, especially since it is written by Peter Singer, a philosopher who writes and thinks rather clearly.

After reading this book, I still believe that most of what Hegel wrote was pretentious nonsense, but the one idea that struck me as the most interesting was his conception of freedom. A rough definition of freedom might be ensuring that people are able to pursue whatever it is that they prefer. Hegel is not a fan of abstract definitions of freedom which treat all preferences the same and don't enquire where they come from. In his perspective, most of our preferences are purely a result of the context in which we exist, and so such an abstract definition of freedom is merely the freedom to be subject to social and historical forces. Since we did not choose our desires, he argues that we are not free when we act from our desires. Hegel argues that "every condition of comfort reveals in turn its discomfort, and these discoveries go on for ever". One such example would be the marketing campaigns to convince us that sweating was embarrassing [https://www.smithsonianmag.com/history/how-advertisers-convinced-americans-they-smelled-bad-12552404/]
15 Ruby 2mo

Why I'm excited by the 2018 Review

I generally fear that perhaps some people see LessWrong as a place where people just read and discuss "interesting stuff", not much different from a subreddit on anime or something. You show up, see what's interesting that week, chat with your friends. LessWrong's content might be considered "more healthy" relative to most internet content, and many people say they browse LessWrong to procrastinate but feel less guilty about it than other browsing, but the use-case still seems a bit about entertainment.

None of the above is really a bad thing, but in my mind, LessWrong is about much more than a place for people to hang out and find entertainment in sharing joint interests. In my mind, LessWrong is a place where the community makes collective progress on valuable problems. It is an ongoing discussion where we all try to improve our understanding of the world and ourselves. It's not just play or entertainment – it's about getting somewhere. It's as much like an academic journal where people publish and discuss important findings as it is like an interest-based subreddit.

And all this makes me really excited by the LessWrong 2018 Review. The idea of the review is to identify posts that have stood the test of time and have made lasting contributions to the community's knowledge and meaningfully impacted people's lives. It's about finding the posts that represent the progress we've made.

During the design of the review (valiantly driven by Raemon), I was apprehensive that people would not feel motivated by the process and put in the necessary work. But less than 24 hours after launching, I'm excited by the nominations [https://www.lesswrong.com/nominations] and what people are writing in their nomination comments. Looking at the list of nominations so far and reading the comments, I'm thinking "Yes! This is a list showing the meaningful progress the LW community has made. We are not just a news or entertainment site
14 Daniel Kokotajlo 3mo

It seems to me that human society might go collectively insane sometime in the next few decades. I want to be able to succinctly articulate the possibility and why it is plausible, but I'm not happy with my current spiel. So I'm putting it up here in the hopes that someone can give me constructive criticism. I am aware of three mutually-reinforcing ways society could go collectively insane:

1. Echo chambers/filter bubbles/polarization: Arguably political polarization [https://en.wikipedia.org/wiki/Political_polarization] is increasing across the world of liberal democracies today. Perhaps the internet has something to do with this--it's easy to self-select into a newsfeed and community that reinforces and extremizes your stances on issues. Arguably recommendation algorithms have contributed to this problem in various ways--see e.g. "Sort by controversial" [https://slatestarcodex.com/2018/10/30/sort-by-controversial/] and Stuart Russell's claims in Human Compatible. At any rate, perhaps some combination of new technology and new cultural or political developments will turbocharge this phenomenon. This could lead to civil wars, or more mundanely, societal dysfunction. We can't coordinate to solve collective action problems relating to AGI if we are all arguing bitterly with each other about culture war issues.

2. Deepfakes/propaganda/persuasion tools: Already a significant portion of online content is deliberately shaped by powerful political agendas--e.g. Russia, China, and the US political tribes. Much of the rest is deliberately shaped by less powerful apolitical agendas, e.g. corporations managing their brands or teenagers in Estonia making money by spreading fake news during US elections. Perhaps this trend will continue; technology like chatbots, language models, deepfakes, etc. might make it cheaper and more effective to spew this sort of propaganda, to the point where most onlin
12 TurnTrout 2mo

From my Facebook:

My life has gotten a lot more insane over the last two years. However, it's also gotten a lot more wonderful, and I want to take time to share how thankful I am for that. Before, life felt like... a thing that you experience, where you score points and accolades and check boxes. It felt kinda fake, but parts of it were nice. I had this nice cozy little box that I lived in, a mental cage circumscribing my entire life. Today, I feel (much more) free.

I love how curious I've become, even about "unsophisticated" things. Near dusk, I walked the winter wonderland of Ogden, Utah with my aunt and uncle. I spotted this gorgeous red ornament hanging from a tree, with a hunk of snow stuck to it at north-east orientation. This snow had apparently decided to defy gravity. I just stopped and stared. I was so confused. I'd kinda guessed that the dry snow must induce a huge coefficient of static friction, hence the winter wonderland. But that didn't suffice to explain this. I bounded over and saw the smooth surface was iced, so maybe part of the snow melted in the midday sun, froze as evening advanced, and then the part-ice part-snow chunk stuck much more solidly to the ornament. Maybe that's right, and maybe not. The point is that two years ago, I'd have thought this was just "how the world worked", and it was up to physicists to understand the details. Whatever, right? But now, I'm this starry-eyed kid in a secret shop full of wonderful secrets. Some secrets are already understood by some people, but not by me. A few secrets I am the first to understand. Some secrets remain unknown to all. All of the secrets are enticing.

My life isn't always like this; some days are a bit gray and draining. But many days aren't, and I'm so happy about that. Socially, I feel more fascinated by people in general, more eager to hear what's going on in their lives, more curious what it feels like to be them that day. In particular, I've fallen in love with the rationalist and

October 2019

Shortform [Beta]
41 DanielFilan 3mo

Hot take: if you think that we'll have at least 30 more years of future where geopolitics and nations are relevant, I think you should pay at least 50% as much attention to India as to China. Similarly large population, similarly large number of great thinkers and researchers. Currently seems less 'interesting', but that sort of thing changes over 30-year timescales. As such, I think there should probably be some number of 'India specialists' in EA policy positions that isn't dwarfed by the number of 'China specialists'.
37 elityre 3mo

New post: Some notes on Von Neumann, as a human being [https://musingsandroughdrafts.wordpress.com/2019/10/26/some-notes-on-von-neumann-as-a-human-being/]

I recently read Prisoner's Dilemma, which is half an introduction to very elementary game theory, and half a biography of John Von Neumann, and watched this [https://youtu.be/vLbllFHBQM4] old PBS documentary about the man. I'm glad I did.

Von Neumann has legendary status in my circles, as the smartest person ever to live. [1] Many times I've written the words "Von Neumann Level Intelligence" in an AI strategy document, or speculated [http://www.overcomingbias.com/2014/07/30855.html#comment-4174545474] about how many coordinated Von Neumanns it would take to take over the world. (For reference, I now think that 10 is far too low, mostly because he didn't seem to have the entrepreneurial or managerial dispositions.)

Learning a little bit more about him was humanizing. Yes, he was the smartest person ever to live, but he was also an actual human being, with actual human traits. Watching this first clip [https://www.youtube.com/watch?v=vLbllFHBQM4], I noticed that I was surprised by a number of things:

1. That VN had an accent. I had known that he was Hungarian, but somehow it had never quite propagated that he would speak with a Hungarian accent.
2. That he was of middling height (somewhat shorter than the presenter he's talking to).
3. The thing he is saying is the sort of thing that I would expect to hear from any scientist in the public eye: "science education is important." There is something revealing about Von Neumann, despite being the smartest person in the world, saying basically what I would expect Neil DeGrasse Tyson to say in an interview. A lot of the time he was wearing his "scientist / public intellectual" hat, not his "smartest person ever to live" hat.

Some other notes of interest: He was not a skilled poker player, which punctured my assumption that Von Neumann was om
29 Daniel Kokotajlo 4mo

My baby daughter was born two weeks ago, and in honor of her existence I'm building a list of about 100 technology-related forecasting questions, which will resolve in 5, 10, and 20 years. Questions like "By the time my daughter is 5/10/20 years old, the average US citizen will be able to hail a driverless taxi in most major US cities." (The idea is, tying it to my daughter's age will make it more fun and also increase the likelihood that I actually go back and look at it 10 years later.) I'd love it if the questions were online somewhere so other people could record their answers too. Does this seem like a good idea? Hive mind, I beseech you: help me spot ways in which this could end badly! On a more positive note, any suggestions for how to do it? Any expressions of interest in making predictions with me? Thanks!

EDIT: Now it's done. Though I have yet to import it to Foretold.io, it works perfectly fine in spreadsheet form [https://docs.google.com/spreadsheets/d/1PmMRSgwdmRWr7xy7gXUFfE1-O36cpPfLgKflp_JFT1I/edit?usp=sharing].
25 Vaniver 3mo

[Meta: this is normally something I would post on my tumblr [https://vaniver.tumblr.com/], but instead am putting on LW as an experiment.]

Sometimes, in games like Dungeons and Dragons, there will be multiple races of sapient beings, with humans as a sort of baseline. Elves are often extremely long-lived, but most handlings of this I find pretty unsatisfying. Here's a new take, that I don't think I've seen before (except the Ell in Worth the Candle [https://archiveofourown.org/works/11478249/chapters/25740126] have some mild similarities):

Humans go through puberty at about 15 and become adults around 20, lose fertility (at least among women) at about 40, and then become frail at about 60. Elves still 'become adults' around 20, in that a 21-year-old elf adventurer is as plausible as a 21-year-old human adventurer, but they go through puberty at about 40 (and lose fertility at about 60-70), and then become frail at about 120. This has a few effects:

* The peak skill of elven civilization is much higher than the peak skill of human civilization (as a 60-year-old master carpenter has had only ~5 decades of skill growth, whereas a 120-year-old master carpenter has had ~11). There's also much more of an 'apprenticeship' phase in elven civilization (compare modern academic society's "you aren't fully in the labor force until ~25" to a few centuries ago, when it would have happened at 15), aided by elves spending longer in the "only interested in acquiring skills" part of 'childhood' before getting to the 'interested in sexual market dynamics' part of childhood.
* Young elves and old elves are distinct in some of the ways human children and adults are distinct, but not others; the 40-year-old elf who hasn't started puberty yet has had time to learn 3 different professions and build a stable independence, whereas the 12-year-old human who hasn't started puberty yet is just starting to operate as an independent entity. And so sometimes
22 Vaniver 4mo

People's stated moral beliefs are often gradient estimates instead of object-level point estimates. This makes sense if arguments from those beliefs are pulls on the group epistemology, and not if those beliefs are guides for individual action. Saying "humans are a blight on the planet" would mean something closer to "we should be more environmentalist on the margin" instead of "all things considered, humans should be removed." You can probably imagine how this can be disorienting, and how there's a meta issue: the point-estimate view can see what it's doing in a way that the gradient view might not.
