All Posts

Sorted by Magic (New & Upvoted)

Tuesday, September 17th 2019

Shortform [Beta]
3 · FactorialCode · 13h Meta-philosophy hypothesis: Philosophy is the process of reifying fuzzy concepts that humans use. By "fuzzy concepts" I mean things where we can say "I know it when I see it" but we might not be able to describe what "it" is. Examples that I believe support the hypothesis: * This shortform is about the philosophy of "philosophy", and this hypothesis is an attempt at an explanation of what we mean by "philosophy". * In epistemology, Bayesian epistemology is a hypothesis that explains the process of learning. * In ethics, an ethical theory attempts to make explicit our moral intuitions. * A clear explanation of consciousness and qualia would be considered philosophical progress.
3 · Spiracular · 14h One of my favorite little tidbits from working on this post []: realizing that idea inoculation and the Streisand effect are opposite sides of the same heuristic.
3 · Spiracular · 14h Bubbles in Thingspace. It occurred to me recently that, by analogy with ML, definitions might occasionally be more like "boundaries and scoring-algorithms in thingspace" than clusters per se (messier! no central example! no guaranteed contiguity!). Given the need to coordinate around definitions, most of them are going to have a simple and somewhat-meaningful center... but for some words, I suspect there are dislocated "bubbles" that use the same word for a completely different concept. Homophones are one of the clearest examples.
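The "boundaries and scoring-algorithms" picture above can be made concrete with a toy sketch (my own construction, not from the post): a single label whose acceptance region in a one-dimensional "thingspace" is two disconnected bubbles, with no single central example.

```python
# Toy illustration: a "definition" as a scoring function over thingspace
# rather than a cluster around one prototype. The acceptance region here
# is disconnected -- two "bubbles" sharing the same word/label.
def score(x: float) -> float:
    # high score near x=0 and near x=10, low everywhere else
    return max(1 - abs(x), 1 - abs(x - 10))

def in_definition(x: float) -> bool:
    # membership is just "score above threshold" -- nothing guarantees
    # the resulting region is contiguous
    return score(x) > 0.5

print([in_definition(x) for x in (0.2, 5.0, 9.8)])  # [True, False, True]
```

Homophones fit this picture: the same surface form scores highly in two regions of concept-space that have nothing to do with each other.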

Monday, September 16th 2019

Personal Blogposts
2 · [Event] SSC Meetups Everywhere · 645 South Clark Street, Chicago · Sep 28th
Shortform [Beta]
14 · mr-hire · 1d Been mulling over doing a podcast in which each episode is based on acquiring a particular skillset (self-love, focus, making good investments) instead of just interviewing a particular person. I interview a few people who have a particular skill (e.g. self-love, focus, creating cash flow businesses), and model the cognitive strategies that are common between them. Then interview a few people who struggle a lot with that skill, and model the cognitive strategies that are common between them. Finally, interview a few people who used to be bad at the skill but are now good, and model the strategies that were common in making the switch. The episode is cut to tell a narrative of what skills are to be acquired, what beliefs/attitudes need to be let go of and acquired, and the process to acquire them, rather than focusing on interviewing a particular person. If there's enough interest, I'll do a pilot episode. Comment with what skillset you'd love to see a pilot episode on. Upvote if you'd have a 50% or more chance of listening to the first episode.
6 · Matthew Barnett · 18h There's a phenomenon I currently hypothesize to exist where direct attacks on the problem of AI alignment are criticized much more often than indirect attacks. If this phenomenon exists, it could be advantageous to the field in the sense that it encourages thinking deeply about the problem before proposing solutions. But it could also be bad because it disincentivizes work on direct attacks on the problem (if one is criticism-averse and would prefer their work be seen as useful). I have arrived at this hypothesis from my observations: I have watched people propose solutions only to be met with immediate and forceful criticism from others, while other people proposing non-solutions and indirect analyses are given little criticism at all. If this hypothesis is true, I suggest it is partly or mostly because direct attacks on the problem are easier to defeat via argument, since their assumptions are made plain. If this is so, I consider it to be a potential hindrance to thought, since direct attacks are often the type of thing that leads to the most deconfusion -- not because the direct attack actually worked, but because in explaining how it failed, we learned what definitely doesn't work.
6 · TurnTrout · 18h I seem to differently discount different parts of what I want. For example, I'm somewhat willing to postpone fun to low-probability high-fun futures, whereas I'm not willing to do the same with romance.

Sunday, September 15th 2019

Shortform [Beta]
5 · An1lam · 2d Epistemic status: Thinking out loud. Introducing the Question: Scientific puzzle I notice I'm quite confused about: what's going on with the relationship between thinking and the brain's energy consumption? On one hand, I'd always been told that thinking harder sadly doesn't burn more energy than normal activity. I believed that and had even come up with a plausible story about how evolution optimizes for genetic fitness, not intelligence, and introspective access is pretty bad as it is, so it's not that surprising that we can't crank up our brain's energy consumption to think harder. This seemed to jibe with the notion that our brain puts way more computational resources towards perceiving and responding to perception than abstract thinking. It also fit well with recent results calling ego depletion into question, and with the framework in which mental energy depletion is the result of a neural opportunity cost calculation []. Going even further, studies like this one [] left me with the impression that experts tend to require less energy to accomplish the same mental tasks as novices. Again, this seemed plausible under the assumption that experts' brains developed some sort of specialized modules over the thousands of hours of practice they'd put in. I still believe that thinking harder doesn't use more energy, but I'm now much less certain about the reasons I'd previously given for this. Chess Players' Energy Consumption: This recent ESPN (of all places) article [] about chess players' energy consumption during tournaments has me questioning this story. The two main points of the article are: 1. Chess players burn a lot of energy during tournaments, potentially on the order of 6000 ca
4 · Ruby · 2d Converting this from a Facebook comment to LW Shortform. A friend complains about recruiters who send repeated emails saying things like "just bumping this to the top of your inbox" when they have no right to be trying to prioritize their emails over everything else my friend might be receiving from friends, travel plans, etc. The truth is they're simply paid to spam. Some discussion of repeated messaging behavior ensued. These are my thoughts: I feel conflicted about repeatedly messaging people, all the following being factors in this conflict: * Repeatedly messaging can be making yourself an asshole that gets through someone's unfortunate asshole filter []. * There's an angle from which repeatedly, manually messaging people is a costly signal that their response would be valuable to you. Admittedly this might not filter in the desired ways. * I know that many people are in fact disorganized and lose emails or otherwise don't have systems for getting back to you, such that failure to get back to you doesn't mean they didn't want to. * There are other people who have extremely good systems. I'm always impressed by the super busy, super well-known people who get back to you reliably after three weeks. Systems. I don't always know where someone falls between "has no systems, relies on other people to message repeatedly" vs "has impeccable systems but due to volume of emails will take two weeks." * The overall incentives are such that most people probably shouldn't generally reveal which they are. * Sometimes the only way to get things done is to bug people. And I hate it. I hate nagging, but given other people's unreliability, it's either you bugging them or a good chance of not getting some important thing. * A wise, well-respected, business-experienced rationalist told me many years ago that if you want somethi
3 · Evan Rysdam · 2d I would appreciate an option to hide the number of votes that posts have. Maybe not hide entirely, but set them to only display at the bottom of a post, and neither at the top nor on the front page. With the way votes are currently displayed, I think I'm getting biased for/against certain posts before I even read them, just based on the number of votes they have.
1 · crabman · 2d Many biohacking guides suggest using melatonin. Does liquid melatonin [] spoil under high temperature if put in tea (95 degrees Celsius)? More general question: how do I even find answers to questions like this one?

Saturday, September 14th 2019

Personal Blogposts
2 · [Event] SSC Meetups Everywhere: Rochester, NY · 200 East Avenue, Rochester · Sep 21st
2 · [Event] Dublin SSC/LW/EA "Meetups Everywhere" Meetup · Stephen Court, GF1, Saint Stephen's Green, Dublin 2 · Sep 21st
0 · [Event] SSC Meetups Everywhere: Albany NY · 260 Lark Street, Albany · Sep 21st
0 · [Event] SSC Meetups Everywhere: Auckland, New Zealand · 184 Karangahape Road, Auckland CBD, Auckland · Sep 22nd
0 · [Event] SSC Meetups Everywhere: Baltimore, MD · 1000 Hilltop Circle, Baltimore · Sep 29th
0 · [Event] SSC Meetups Everywhere: Berkeley, CA · 2412 Martin Luther King Junior Way, Berkeley · Oct 11th
Shortform [Beta]
23 · Ruby · 4d Selected Aphorisms from Francis Bacon's Novum Organum. I'm currently working to format Francis Bacon's Novum Organum [] as a LessWrong sequence. It's a moderate-sized project, as I have to work through the entire work myself and write an introduction which does Novum Organum justice and explains the novel move of taking an existing work and posting it on LessWrong (short answer: NovOrg is some serious hardcore rationality and contains central tenets of the LW foundational philosophy notwithstanding being published back in 1620, not to mention that Bacon and his works are credited with launching the modern Scientific Revolution). While I'm still working on this, I want to go ahead and share some of my favorite aphorisms from it so far: 3. . . . The only way to command reality is to obey it . . . 9. Nearly all the things that go wrong in the sciences have a single cause and root, namely: while wrongly admiring and praising the powers of the human mind, we don’t look for true helps for it. Bacon sees the unaided human mind as entirely inadequate for scientific progress. He sees the way forward for scientific progress as constructing tools/infrastructure/methodology to help the human mind think/reason/do science. 10. Nature is much subtler than are our senses and intellect; so that all those elegant meditations, theorizings and defensive moves that men indulge in are crazy—except that no-one pays attention to them. [Bacon often uses a word meaning ‘subtle’ in the sense of ‘fine-grained, delicately complex’; no one current English word will serve.] 24. There’s no way that axioms •established by argumentation could help us in the discovery of new things, because the subtlety of nature is many times greater than the subtlety of argument. But axioms •abstracted from particulars in the proper way often herald the discovery of new particulars and point them out, thereby returning the sciences to their active status. Bacon repeat
15 · crabman · 3d A competition on solving math problems via AI is coming. [] * The problems are from the International Mathematical Olympiad (IMO). * They want to formalize all the problems in the language of the Lean theorem prover. They haven't figured out how to do that, e.g. how to formalize problems of the form "determine the set of objects satisfying the given property", as can be seen in []. * A contestant must submit a computer program that will take a problem's description as input and output a solution and its proof in Lean. I would guess that this is partly a way to promote Lean. I think it would be interesting to pose questions about this on Metaculus.
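The formalization difficulty mentioned above can be made concrete with a toy example (my own invention, in assumed Lean 4 / mathlib-style syntax, not from the competition): a "determine the set" problem only becomes a provable statement once the answer set is already written into it, which is exactly what a contestant's program would have to discover.

```lean
-- Hypothetical sketch: "determine all n such that n^2 is even" can only
-- be stated as a theorem once the answer ("the even n") is plugged in.
-- The proof itself is left as a placeholder here.
theorem determine_even_squares (n : Nat) :
    n ^ 2 % 2 = 0 ↔ n % 2 = 0 := by
  sorry
```

So the informal problem asks for the right-hand side of the `↔` as well as the proof, while the formal statement must already contain it; that is the gap the organizers reportedly haven't resolved.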
6 · TekhneMakre · 3d Referential distance or referential displacement. Like inferential distance [], but for reference instead of inference. Inferential distance is when one perspective holds conclusions that require many steps of inference to justify, and the other perspective hasn't gone through those steps of inference (perhaps because the inferences are unsound). Referential distance is when one perspective has a term that points to something, that term is most easily defined using many other terms, and the other perspective doesn't know what those terms are supposed to point to. Referential distance, unlike normal distance, is directional: you are distant from an ancient Greek yarn spinner in that they know what a spindle and whorl are, and perhaps you don't; and separately, you know what a computer is, and they don't. (So really RD should be called referential displacement, and likewise for inferential displacement. It's the governor's job to rectify this.) Referential distance breaks up into translational distance (we have yet to match up the terms in our respective idiolects) and conceptual distance (you have concepts for things that I don't have concepts for, and/or we carve things up differently). Be careful not to mistakenly take a referential distance to be a translational distance when really it's conceptual; much opportunity to grow is lost in this way.
5 · Raemon · 4d I know I'll go to programmer hell for asking this... but... does anyone have a link to a GitHub repo that tried really hard to use jQuery to build their entire website, investing effort into doing some sort of weird 'jQuery-based components' thing for maintainable, scalable development? People tell me this can't be done without turning into terrifying spaghetti code, but I dunno, I feel sort of like the guy in this xkcd [] and I just want to know for sure.

Friday, September 13th 2019

Shortform [Beta]
17 · habryka · 4d Thoughts on impact measures and making AI traps. I was chatting with TurnTrout today about impact measures, and ended up making some points that I think are good to write up more generally. One of the primary reasons why I am usually unexcited about impact measures is that I have a sense that they often "push the confusion into a corner" in a way that actually makes solving the problem harder. As a concrete example, I think a bunch of naive impact regularization metrics basically end up shunting the problem of "get an AI to do what we want" into the problem of "prevent the agent from interfering with other actors in the system". The second one sounds easier, but mostly just turns out to also require a coherent concept and reference of human preferences to resolve, and you got very little from pushing the problem around that way, and sometimes get a false sense of security because the problem appears to be solved in some of the toy problems you constructed. I am definitely concerned that TurnTrout's AUP does the same, just in a more complicated way, but am a bit more optimistic than that, mostly because I do have a sense that in the AUP case there is actually some meaningful reduction going on, though I am unsure how much. In the context of thinking about impact measures, I've also recently been thinking about the degree to which "trap-thinking" is actually useful for AI Alignment research. I think Eliezer was right in pointing out that a lot of people, when first considering the problem of unaligned AI, end up proposing some kind of simple solution like "just make it into an oracle" and then consider the problem solved.
I think he is right that it is extremely dangerous to consider the problem solved after solutions of this type, but it isn't obvious that there isn't some good work that can be done that is born out of the frame of "how can I trap the AI and make it marginally harder for it to be dangerous, basically pretending it's just a slightly smarter human
9 · Matthew Barnett · 5d I agree with Wei Dai that we should use our real names [] for online forums, including LessWrong. I want to briefly list some benefits of using my real name: * It means that people can easily recognize me across websites, for example from Facebook and LessWrong simultaneously. * Over time my real name has been stable whereas my usernames have changed quite a bit over the years. For some very old accounts, such as those I created 10 years ago, this means that I can't remember my account name. Using my real name would have averted this situation. * It motivates me to put more effort into my posts, since I don't have any disinhibition from being anonymous. * It often looks more formal than a silly username, and that might make people take my posts more seriously than they otherwise would have. * Similar to what Wei Dai said, it makes it easier for people to recognize me in person, since they don't have to memorize a mapping from usernames to real names in their heads. That said, there are some significant downsides, and I sympathize with people who don't want to use their real names. * It makes it much easier for people to dox you. There are some very bad ways that this can manifest. * If you say something stupid, your reputation is now directly on the line. Some people change accounts every few years, as they don't want to be associated with the stupid person they were a few years ago. * Sometimes disinhibition from being anonymous is a good way to spur creativity. I know that I was a lot less careful in my previous non-real-name accounts, and my writing style was different -- perhaps in a way that made my writing better. * Your real name might sound boring, whereas your online username can sound awesome.
9 · Hasturtimesthree · 5d Most people in the rationality community are more likely to generate correct conclusions than I am, and are in general better at making decisions. Why is that the case? Because they have more training data, and are in general more competent than I am. They actually understand the substrate on which they make decisions, and what is likely to happen, and therefore have reason to trust themselves based on their past track record, while I do not. Is the solution therefore just "git gud"? This sounds unsatisfactory, it compresses competence to a single nebulous attribute rather than recommending concrete steps. It is possible that there are in fact generalizable decisionmaking algorithms/heuristics that I am unaware of, that I can use to actually generate decisions that have good outcomes. It might then be possible that I haven't had enough training? When was the last time I actually made a decision unguided by the epistemic modesty shaped thing that I otherwise use, because relying on my own thought processes is known to have bad outcomes and mess things up? In which case the solution might be to have training in a low-stakes environment, where I can mess up without consequence, and learn from that. Problem: these are hard to generate in a way that carries over cross-domain. If I trust my decision process about which tech to buy in Endless Legend, that says nothing about my decision process about what to do when I graduate. Endless Legend is simple, and the world is complicated. I can therefore fully understand: "this is the best tech to research: I need to convert a lot of villages, so I need influence to spend on that, and this tech generates a lot of it". While figuring out what path to take such that the world benefits the most requires understanding what the world needs, an unsolved problem in itself, and the various effects each path is likely to have.
Or on even the small scale, where to put a particular object in the REACH that doesn't seem to have an obv
1 · JustMaier · 5d I've recently started to try and participate more in online discussions like LessWrong (technically this is my first post here). However, in doing so I've realized what feels like a gaping hole in digital identity. No one knows who I am, and how could they? They see my name, my photo, my short bio, and they have no way of knowing the complex person behind it all. In my experience, when I interact with others, I feel like I am often misunderstood because the people I'm interacting with don't have adequate context of me to understand where my perspective is coming from. This plays out positively and negatively. Ultimately it causes people to unwittingly apply bias because they don't have the information they need to make sense of why I'm saying what I'm saying and how who I am plays a factor in what I'm trying to communicate. It seems to me that currently, the most effective way to establish a digital identity is by surrounding yourself with individuals with similar affinities and building social networks that establish your identity within those fields. This seems like a complicated and inefficient process and I'm curious to hear if I'm way off base and what others see as ways to establish a powerful digital identity.

Thursday, September 12th 2019

Shortform [Beta]
22 · jimrandomh · 6d Eliezer has written about the notion of security mindset [], and there's an important idea that attaches to that phrase, which some people have an intuitive sense of and ability to recognize, but I don't think Eliezer's post quite captured the essence of the idea or presented anything like a usable roadmap of how to acquire it. An1lam's recent shortform post [] talked about the distinction between engineering mindset and scientist mindset, and I realized that, with the exception of Eliezer and perhaps a few people he works closely with, all of the people I know of with security mindset are engineer-types rather than scientist-types. That seemed like a clue; my first theory was that the reason for this is that engineer-types get to actually write software that might have security holes, and have the feedback cycle of trying to write secure software. But I also know plenty of otherwise-decent software engineers who don't have security mindset, at least of the type Eliezer described. My hypothesis is that to acquire security mindset, you have to: * Practice optimizing from a red team/attacker perspective; * Practice optimizing from a defender perspective; and * Practice modeling the interplay between those two perspectives. So a software engineer can acquire security mindset because they practice writing software which they don't want to have vulnerabilities, they practice searching for vulnerabilities (usually as an auditor simulating an attacker rather than as an actual attacker, but the cognitive algorithm is the same), and they practice going meta when they're designing the architecture of new projects. This explains why security mindset is very common among experienced senior engineers (who have done each of the three many times), and rare among junior engineers (who haven't yet).
7 · hereisonehand · 5d I keep seeing these articles about the introduction of artificial intelligence/data science to football and basketball strategy. What's crazy to me is that it's happening now instead of much, much earlier. The book Moneyball was published in 2003 (the movie in 2011), spreading the story of how use of statistics changed the game when it came to every aspect of managing a baseball team. After reading it, I and many others thought to ourselves "this would be cool to do in other sports" - using data would be interesting in every area of every sport (drafting, play calling, better coaching, clock management, etc). But I guess I assumed - if I thought of it, why wouldn't other people? It's kind of a wild example of the idea that "if something works a little, you should do more of it and see if it works a lot, and keep doing that until you see evidence that it's running out of incremental benefit." My assumption that the "Moneyball" space was saturated back in 2011 was completely off given that in the time between 2011 and now, one could have trained oneself from scratch in the relevant data science methods and pushed for such jobs (my intuition is that 8 years of training could get you there). So, it's not even a "right place, right time" story given the timeline. It's just - when you saw the obvious trend, did you assume that everyone else was already thinking about it, or did you jump in yourself?
6 · Spiracular · 5d While I could rattle off the benefits of "delegating" or "ops people", I don't think I've seen a highly-discrete TAP + flowchart for realizing when you're at the point where you should ask yourself "Have my easy-to-delegate annoyances added up to enough that I should hire a full-time ops person? (or more)." Many people whose time is valuable seem likely to put off making this call until they reach the point where it's glaringly obvious. Proposing an easy TAP-like decision-boundary seems like a potentially high-value post? Not my area of specialty, though.
6 · elityre · 5d New (image) post: My strategic picture of the work that needs to be done []
4 · TekhneMakre · 5d Mentally crowding out possibilities gets you stuck in local maxima. To glean the benefits of temporarily inhabiting local maxima while garnering the benefits of even better points that can be discovered via higher-energy search, acquire the ability to prevent {the idea of existing forms of X} from mentally crowding out {the possibility of, specific visualizations of, and motivation to create} better forms of X.
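The local-maxima framing above maps onto a standard optimization picture; a rough sketch (my analogy, not the author's): greedy search alone settles on the nearest peak, while keeping other starting points alive -- the "higher-energy search" -- finds the higher one.

```python
import random

def f(x: float) -> float:
    # a landscape with a local peak near x=2 (height 1) and the
    # global peak near x=8 (height 2); zero elsewhere
    return max(1 - (x - 2) ** 2, 2 - (x - 8) ** 2, 0)

def hill_climb(x: float, step: float = 0.1, iters: int = 500) -> float:
    # greedy local search: repeatedly move to the best nearby point
    for _ in range(iters):
        x = max((x - step, x, x + step), key=f)
    return x

random.seed(0)  # deterministic for the example
starts = [random.uniform(0, 10) for _ in range(5)]

greedy_only = hill_climb(1.0)                       # commits to the nearest peak
restarts = max((hill_climb(s) for s in starts), key=f)  # keeps possibilities open

print(f(greedy_only))  # stuck at the local peak, value close to 1
print(f(restarts))     # restarts reach the global peak, value close to 2
```

The shortform's advice corresponds to not letting the already-found peak (`greedy_only`) crowd out the restarts that could find something better.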

Wednesday, September 11th 2019

Shortform [Beta]
5 · TekhneMakre · 7d There's that dumb cliche about a lost item you're searching for: it's always in the last place you look. A pattern I've noticed in me: I'll want to retrieve an item, so I'll check a few places I think it's likely to be. When I don't find it I check a few other places that are less likely but plausible. These checks are all quick, because I'm not frustrated and I expect the item to be easy enough to find. Then I start getting frustrated and start looking in implausible places, and I search those places thoroughly, as I'm now expecting to have to carefully eliminate areas. Eventually I get really frustrated, and in desperation I start meticulously rechecking places I had been treating as eliminated, at which point I quickly find the item in one of the first few places I originally looked. I suspect this pattern metaphorizes well: it's always in the first place you looked, but you had to look a bit harder.
4 · Chris_Leong · 6d Book Review: The Rosie Project. Plot summary: After a disastrous series of dates, autistic genetics professor Don Tillman decides that it'd be easier to just create a survey to eliminate all of the women who would be unsuitable for him. Soon after, he meets a barmaid called Rosie who is looking for help with finding out who her father is. Don agrees to help her, but over the course of the project Don finds himself increasingly attracted to her, even though the survey suggests that she is completely unsuitable. The story is narrated in Don's voice. He tells us all about his social mishaps, while also providing some extremely straight-shooting observations on society. Should I read this?: If you're on the fence, I recommend listening to a couple of minutes, as the tone is remarkably consistent throughout, but without becoming stale. My thoughts: I found it to be very humorous, but without making fun of Don. We hear the story from his perspective and he manages to be a very sympathetic character. The romance manages to be relatively believable since Don manages to establish himself as having many attractive qualities despite his limited social skills. However, I couldn't believe that he'd think of Rosie as "the most beautiful woman in the world"; that kind of romantic idealisation is just too inconsistent with his character. His ability to learn skills quickly also stretched credibility, but it felt more believable after he dramatically failed during one instance. I felt that Don's character development was solid; I did think that he'd struggle more to change his schedule after keeping it rigid for so long, but that wasn't a major issue for me. I appreciated that by the end he had made significant growth (less strict on his expectations for a partner, not sticking so rigidly to a schedule, being more accommodating of other people's faults), but he was still largely himself.
1 · TekhneMakre · 7d For any thing X, the group concept of X is an umbrella term for ideas regarding X that are shared by members of some (implicit) group. Two kinds of group concepts: the group mutual concept of X (mutual concept for short; mutual as in "mutual knowledge") is the intersection of all the group members' beliefs about X; so if everyone knows some thing Y about X, then Y is part of the mutual concept, but Y is less a part of the mutual concept to the extent that some members don't know or disagree with Y. The group common knowledge concept of X (CK concept for short) is all the beliefs about X that are in common knowledge for the group. So Y is in the CK concept of X to the extent that group members will, when discussing X, correctly expect each other to: have Y in mind; easily understand implications of Y; coordinate action based on those implications of Y; and expect other members to do likewise.
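The mutual-concept definition above has a direct set-theoretic reading; a toy sketch (the members and beliefs are my own invented example):

```python
# The "mutual concept" of X as the intersection of each member's
# belief set about X: only beliefs every member holds survive.
beliefs = {
    "alice": {"X is red", "X is round", "X is heavy"},
    "bob":   {"X is red", "X is round"},
    "carol": {"X is red", "X is hollow"},
}

mutual_concept = set.intersection(*beliefs.values())
print(mutual_concept)  # {'X is red'}
```

The CK concept is strictly harder to model this way: it isn't just which beliefs are shared, but which beliefs members correctly expect each other (and each other's expectations, recursively) to bring to a discussion of X.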

Tuesday, September 10th 2019

Personal Blogposts
4 · [Event] San Francisco Meetup: Board Games · 170 Hawthorne St, San Francisco, CA 94107, USA · Sep 17th
1 · [Event] SSC Atlanta October Meetup · 720 Moreland Avenue Southeast, Atlanta · Oct 12th
Shortform [Beta]
29 · jp · 7d Do Anki while Weightlifting. Many rationalists appear to be interested in weightlifting. I certainly have enjoyed having a gym habit. I have a recommendation for those who do: Try studying Anki cards [] while resting between weightlifting sets. The upside is high. Building the habit of studying Anki cards is hard, and if doing it at the gym causes it to stick, you can now remember things by choice not chance. And the cost is pretty low. I rest for 90 seconds between sets, and do about 20 sets when I go to the gym. Assuming I get a minute in once the overheads are accounted for, that gives me 20 minutes of studying. I go through about 4 cards per minute, so I could do 80 cards per visit to the gym. In practice I spend only ~5 minutes studying per visit, because I don't have that many cards. I'm not too tired to concentrate. In fact, the adrenaline high makes me happy to have something mentally active to do. Probably because of this, it doesn't at all decrease my desire to go to the gym. I find I can add simple cards to my Anki deck at the gym, although the mobile app does make it slow. Give it a try! It's cheap to experiment and the value of a positive result is high.
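The arithmetic in the post is easy to sanity-check (a throwaway sketch; the numbers are the post's own estimates):

```python
# Back-of-envelope check of the post's Anki-at-the-gym estimates.
sets_per_visit = 20
usable_minutes_per_rest = 1   # ~90s rest, minus overhead
cards_per_minute = 4

study_minutes = sets_per_visit * usable_minutes_per_rest
cards_per_visit = study_minutes * cards_per_minute
print(study_minutes, cards_per_visit)  # 20 80
```

Which matches the post: 20 minutes of between-set studying, about 80 cards per gym visit at 4 cards per minute.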
11 · romeostevensit · 7d A short heuristic for self-inquiry: * write down things you think are true about important areas of your life * produce counterexamples * write down your defenses/refutations of those counterexamples * come back later when you are less defensive and review whether your defenses were reasonable * if not, why not? whence the motivated reasoning? what is being protected from harm?
5 · strangepoop · 7d Is metarationality about (really tearing open) the twelfth virtue? It seems like it says "the map you have of map-making is not the territory of map-making", and gets into how to respond to it fluidly, with a necessarily nebulous strategy of applying the virtue of the Void. (this is also why it always felt like metarationality seems to only provide comments where Eliezer would've just given you the code) The parts that don't quite seem to follow is where meaning-making and epistemology collide. I can try to see it as a "all models are false, some models are useful" but I'm not sure if that's the right perspective.
5 · Adam Scholl · 7d TIL that (according to this study [], at least) adenovirus serotype 36 is present in 30% of obese humans, but only 11% of non-obese humans. The virus appears to cause obesity in chickens, mice, rats and monkeys. It may work (paper [], pop summary []) by binding to and permanently activating the PI3K enzyme, causing it to activate the insulin signaling pathway even when insulin isn't present. Previous discussion [] on LessWrong.
4 · romeostevensit · 7d When young, you mostly play within others' reward structures. Many choose which structure to play in based on max reward. This is probably a mistake. You want to optimize for opportunity to learn how to construct reward structures.

Monday, September 9th 2019
Mon, Sep 9th 2019

Shortform [Beta]
18G Gordon Worley III9d If an organism is a thing that organizes, then a thing that optimizes is an optimism.
15Chris_Leong8d Book Review: So Good They Can't Ignore You by Cal Newport

This book makes an interesting contrast to The 4-Hour Workweek. Tim Ferriss seems to believe that the purpose of work should be to make as much money as possible in the least amount of time, and that meaning can then be pursued during your newly available free time. Tim gives you some productivity tips in the hope that they will make you valuable enough to negotiate flexibility in terms of how, when and where you complete your work, plus some dirty tricks as well.

Cal Newport's book is similar in that it focuses on becoming valuable enough to negotiate a job that you'll love, and it downplays the importance of pursuing your passions in your career. However, while Tim extols the virtues of being a digital nomad, Cal Newport emphasises self-determination theory: autonomy, competence and relatedness. That is, the freedom to decide how you pursue your work, the satisfaction of doing a good job, and the pleasure of working with people who you feel connected to. He argues that these traits are rare and valuable, and so if you want such a job you'll need skills that are rare and valuable to offer in return.

That's the core of his argument against pre-existing passion: passions tend to cluster into a few fields such as music, arts or sports, and only a very few people can ever make these the basis of their careers. Even for those who are interested in less insanely competitive pursuits, such as becoming a yoga instructor or organic farmer, he cautions against pursuing the dream of just quitting your job one day. That would involve throwing away all of the career capital that you've accumulated, and hence your negotiating power. Further, it can easily lead to restlessness: jumping from career to career, all the while searching for the "one" that meets an impossibly high bar.
Here are some examples of the kind of path he endorses: * Someone becoming an organic farmer after ten years of growing and selling foo
13Ruby8d A random value walks into a bar. A statistician swivels around in her chair, one tall boot unlaced and an almost full Manhattan sitting a short distance from her right elbow.

"I've been expecting you," she says.
"Have you been waiting long?" responds the value.
"Only for a moment."
"Then you're very on point."
"I've met enough of your kind that there's little risk of me wasting time."
"I assure you I'm quite independent."
"Doesn't mean you're not drawn from the same mold."
"Well, what can I do for you?"
"I was hoping to gain your confidence..."
7TekhneMakre8d A man wanted to study the form and the flow of a stream. The stream went by in a confusing blur, too fast to see and study, so the man built a series of small dams across the stream, to pause its race. The man soon lost interest in the water in the dams, where the alluvium was still and the vortices didn't play and the fish were lazy and the water was dead.
6Hazard8d Quick description of a pattern I have that can muddle communication. "So I've been mulling over this idea, and my original thoughts have changed a lot after I read the article, but not because of what the article was trying to persuade me of ..."

General Pattern: There is a concrete thing I want to talk about (a new idea - ???). I don't say what it is; I merely provide a placeholder reference for it ("this idea"). Before I explain it, I begin applying a bunch of modifiers, typically by giving a lot of context: "This idea is a new take on a domain I've previously had thoughts on", "there was an article involved in changing my mind", "that article wasn't the direct cause of the mind change".

This confuses a lot of people. My guess is that interpreting statements like this requires a lot more working memory. If I introduce the main subject and then modify it, people can "mentally modify" the subject as I go along. If I don't give them the subject, they need to store a stack of modifiers, wait until I get to the subject, and then apply all those modifiers they've been storing.

I notice I do this most when I expect the listener will have a negative gut reaction to the subject, and I'm trying to preemptively do a bunch of explanation before introducing it. Anyone notice anything similar?

Sunday, September 8th 2019
Sun, Sep 8th 2019

Shortform [Beta]
23Rob Bensinger10d Facebook comment I wrote in February, in response to the question 'Why might having beauty in the world matter?':

I assume you're asking about why it might be better for beautiful objects in the world to exist (even if no one experiences them), and not asking about why it might be better for experiences of beauty to exist. [... S]ome reasons I think this:

1. If it cost me literally nothing, I feel like I'd rather there exist a planet that's beautiful, ornate, and complex than one that's dull and simple -- even if the planet can never be seen or visited by anyone, and has no other impact on anyone's life. This feels like a weak preference, but it helps get a foot in the door for beauty. (The obvious counterargument here is that my brain might be bad at simulating the scenario where there's literally zero chance I'll ever interact with a thing; or I may be otherwise confused about my values.)

2. Another weak foot-in-the-door argument: People seem to value beauty, and some people claim to value it terminally. Since human value is complicated and messy and idiosyncratic (compare person-specific ASMR triggers or nostalgia triggers or culinary preferences) and terminal and instrumental values are easily altered and interchanged in our brain, our prior should be that at least some people really do have weird preferences like that at least some of the time. (And if it's just a few other people who value beauty, and not me, I should still value it for the sake of altruism and cooperativeness.)

3. If morality isn't "special" -- if it's just one of many facets of human values, and isn't a particularly natural-kind-ish facet -- then it's likelier that a full understanding of human value would lead us to treat aesthetic and moral preferences as more coextensive, interconnected, and fuzzy. If I can value someone else's happiness inherently, without needing to experience or know about i
13lifelonglearner9d Ben Pace has a new post up on LessWrong that's asking about good exercises for rationality / general LW-adjacent stuff. I think this is a good thing to put up a bounty for, and I started thinking about what makes a good exercise.

Exercises are good because they help you further develop the material; they give you opportunities to put whatever relevant skill to use. There are differing levels of what you can be trying to assess:

* Identifying the correct idea from a group of different ones
* Summarizing the correct idea
* Transferring the idea to someone else
* Actually demonstrating whatever skill it is (if it's something you can do)
* Actually using the skill to deduce something else (if it's a model thing)

I think there's a good set of stuff to dive into here about the distinction between optimizing for pedagogy versus effectiveness. In the most stark case, you want to teach people using less potent versions of something, at least at first. Think not just training wheels on a bike, but successively more advanced models for physics or arithmetic. There's a gradual shift happening. More than that, I wonder if the two angles are greatly orthogonal.

Anyway, back to the original idea at hand. When you give people exercises, there's a sense of broad vs narrow that seems important, but I'm still teasing it out. In one sense, you can think of tests that do multiple choice vs open-ended answers. But it's not like multiple-choice questions have to suck. You could give people very plausible-sounding answers which require them to do a lot of work to determine which one is correct. Similarly, open-ended questions could allow for bullshitting. It's not exactly the format, but what sort of work it induces. At the very least, it's about pushing for more Generative content. But beyond that, it gets into pedagogy questions:

1. How can you give questions which increase in difficu
6G Gordon Worley III9d If CAIS is sufficient for AGI, then likely humans are CAIS-style general intelligences.
5benwr9d Doom circles seem hard to do outside of CFAR workshops: If I just pick the ~7 people who I most want to be in my doom circle, this might be the best doom circle for me, but it won't be the best doom circle for them, since they will mostly not know each other very well. So you might think that doing doom "circles" one-on-one would be best. But doom circles also have a sort of ceremony / spacing / high-cost-ness to them that cuts the other way: more people means more "weight" or something. And there are probably other considerations determining the optimal size. So if you wanted to have a not-at-the-end-of-a-workshop doom circle, should you find the largest clique with some minimum relationship strength in your social graph?
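The closing question maps onto a standard graph problem: find the largest clique after discarding edges below a strength threshold. A minimal sketch, where the graph data, scores, and function name are all hypothetical illustrations rather than anything from the post:

```python
from itertools import combinations

def largest_strong_clique(strength, people, min_strength):
    """Largest subset of `people` in which every pair's relationship
    strength is at least `min_strength` (brute force; fine for the
    ~7-person scale mentioned above)."""
    def strong(a, b):
        return strength.get(frozenset((a, b)), 0) >= min_strength
    for size in range(len(people), 0, -1):
        for group in combinations(people, size):
            if all(strong(a, b) for a, b in combinations(group, 2)):
                return set(group)
    return set()

# Toy graph: edge weights are made-up "relationship strengths".
strength = {
    frozenset(("A", "B")): 9,
    frozenset(("A", "C")): 8,
    frozenset(("B", "C")): 7,
    frozenset(("C", "D")): 9,
}
circle = largest_strong_clique(strength, ["A", "B", "C", "D"], min_strength=7)
print(sorted(circle))  # ['A', 'B', 'C']
```

Exact clique-finding is NP-hard in general, but for a personal social graph of a few dozen people a brute-force or off-the-shelf search is entirely practical.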
3Xenotech10d My shortest form is the known unknown: what hurdle of unacknowledged individual shortcoming prevents my being in accord with others? (This ingroup) I believe it is the missing, in-person declaration of mutuality, combined with declarations of the value of each person - to have in person communion with those estranged by supposed ideological differences, that would be unimaginably useful. Yet I assume my "alignment" is the problem. Or some character assessment which, perhaps only partially informed, drives intentional exclusion. Either way, it would be tremendous to discover a change in parameters benefiting the widest number of persons, yet including myself and others.
