All Posts

Sorted by Magic (New & Upvoted)

December 2019

Shortform [Beta]
53 · Buck · 6d: [I'm not sure how good this is, it was interesting to me to think about, idk if it's useful, I wrote it quickly.] Over the last year, I internalized Bayes' Theorem much more than I previously had; this led me to notice that when I applied it in my life it tended to have counterintuitive results; after thinking about it for a while, I concluded that my intuitions were right and I was using Bayes wrong. (I'm going to call Bayes' Theorem "Bayes" from now on.) Before I can tell you about that, I need to make sure you're thinking about Bayes in terms of ratios rather than fractions. Bayes is enormously easier to understand and use when described in terms of ratios. For example: Suppose that 1% of women have a particular type of breast cancer, and a mammogram is 20 times more likely to return a positive result if you do have breast cancer, and you want to know the probability that you have breast cancer if you got that positive result. The prior probability ratio is 1:99, and the likelihood ratio is 20:1, so the posterior odds are 1×20 : 99×1 = 20:99, so you have a probability of 20/(20+99) = 20/119 of having breast cancer. I think that this is absurdly easier than using the fraction formulation. I think that teaching the fraction formulation is the single biggest didactic mistake that I am aware of in any field. --- Anyway, a year or so ago I got into the habit of calculating things using Bayes whenever they came up in my life, and I quickly noticed that Bayes seemed surprisingly aggressive to me. For example, the first time I went to the Hot Tubs of Berkeley, a hot tub rental place near my house, I saw a friend of mine there. I wondered how regularly he went there. Consider the hypotheses of "he goes here three times a week" and "he goes here once a month". The likelihood ratio is about 12x in favor of the former hypothesis. So if I previously was ten to one against the three-times-a-week hyp
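To make the odds formulation concrete, here is a minimal Python sketch (my illustration, not from the post) running the mammogram numbers above; the helper name and structure are assumptions for exposition.

```python
from fractions import Fraction

def posterior_odds(prior_for, prior_against, likelihood_ratio):
    """Odds-form Bayes: multiply the prior odds by the likelihood ratio."""
    return prior_for * likelihood_ratio, prior_against

# Mammogram example: prior odds 1:99; a positive result is 20x more likely given cancer.
for_, against = posterior_odds(1, 99, 20)
print(f"posterior odds {for_}:{against}")      # posterior odds 20:99
print(float(Fraction(for_, for_ + against)))   # 0.168..., i.e. 20/119
```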
42 · BrienneYudkowsky · 1d: Some advice to my past self about autism: Learn about what life is like for people with a level 2 or 3 autism diagnosis. Use that reference class to predict the nature of your problems and the strategies that are likely to help. Only after making those predictions, adjust for your own capabilities and circumstances. Try this regardless of how you feel about calling yourself autistic or seeking a diagnosis. Just see what happens. Many stereotypically autistic behaviors are less like symptoms of an illness, and more like excellent strategies for getting shit done and having a good life. It’s just hard to get them all working together. Try leaning into those behaviors and see what’s good about them. For example, you know how when you accidentally do something three times in a row, you then feel compelled to keep doing it the same way at the same time forever? Studying this phenomenon in yourself will lead you to build solid and carefully designed routines that allow you to be a lot more reliably vibrant. You know how some autistic people have one-on-one aides, caretakers, and therapists who assist in their development and day-to-day wellbeing? Read a bit about what those aides do. You’ll notice right away that the state of the art in this area is crap, but try to imagine what professional autism aides might do if they really had things figured out and were spectacular at their jobs. Then devote as many resources as you can spare for a whole year to figuring out how to perform those services for yourself. It seems to me that most of what’s written about autism by neurotypicals severely overemphasizes social stuff. You’ll find almost none of it compelling. Try to understand what’s really going on with autism, and your understanding will immediately start paying off in non-social quality of life improvements. Keep at it, and it’ll eventually start paying off in deep and practical social insights as well (which I know you don’t care about right now, but it’s true). I
34 · BrienneYudkowsky · 3d: Here’s what Wikipedia has to say about monographs [https://en.wikipedia.org/wiki/Monograph]. “A monograph is a specialist work of writing… or exhibition on a single subject or an aspect of a subject, often by a single author or artist, and usually on a scholarly subject… Unlike a textbook, which surveys the state of knowledge in a field, the main purpose of a monograph is to present primary research and original scholarship ascertaining reliable credibility to the required recipient. This research is presented at length, distinguishing a monograph from an article.” I think it’s a bit of an antiquated term. Either that or it’s chiefly British, because as an American I’ve seldom encountered it. I know the word because Sherlock Holmes is always writing monographs. In *A Study In Scarlet*, he says, “I gathered up some scattered ash from the floor. It was dark in colour and flakey—such an ash as is only made by a Trichinopoly. I have made a special study of cigar ashes—in fact, I have written a monograph upon the subject. I flatter myself that I can distinguish at a glance the ash of any known brand, either of cigar or of tobacco.” He also has a monograph on the use of disguise in crime detection, and another on the utilities of dogs in detective work. When I tried thinking of myself as writing “monographs” on things, I broke through some sort of barrier. The things I wrote turned out less inhibited and more… me. I benefited from them myself more as well. What I mean by “monograph” is probably a little different from what either Sherlock or academia means, but it’s in the same spirit. I think of it as a photo study or a character sketch, but in non-fiction writing form. Here are my guidelines for writing a monograph. 1. Pick a topic you can personally investigate. It doesn’t matter whether it’s “scholarly”. It’s fine if other people have already written dozens of books on the subject, regardless of whether you’ve read them, just as long as you can stick your own
29 · Ben Pace · 6d: Good posts you might want to nominate in the 2018 Review. I'm on track to nominate around 30 posts from 2018, which is a lot. Here is a list of about 30 further posts I looked at that I think were pretty good but didn't make my top list, in the hopes that others who did get value out of the posts will nominate their favourites. Each post has a note I wrote down for myself about the post. * Reasons compute may not drive AI capabilities growth [https://www.lesswrong.com/posts/hSw4MNTc3gAwZWdx9/reasons-compute-may-not-drive-ai-capabilities-growth] * I don’t know if it’s good, but I’d like it to be reviewed to find out. * The Principled-Intelligence Hypothesis [https://www.lesswrong.com/posts/Tusi9getaQ2o6kZsb/the-principled-intelligence-hypothesis] * Very interesting hypothesis generation. Unless it’s clearly falsified, I’d like to see it get built on. * Will AI See Sudden Progress? [https://www.lesswrong.com/posts/AJtfNyBsum6ZzWxKR/will-ai-see-sudden-progress] DONE * I think this post should be considered paired with Paul’s almost-identical post. It’s all exactly one conversation. * Personal Relationships with Goodness [https://www.lesswrong.com/posts/7xQAYvZL8T5L6LWyb/personal-relationships-with-goodness] * This felt like a clear analysis of an idea and coming up with some hypotheses. I don’t think the hypotheses really capture what’s going on, and most of the frames here seem like they’ve caused a lot of people to do a lot of hurt to themselves, but it seemed like progress in that conversation. * Are ethical asymmetries from property rights? [https://www.lesswrong.com/posts/zf4gvjTkbcJ5MGsJk/are-ethical-asymmetries-from-property-rights] * Again, another very interesting hypothesis. * Incorrect Hypotheses Point to Correct Observations [https://www.lesswrong.com/posts/MPj7t2w3nk4s9EYYh/incorrect-hypotheses-point-to-correct-observations]
21 · Raemon · 2d: Over in this thread, Said asked [https://www.lesswrong.com/posts/5zSbwSDgefTvmWzHZ/affordance-widths#iM4Jfa3ThJcFii2Pm] the reasonable question "who exactly is the target audience with this Best of 2018 book?" I'm working on a post that goes into a bit more detail about the Review Phase, and, to be quite honest, the whole process is a bit in flux – I expect us (the LW team as well as site participants) to learn, over the course of the review process, what aspects of it are most valuable. But, a quick "best guess" answer for now. I see the overall review process as having two "major phases": * Phase 1: Nomination/Review/Voting/Post-that-summarizes-the-voting * Phase 2: Compilation and Publication I think the first phase should be oriented entirely around "internal consumption" – figuring out what epistemic standard to hold ourselves to, and how, so that we can do better in the future. (As well as figuring out what ideas we've developed that should be further built upon.) Any other benefits are incidental. The final book/sequence is at least somewhat externally facing. I do expect it to be some people's first introduction to LessWrong, and other people's "one thing they read from LW this year". And at least some consideration should be given to those people's reading experience (which will be lacking a lot of context). But my guess is that should come more in the form of context-setting editor commentary than in decisions about what to include. I think “here are the fruits of our labors; take them and make use of them” is more of what I was aiming for. (Although "what standards are we internally holding ourselves to, and what work should we build towards?" is still an important function of the finished product.) It'd be nice if people were impressed, but a better frame for that goal is "Outsiders looking in can get an accurate picture of how productive our community is, and what sort of things we do", and maybe they are impressed by that or maybe not. (I re

November 2019

Shortform [Beta]
54 · orthonormal · 1mo: DeepMind released their AlphaStar paper a few days ago [https://deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning], having reached Grandmaster level at the partial-information real-time strategy game StarCraft II over the summer. This is very impressive, and yet less impressive than it sounds. I used to watch a lot of StarCraft II (I stopped interacting with Blizzard recently because of how they rolled over for China), and over the summer there were many breakdowns of AlphaStar games once players figured out how to identify the accounts. The impressive part is getting reinforcement learning to work at all in such a vast state space – that took breakthroughs beyond what was necessary to solve Go and beat Atari games. AlphaStar had to have a rich enough set of potential concepts (in the sense that e.g. a convolutional net ends up having concepts of different textures) that it could learn a concept like "construct building P" or "attack unit Q" or "stay out of the range of unit R" rather than just "select spot S and enter key T". This is new and worth celebrating. The overhyped part is that AlphaStar doesn't really do the "strategy" part of real-time strategy. Each race has a few solid builds that it executes at GM level, and the unit control is fantastic, but the replays don't look creative or even especially reactive to opponent strategies. That's because there's no representation of causal thinking - "if I did X then they could do Y, so I'd better do X' instead". Instead there are many agents evolving together, and if there's an agent evolving to try Y then the agents doing X will be replaced with agents that do X'. (This lack of causal reasoning especially shows up in building placement, where the consequences of locating any one building here or there are minor, but the consequences of your overall SimCity are major for how your units and your opponents' units would fare if they attacked you. In one
21 · Chris_Leong · 11d: Hegel - A Very Short Introduction by Peter Singer - Book Review Part 1: Freedom. Hegel is a philosopher who is notorious for being incomprehensible. In fact, for one of his books he signed a contract that assigned a massive financial penalty for missing the publishing deadline, so the book ended up being a little rushed. While there was a time when he was dominant in German philosophy, he now seems to be held in relatively poor regard and his main importance is seen to be historical. So he's not a philosopher that I was really planning to spend much time on. Given this, I was quite pleased to discover this book promising to give me A Very Short Introduction, especially since it is written by Peter Singer, a philosopher who writes and thinks rather clearly. After reading this book, I still believe that most of what Hegel wrote was pretentious nonsense, but the one idea that struck me as the most interesting was his conception of freedom. A rough definition of freedom might be ensuring that people are able to pursue whatever it is that they prefer. Hegel is not a fan of abstract definitions of freedom which treat all preferences the same and don't enquire where they come from. From his perspective, most of our preferences are purely a result of the context in which we exist, and so such an abstract definition of freedom is merely the freedom to be subject to social and historical forces. Since we did not choose our desires, he argues that we are not free when we act from our desires. Hegel argues that, "every condition of comfort reveals in turn its discomfort, and these discoveries go on for ever". One such example would be the marketing campaigns to convince us that sweating was embarrassing (https://www.smithsonianmag.com/history/how-advertisers-convinced-americans-they-smelled-bad-12552404/)
15 · Ruby · 16d: Why I'm excited by the 2018 Review. I generally fear that perhaps some people see LessWrong as a place where people just read and discuss "interesting stuff", not much different from a subreddit on anime or something. You show up, see what's interesting that week, chat with your friends. LessWrong's content might be considered "more healthy" relative to most internet content, and many people say they browse LessWrong to procrastinate but feel less guilty about it than other browsing, but the use-case still seems a bit about entertainment. None of the above is really a bad thing, but in my mind, LessWrong is about much more than a place for people to hang out and find entertainment in sharing joint interests. In my mind, LessWrong is a place where the community makes collective progress on valuable problems. It is an ongoing discussion where we all try to improve our understanding of the world and ourselves. It's not just play or entertainment – it's about getting somewhere. It's as much like an academic journal where people publish and discuss important findings as it is like an interest-based subreddit. And all this makes me really excited by the LessWrong 2018 Review [https://www.lesswrong.com]. The idea of the review is to identify posts that have stood the test of time and have made lasting contributions to the community's knowledge and meaningfully impacted people's lives. It's about finding the posts that represent the progress we've made. During the design of the review (valiantly driven by Raemon), I was apprehensive that people would not feel motivated by the process and put in the necessary work. But less than 24 hours after launching, I'm excited by the nominations [https://www.lesswrong.com/nominations] and what people are writing in their nomination comments. Looking at the list of nominations so far and reading the comments, I'm thinking "Yes! This is a list showing the meaningful progress the LW community has made. We are not just a news or entertainment site
12 · TurnTrout · 9d: From my Facebook: My life has gotten a lot more insane over the last two years. However, it's also gotten a lot more wonderful, and I want to take time to share how thankful I am for that. Before, life felt like... a thing that you experience, where you score points and accolades and check boxes. It felt kinda fake, but parts of it were nice. I had this nice cozy little box that I lived in, a mental cage circumscribing my entire life. Today, I feel (much more) free. I love how curious I've become, even about "unsophisticated" things. Near dusk, I walked the winter wonderland of Ogden, Utah with my aunt and uncle. I spotted this gorgeous red ornament hanging from a tree, with a hunk of snow stuck to it at north-east orientation. This snow had apparently decided to defy gravity. I just stopped and stared. I was so confused. I'd kinda guessed that the dry snow must induce a huge coefficient of static friction, hence the winter wonderland. But that didn't suffice to explain this. I bounded over and saw the smooth surface was iced, so maybe part of the snow melted in the midday sun, froze as evening advanced, and then the part-ice part-snow chunk stuck much more solidly to the ornament. Maybe that's right, and maybe not. The point is that two years ago, I'd have thought this was just "how the world worked", and it was up to physicists to understand the details. Whatever, right? But now, I'm this starry-eyed kid in a secret shop full of wonderful secrets. Some secrets are already understood by some people, but not by me. A few secrets I am the first to understand. Some secrets remain unknown to all. All of the secrets are enticing. My life isn't always like this; some days are a bit gray and draining. But many days aren't, and I'm so happy about that. Socially, I feel more fascinated by people in general, more eager to hear what's going on in their lives, more curious what it feels like to be them that day. In particular, I've fallen in love with the rationalist and
12 · ofer · 12d: --Daniel Kahneman, Thinking, Fast and Slow. To the extent that the above phenomenon tends to occur, here's a fun story that attempts to explain it: At every moment our brain can choose something to think about (like "that exchange I had with Alice last week"). How does the chosen thought get selected from the thousands of potential thoughts? Let's imagine that the brain assigns an "importance score" to each potential thought, and thoughts with a larger score are more likely to be selected. Since there are thousands of thoughts to choose from, the optimizer's curse [https://www.lesswrong.com/posts/5gQLrJr2yhPzMCcni/the-optimizer-s-curse-and-how-to-beat-it] makes our brain overestimate the importance of the thought that it ends up selecting.
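A toy simulation of this story (my illustration, not from the post): even if every potential thought has the same true importance, the noisy score of whichever thought wins the selection is systematically overestimated. All numbers here are illustrative assumptions.

```python
import random

random.seed(0)
N_THOUGHTS, TRIALS = 1000, 500
selected_scores = []
for _ in range(TRIALS):
    # True importance of every thought is 0; the brain only sees noisy estimates.
    noisy_scores = [random.gauss(0.0, 1.0) for _ in range(N_THOUGHTS)]
    selected_scores.append(max(noisy_scores))  # pick the highest-scoring thought
print(sum(selected_scores) / TRIALS)  # ~3.2, even though every true importance is 0
```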

October 2019

Shortform [Beta]
41 · DanielFilan · 2mo: Hot take: if you think that we'll have at least 30 more years of future where geopolitics and nations are relevant, I think you should pay at least 50% as much attention to India as to China. Similarly large population, similarly large number of great thinkers and researchers. Currently seems less 'interesting', but that sort of thing changes over 30-year timescales. As such, I think there should probably be some number of 'India specialists' in EA policy positions that isn't dwarfed by the number of 'China specialists'.
37 · elityre · 1mo: New post: Some notes on Von Neumann, as a human being [https://musingsandroughdrafts.wordpress.com/2019/10/26/some-notes-on-von-neumann-as-a-human-being/]. I recently read Prisoner’s Dilemma, which is half an introduction to very elementary game theory, and half a biography of John Von Neumann, and watched this [https://youtu.be/vLbllFHBQM4] old PBS documentary about the man. I’m glad I did. Von Neumann has legendary status in my circles, as the smartest person ever to live. [1] Many times I’ve written the words “Von Neumann Level Intelligence” in an AI strategy document, or speculated [http://www.overcomingbias.com/2014/07/30855.html#comment-4174545474] about how many coordinated Von Neumanns it would take to take over the world. (For reference, I now think that 10 is far too low, mostly because he didn’t seem to have the entrepreneurial or managerial dispositions.) Learning a little bit more about him was humanizing. Yes, he was the smartest person ever to live, but he was also an actual human being, with actual human traits. Watching this first clip [https://www.youtube.com/watch?v=vLbllFHBQM4], I noticed that I was surprised by a number of things. 1. That VN had an accent. I had known that he was Hungarian, but somehow it had never quite propagated that he would speak with a Hungarian accent. 2. That he was of middling height (somewhat shorter than the presenter he’s talking to). 3. The thing he is saying is the sort of thing that I would expect to hear from any scientist in the public eye: “science education is important.” There is something revealing about Von Neumann, despite being the smartest person in the world, saying basically what I would expect Neil DeGrasse Tyson to say in an interview. A lot of the time he was wearing his “scientist / public intellectual” hat, not the “smartest person ever to live” hat. Some other notes of interest: He was not a skilled poker player, which punctured my assumption that Von Neumann was om
28 · Daniel Kokotajlo · 2mo: My baby daughter was born two weeks ago, and in honor of her existence I'm building a list of about 100 technology-related forecasting questions, which will resolve in 5, 10, and 20 years. Questions like "By the time my daughter is 5/10/20 years old, the average US citizen will be able to hail a driverless taxi in most major US cities." (The idea is, tying it to my daughter's age will make it more fun and also increase the likelihood that I actually go back and look at it 10 years later.) I'd love it if the questions were online somewhere so other people could record their answers too. Does this seem like a good idea? Hive mind, I beseech you: Help me spot ways in which this could end badly! On a more positive note, any suggestions for how to do it? Any expressions of interest in making predictions with me? Thanks!
25 · Vaniver · 2mo: [Meta: this is normally something I would post on my tumblr [https://vaniver.tumblr.com/], but instead am putting on LW as an experiment.] Sometimes, in games like Dungeons and Dragons, there will be multiple races of sapient beings, with humans as a sort of baseline. Elves are often extremely long-lived, but most handlings of this I find pretty unsatisfying. Here's a new take, that I don't think I've seen before (except the Ell in Worth the Candle [https://archiveofourown.org/works/11478249/chapters/25740126] have some mild similarities): Humans go through puberty at about 15 and become adults around 20, lose fertility (at least among women) at about 40, and then become frail at about 60. Elves still 'become adults' around 20, in that a 21-year-old elf adventurer is as plausible as a 21-year-old human adventurer, but they go through puberty at about 40 (and lose fertility at about 60-70), and then become frail at about 120. This has a few effects: * The peak skill of elven civilization is much higher than the peak skill of human civilization (as a 60-year-old master carpenter has had only ~5 decades of skill growth, whereas a 120-year-old master carpenter has had ~11). There's also much more of an 'apprenticeship' phase in elven civilization (compare modern academic society's "you aren't fully in the labor force until ~25" to a few centuries ago, when it would have happened at 15), aided by them spending longer in the "only interested in acquiring skills" part of 'childhood' before getting to the 'interested in sexual market dynamics' part of childhood. * Young elves and old elves are distinct in some of the ways human children and adults are distinct, but not others; the 40-year-old elf who hasn't started puberty yet has had time to learn 3 different professions and build a stable independence, whereas the 12-year-old human who hasn't started puberty yet is just starting to operate as an independent entity. And so sometimes
22 · Vaniver · 2mo: People's stated moral beliefs are often gradient estimates instead of object-level point estimates. This makes sense if arguments from those beliefs are pulls on the group epistemology, and not if those beliefs are guides for individual action. Saying "humans are a blight on the planet" would mean something closer to "we should be more environmentalist on the margin" instead of "all things considered, humans should be removed." You can probably imagine how this can be disorienting, and how there's a meta issue: the point-estimate view can see what it's doing in a way that the gradient view might not.

September 2019

Shortform [Beta]
49 · elityre · 2mo: New post: Some things I think about Double Crux and related topics. I've spent a lot of my discretionary time working on the broad problem of developing tools for bridging deep disagreements and transferring tacit knowledge. I'm also probably the person who has spent the most time explicitly thinking about and working with CFAR's Double Crux framework. It seems good for at least some of my high-level thoughts to be written up some place, even if I'm not going to go into detail about, defend, or substantiate, most of them. The following are my own beliefs and do not necessarily represent CFAR, or anyone else. I, of course, reserve the right to change my mind. [Throughout I use "Double Crux" to refer to the Double Crux technique, the Double Crux class, or a Double Crux conversation, and I use "double crux" to refer to a proposition that is a shared crux for two people in a conversation.] Here are some things I currently believe: (General) 1. Double Crux is one (highly important) tool/framework among many. I want to distinguish between the overall art of untangling and resolving deep disagreements and the Double Crux tool in particular. The Double Crux framework is maybe the most important tool (that I know of) for resolving disagreements, but it is only one tool/framework in an ensemble. 2. Some other tools/frameworks, that are not strictly part of Double Crux (but which are sometimes crucial to bridging disagreements) include NVC, methods for managing people's intentions and goals, various forms of co-articulation (helping to draw out an inchoate model from one's conversational partner), etc. In some contexts other tools are substitutes for Double Crux (i.e. another framework is more useful) and in some cases other tools are helpful or necessary complements (i.e. they solve problems or smooth the process within the Double Crux frame). In particular, my personal conversational facilitation repertoire is about 60%
33 · romeostevensit · 3mo: A service where a teenager reads something you wrote slowly and sarcastically. The points at which you feel defensive are worthy of further investigation.
27 · jp · 3mo: Do Anki while Weightlifting. Many rationalists appear to be interested in weightlifting. I certainly have enjoyed having a gym habit. I have a recommendation for those who do: Try studying Anki cards [https://twitter.com/michael_nielsen/status/957763229454774272?lang=en] while resting between weightlifting sets. The upside is high. Building the habit of studying Anki cards is hard, and if doing it at the gym causes it to stick, you can now remember things by choice not chance. And the cost is pretty low. I rest for 90 seconds between sets, and do about 20 sets when I go to the gym. Assuming I get a minute in once the overheads are accounted for, that gives me 20 minutes of studying. I go through about 4 cards per minute, so I could do 80 cards per visit to the gym. In practice I spend only ~5 minutes studying per visit, because I don't have that many cards. I'm not too tired to concentrate. In fact, the adrenaline high makes me happy to have something mentally active to do. Probably because of this, it doesn't at all decrease my desire to go to the gym. I find I can add simple cards to my Anki deck at the gym, although the mobile app does make it slow. Give it a try! It's cheap to experiment and the value of a positive result is high.
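As a quick sanity check on the arithmetic above (numbers taken straight from the post), a trivial sketch:

```python
# 20 sets per visit, ~1 minute of effective study per rest, ~4 cards per minute.
sets_per_visit = 20
study_minutes_per_rest = 1
cards_per_minute = 4
cards_per_visit = sets_per_visit * study_minutes_per_rest * cards_per_minute
print(cards_per_visit)  # 80 cards per gym visit
```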
26 · habryka · 3mo: WHAT IS THE PURPOSE OF KARMA? LessWrong has a karma system, mostly based off of Reddit's karma system, with some improvements and tweaks to it. I've thought a lot about more improvements to it, but one roadblock that I always run into when trying to improve the karma system is that it actually serves a lot of different uses, and changing it in one way often means completely destroying its ability to function in a different way. Let me try to summarize what I think the different purposes of the karma system are: Helping users filter content. The most obvious purpose of the karma system is to determine how long a post is displayed on the frontpage, and how much visibility it should get. Being a social reward for good content. This aspect of the karma system comes out more when thinking about Facebook "likes". Often when I upvote a post, it is more of a public signal that I value something, with the goal that the author will feel rewarded for putting their effort into writing the relevant content. Creating common knowledge about what is good and bad. This aspect of the karma system comes out the most when dealing with debates, though it's present in basically any karma-related interaction. The fact that the karma of a post is visible to everyone helps people establish common knowledge of what the community considers to be broadly good or broadly bad. Seeing an insult downvoted does more than just filter it out of people's feeds; it also makes it so that anyone who stumbles across it learns something about the norms of the community. Being a low-effort way of engaging with the site. On LessWrong, Reddit, and Facebook, karma is often the simplest action you can take on the site. This means it's usually key for a karma system like that to be extremely simple, and not require complicated decisions, since that would break the basic engagement loop with the site. PROBLEMS WITH ALTERNATIVE KARMA SYSTEMS Here are some of the most common alternatives to our current
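The filtering purpose has a well-known concrete form. Below is a sketch of the classic open-sourced Reddit "hot" ranking that the post says LessWrong's system is based on; to be clear, this is not LessWrong's actual implementation (which differs, e.g. by weighting votes by voter karma), just an illustration of how karma trades off against post age:

```python
from datetime import datetime, timezone
from math import log10

def hot(ups, downs, created_utc):
    """Reddit-style 'hot' rank: karma counts logarithmically, age linearly."""
    score = ups - downs
    order = log10(max(abs(score), 1))       # diminishing returns on raw karma
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = created_utc - 1134028003      # seconds since Reddit's epoch
    return round(sign * order + seconds / 45000, 7)

now = datetime.now(timezone.utc).timestamp()
# A fresh post at +10 outranks a day-old post at +100: a 10x karma gap
# only offsets 45000 seconds (~12.5 hours) of age.
print(hot(10, 0, now) > hot(100, 0, now - 86400))  # True
```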
23 · Ruby · 3mo: Selected Aphorisms from Francis Bacon's Novum Organum. I'm currently working to format Francis Bacon's Novum Organum [https://en.wikipedia.org/wiki/Novum_Organum] as a LessWrong sequence. It's a moderate-sized project, as I have to work through the entire work myself and write an introduction which does Novum Organum justice and explains the novel move of taking an existing work and posting it on LessWrong (short answer: NovOrg is some serious hardcore rationality and contains central tenets of the LW foundational philosophy notwithstanding being published back in 1620, not to mention that Bacon and his works are credited with launching the modern Scientific Revolution). While I'm still working on this, I want to go ahead and share some of my favorite aphorisms from it so far: Bacon sees the unaided human mind as entirely inadequate for scientific progress. He sees the way forward for scientific progress as constructing tools/infrastructure/methodology to help the human mind think/reason/do science. Bacon repeatedly hammers that reality has a surprising amount of detail [http://johnsalvatier.org/blog/2017/reality-has-a-surprising-amount-of-detail] such that just reasoning about things is unlikely to get at truth. Given the complexity and subtlety of nature, you have to go look at it. A lot. Anticipations are what Bacon calls making theories by generalizing principles from a few specific examples and then reasoning from those [ill-founded] general principles. This is the method of Aristotle and of science until that point, which Bacon wants to replace. Interpretations is his name for his inductive method, which generalizes only very slowly, building out increasingly large sets of examples/experiments. I read Aphorism 28 as saying that Anticipations have much lower inferential distance since they can be built from simple examples with which everyone is familiar. In contrast, if you build up a theory based on lots of disparate observation that isn't universal,
