All Posts

Sorted by New

Saturday, August 15th 2020

2 · mr-hire · 12h · What can I do to get an intuitive grasp of Kelly betting? Are there apps I can play or exercises I can try?
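One cheap exercise for building this intuition (a minimal sketch, not from any post here; the 60% win probability, even odds, and bet fractions are made-up illustration values): simulate betting a fixed fraction of your bankroll over many rounds, and compare growth at, below, and above the Kelly fraction f* = p − q/b.

```python
import random

def simulate(f, p=0.6, b=1.0, rounds=1000, seed=0):
    """Bet fraction f of bankroll each round: win probability p,
    net odds b (a win multiplies the staked fraction by b)."""
    rng = random.Random(seed)
    bankroll = 1.0
    for _ in range(rounds):
        if rng.random() < p:
            bankroll *= 1 + f * b
        else:
            bankroll *= 1 - f
    return bankroll

# Kelly fraction for these parameters: f* = p - (1 - p) / b = 0.2
kelly = 0.6 - 0.4 / 1.0

for f in (0.05, kelly, 0.5, 0.9):
    print(f"f = {f:.2f}: final bankroll {simulate(f):.3g}")
```

Playing with the parameters shows the characteristic shape: underbetting grows slowly but safely, the Kelly fraction maximizes long-run log growth, and overbetting far enough past it reliably destroys the bankroll even with a favorable edge.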
1 · grumpyfreyr · 3h · Less wrong is still wrong. There is a way of seeing that is not wrong.

Friday, August 14th 2020

25 · Hazard · 1d · HOLY shit! I just checked out the new concepts portion of the site that shows you all the tags. This feels like a HUGE step in the direction of the LW team's vision of a place where knowledge production can actually happen.
8 · elityre · 1d · I've decided that I want to make more of a point to write down my macro-strategic thoughts, because writing things down often produces new insights and refinements, and so that other folks can engage with them.

This is one frame or lens that I tend to think with a lot. This might be more of a lens or a model-let than a full break-down.

There are two broad classes of problems that we need to solve: we have some pre-paradigmatic science to figure out, and we have the problem of civilizational sanity.

PREPARADIGMATIC SCIENCE

There are a number of hard scientific or scientific-philosophical problems that we're facing down as a species. Most notably, the problem of AI alignment, but also finding technical solutions to various risks caused by bio-technology, possibly getting our bearings with regards to what civilizational collapse means and how it is likely to come about, possibly getting a handle on the risk of a simulation shut-down, possibly making sense of the large-scale cultural, political, and cognitive shifts that are likely to follow from new technologies that disrupt existing social systems (like VR?).

Basically, for every x-risk, and every big shift to human civilization, there is work to be done even making sense of the situation and framing the problem. As this work progresses it eventually transitions into incremental science / engineering, as the problems are clarified and specified, and the good methodologies for attacking those problems solidify. (Work on bio-risk might already be in this phase. And I think that work towards human genetic enhancement is basically incremental science.)

To my rough intuitions, it seems like these problems, in order of pressingness, are:

1. AI alignment
2. Bio-risk
3. Human genetic enhancement
4. Social, political, civilizational collapse

…where that ranking is mostly determined by which one will have a very large impact on the world first. So there's the object-level work of just trying to make progress o
2 · Ricardo Meneghin · 1d · Has there been any discussion around aligning a powerful AI by minimizing the amount of disruption it causes to the world? A common example of alignment failure is that of a coffee-serving robot killing its owner because that's the best way to ensure that the coffee will be served. Sure, it is, but it's also a course of action vastly more transformative to the world than just serving coffee. A common response is "just add safeguards so it doesn't kill humans", which is followed by "sure, but you can't add safeguards for every possible failure mode". But can't you? Couldn't you just add a term to the agent's utility function penalizing the difference between the current world and its prediction of the future world, disincentivizing any action that makes a lot of changes (like taking over the world)?
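For what it's worth, this proposal resembles what the alignment literature calls an impact measure or impact penalty (e.g. relative reachability, attainable utility preservation). A toy sketch of the penalized-utility idea; the function name and all numbers are illustrative, not from the post:

```python
def penalized_utility(task_reward, world_change, lam=10.0):
    """Toy impact penalty: reward for completing the task, minus
    lam times some measure of how different the resulting world
    is from the no-action baseline."""
    return task_reward - lam * world_change

# Both plans get the coffee served (reward 1.0), but differ wildly
# in how much they change the world:
u_serve = penalized_utility(1.0, 0.01)  # just serve the coffee
u_takeover = penalized_utility(1.0, 5.0)  # take over the world first
print(u_serve, u_takeover)
```

The hard part that existing work wrestles with is choosing the change measure and the baseline, so that the penalty doesn't also block desirable side effects or incentivize the agent to prevent natural change.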
2 · ChristianKl · 1d · Thinking more about the Russian vaccine is sad. There's no discussion in the media about what risk we should actually expect from the vaccine. The scientists that get asked by the media to comment are only asked to talk about the general policy of clinical trials but not about the underlying biology.
1 · MikkW · 17h · I assign roughly 0% chance to me ever formulating The One True Theory of Physics, uniting General Relativity with the Standard Model. But it seems that there's room for a person with slightly-above-average intelligence and curiosity to formulate The One True Theory of Psychology, tying together everything in Psychology, Neurology, and Computational Neuroscience/ML.

Thursday, August 13th 2020

8 · DanielFilan · 2d · As far as I can tell, people typically use the orthogonality thesis to argue that smart agents could have any motivations. But the orthogonality thesis is stronger than that, and its extra content is false - there are some goals that are too complicated for a dumb agent to have, because the agent couldn't understand those goals. I think people should instead directly defend the claim that smart agents could have arbitrary goals.
1 · TAG · 2d · That's exactly what decoherent many worlds asserts!

Wednesday, August 12th 2020

9 · mr-hire · 3d · Recently went on a quest to find the best way to minimize the cord clutter, cord management, and charging anxiety that creates a dozen trivial inconveniences throughout the day. Here's what worked for me:

1. For each area that is a wire maze, I get one of these surge protectors with 18 outlets and 3 USB slots: []
2. For everywhere I am likely to want to charge something, I fill 1-3 of the slots with these 6ft multi-charging USB cables (more slots if I'm likely to want to charge multiple things). I get a couple extras for travel so that I can simply leave them in my travel bag: []
3. For everywhere I am likely to want to plug in my laptop, I get one of these universal laptop chargers. I save the attachments somewhere safe for future laptops, and leave the attachment that works for my laptop plugged in at each place. I get an extra to keep in my travel bag: []
4. I run the USB cords and laptop cord through these nifty little cord clips, so they stay in place: []
5. All the excess wiring, along with the surge protector, goes into this cord box. I use the twisty ties that come with it to secure wires from dangling, and ensure they go into the box neatly. Suddenly, the wires are super clean: []
6. (Bonus Round) I have a charging case for my phone, so the only time I have to worry about charging it is at night. I use this one for my Pixel 3A, but you'll have to find one that works for your phone: []
7. (Bonus Round 2) Work to go wireless for things that have that option, like headphones. This will set you back $200-$500 (depending on how much of each thing you need) but man is it nice to not ever have to worry about finding a charging cord, moving a cord around, remembe
4 · AllAmericanBreakfast · 3d · Question re: "Why Most Published Research Findings Are False []": What is the difference between "the ratio of the number of 'true relationships' to 'no relationships' among those tested in the field" and "the pre-study probability of a relationship being true"?
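Under the definitions in the Ioannidis paper, the two quantities are related but distinct: the first is the pre-study odds R, and the second is the corresponding probability R/(1+R); the paper's PPV formula is written in terms of R. A sketch (the α, β, and R values below are illustrative):

```python
def odds_to_prob(R):
    """Convert pre-study odds R (true : null relationships tested)
    into the pre-study probability that a tested relationship is true."""
    return R / (1 + R)

def ppv(R, alpha=0.05, beta=0.2):
    """Post-study probability that a claimed finding is true, per
    Ioannidis (2005): PPV = (1 - beta) * R / (R - beta * R + alpha)."""
    return (1 - beta) * R / (R - beta * R + alpha)

R = 0.25  # one true relationship per four nulls tested
print(odds_to_prob(R))  # ≈ 0.2
print(ppv(R))           # ≈ 0.8
```

So a field testing one true relationship per four nulls has pre-study odds 0.25 but pre-study probability 0.2; the two agree only in the limit of small R, where odds ≈ probability.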
1 · niplav · 3d · I feel like this [] meme is related to the Troll Bridge problem [], but I can't explain how exactly.

Tuesday, August 11th 2020

15 · MikkW · 4d · Does Newspeak actually decrease intellectual capacity? (No)

In George Orwell's book 1984, he describes a totalitarian society that, among other initiatives to suppress the population, implements "Newspeak", a heavily simplified version of the English language, designed with the stated intent of limiting the citizens' capacity to think for themselves (thereby ensuring stability for the reigning regime). In short, the ethos of Newspeak can be summarized as: "Minimize vocabulary to minimize range of thought and expression".

There are two different, closely related ideas, both of which the book implies, that are worth separating here. The first (which I think is to some extent reasonable) is that by removing certain words from the language, which serve as effective handles for pro-democracy, pro-free-speech, pro-market concepts, the regime makes it harder to communicate and verbally think about such ideas. (I think that in the absence of the other techniques used by Orwell's Oceania to suppress independent thought, such subjects can still be meaningfully communicated and pondered, just less easily than with a rich vocabulary provided.)

The second idea, which I worry is an incorrect takeaway people may get from 1984, is that by shortening the dictionary of vocabulary that people are encouraged to use (absent any particular bias towards removing handles for subversive ideas), one will reduce the intellectual capacity of people using that variant of the language.

A slight tangent whose relevance will become clear: if you listen to a native Chinese speaker, then compare the sound of their speech to a native Hawaiian speaker, there are many apparent differences in the sound of the two languages. Chinese has a rich phonological inventory containing 19 consonants, 5 vowels, and, quite famously, 4 different tones (pitch patterns) which are used for each syllable, for a total of approximately 5400 possible syllables, including diphthongs and multi-syllabic vowels. Compare this to Haw
4 · ChristianKl · 4d · The public criticism of Russia's vaccination efforts seems strange to me. Claiming that Russia only wants to do early vaccinations for reasons of national prestige, and not because of the health and economic damage of COVID-19, seems to me like too many people still haven't understood that COVID-19 is a serious issue that warrants doing what we can.
2 · Gurkenglas · 4d · I expect that all that's required for a Singularity is to wait a few years for the sort of language model that can replicate a human's thoughts faithfully, then make it generate a thousand years' worth of that researcher's internal monologue, perhaps with access to the internet. Neural networks should be good at this task - we have direct evidence that neural networks can run human brains. Whether our world's plot has a happy ending then merely depends on the details of that prompt/protocol - such as whether it decides to solve alignment before running a successor. Though it's probably simple to check the alignment of the character - we have access to his thoughts. A harder question is whether the first LM able to run humans is still inner-aligned.

Monday, August 10th 2020

6 · AllAmericanBreakfast · 5d · How should we weight and relate the training of our mind, body, emotions, and skills? I think we are like other mammals. Imitation and instinct lead us to cooperate, compete, produce, and take a nap. It's a stochastic process that seems to work OK, both individually and as a species. We made most of our initial progress in chemistry and biology through very close observation of small-scale patterns. Maybe a similar obsessiveness toward one semi-arbitrarily chosen aspect of our own individual behavior would lead to breakthroughs in self-understanding?
6 · MikkW · 5d · I've been thinking about ways to signal truth value in speech. In our modern society, we have no way to readily tell when a person is being 100% honest - we have to trust that a communicator is being honest, or otherwise verify for ourselves if what they are saying is true; and if I want to tell a joke, speak ironically, or communicate things which aren't-literally-the-truth-but-point-to-the-truth, my listeners need to deduce this for themselves from the context in which I say something not-literally-true.

In language, we speak with different registers. Different registers are different ways of speaking, depending on the context of the speech. The way a salesman speaks to a potential customer will be distinct from the way he speaks to his pals over a beer - he speaks in different registers in these different situations. But registers can also be used to communicate information about the intentions of the speaker - when a speaker is being ironic, he will intone his voice in a particular way, to signal to his listeners that he shouldn't be taken 100% literally.

There are two points that come to my mind here: one, establishing a register of communication that is reserved for speaking literally true statements; and two, expanding the ability to use registers to communicate not-literally-true intent, particularly in text.

On the first point, a large part of the reason why people speaking in a natural register cannot always be assumed to be saying something literally true is that there is no external incentive not to lie. Well, sometimes there are incentives not to lie, but oftentimes these incentives are weak, and especially in a society built upon free speech, it is hard to - on a large scale - enforce a norm against lying in natural-register speech.

Now my mind imagines a protected register of speech, perhaps copyrighted by some organization (and which includes unique manners of speech which are distinctive enough to be eligible for copyright), which that organ
4 · G Gordon Worley III · 5d · Personality quizzes are fake frameworks [] that help us understand ourselves.

What-character-from-show-X-are-you quizzes, astrology, and personality categorization instruments (think Big-5, Myers-Briggs, Magic: the Gathering colors, etc.) are perennially popular. I think a good question to ask is: why do humans seem to like this stuff so much that even fairly skeptical folks tend to object not to categorization itself, but only that the categorization of any particular system is bad?

My stab at an answer: humans are really confused about themselves, and are interested in things that seem to have even a little explanatory power to help them become less confused about who they are. Metaphorically, this is like if we lived in a world without proper mirrors, and people got really excited about anything moderately reflective because it let them see themselves, if only a little.

On this view, these kinds of things, while perhaps not very scientific, are useful to folks because they help them understand themselves. This is not to say we can totally rehabilitate all such systems, since often they perform their categorization by mechanisms with very weak causal links that may not even rise above the level of noise (*cough* astrology *cough*), nor that we should be satisfied with personality assessments that involve lots of conflation and don't resolve much confusion. But on the whole we should be happy that these things exist, because they help us see our psyches in the absence of proper mental mirrors.

(FWIW, I do think there is a way to polish your mind into a mirror that can see itself, and that I have managed to do this to some extent, but that's a bit beside the point I want to make here.)

Sunday, August 9th 2020

19 · AllAmericanBreakfast · 6d · Math is training for the mind, but not like you think

Just a hypothesis: people have long thought that math is training for clear thinking. Just one version [] of this meme that I scooped out of the water:

But math doesn't obviously seem to be the only way to practice precision, decision, creativity, beauty, or broad perspective-taking. What about logic, programming, rhetoric, poetry, anthropology? This sounds like marketing.

As I've studied calculus, coming from a humanities background, I'd argue it differently. Mathematics shares with a small fraction of other related disciplines and games the quality of unambiguous objectivity. It also has the ~unique quality that you cannot bullshit your way through it. Miss any link in the chain and the whole thing falls apart. It can therefore serve as a more reliable signal, to self and others, of one's own learning capacity. Experiencing a subject like that can be training for the mind, because becoming successful at it requires cultivating good habits of study and expectations for coherence.
11 · Adam Scholl · 7d · I made Twitter lists of researchers at DeepMind [] and OpenAI [], and find checking them useful for tracking team zeitgeists.
9 · Adam Scholl · 7d · Thought LinkedIn's role/background breakdown of DeepMind employees [] was interesting. Fewer people listed as having neuroscience backgrounds than I would have predicted.
5 · sairjy · 6d · After GPT-3, is Nvidia undervalued?

GPT-3 made me update considerably on various beliefs related to AI: it is a piece of evidence for the connectionist thesis, and I think one large enough that we should all be paying attention. There are 3 clear exponential trends coming together: Moore's law, the AI compute/$ budget, and algorithmic efficiency. Due to these trends and the performance of GPT-3, I believe it is likely humanity will develop transformative AI in the 2020s. The trends also imply a rapidly rising amount of investment into compute, especially if compounded with the positive economic effects of transformative AI, such as much faster GDP growth.

In the spirit of using rationality to succeed in life, I started wondering if there is a "Bitcoin-sized" return potential currently untapped in the markets. And I think there is. As of today, the company that stands to reap the most benefits from this rising investment in compute is Nvidia. I say that because, from a cursory look at the deep learning accelerator market, none of the startups, such as Groq, Graphcore, or Cerebras, has a product with clear enough advantages over Nvidia's GPUs (which are now almost deep learning ASICs anyway).

There has been a lot of debate on the efficient market hypothesis in the community lately, but in this case it isn't even necessary: Nvidia stock could be underpriced because very few people have realized/believe that the connectionist thesis is true and that enough compute, data, and the right algorithm can bring transformative AI and then eventually AGI. Heck, most people, and even smart ones, still believe that human intelligence is somewhat magical and that computers will never be able to __ . In this sense, the rationalist community could have an important mental-makeup and knowledge advantage over the rest of the market, considering we have been thinking about AI/AGI for a long time.

As it stands today, Nvidia is valued at 260 billion dollars. It may appear massively

Saturday, August 8th 2020

14 · AllAmericanBreakfast · 7d · What gives LessWrong staying power?

On the surface, it looks like this community should dissolve. Why are we attracting bread bakers, programmers, stock market investors, epidemiologists, historians, activists, and parents? Each of these interests has a community associated with it, so why are people choosing to write about their interests in this forum? And why do we read other people's posts on this forum when we don't have a prior interest in the topic?

Rationality should be the art of general intelligence. It's what makes you better at everything. If practice is the wood and nails, then rationality is the blueprint. To determine whether or not we're actually studying rationality, we need to check whether or not it applies to everything. So when I read posts applying the same technique to a wide variety of superficially unrelated subjects, it confirms that the technique is general, and helps me see how to apply it productively.

This points at a hypothesis, which is that general intelligence is a set of defined, generally applicable techniques. They apply across disciplines. And they apply across problems within disciplines. So why aren't they generally known and appreciated? Shouldn't they be the common language that unites all disciplines?

Perhaps it's because they're harder to communicate and appreciate. If I'm an expert baker, I can make another delicious loaf of bread. Or I can reflect on what allows me to make such tasty bread, and speculate on how the same techniques might apply to architecture, painting, or mathematics. Most likely, I'm going to choose to bake bread.

This is fine, until we start working on complex, interdisciplinary projects. Then general intelligence becomes the bottleneck for having enough skill to get the project done. Sounds like the 21st century. We're hitting the limits of what's achievable through sheer persistence in a single specialty, and we're learning to automate them away. What's left is creativity, which arises from s
9 · AllAmericanBreakfast · 7d · Markets are the worst form of economy except for all those other forms that have been tried from time to time.
6 · Adele Lopez · 7d · Privacy as a component of AI alignment

[realized this is basically just a behaviorist genie [], but posting it in case someone finds it useful]

What makes something manipulative? If I do something with the intent of getting you to do something, is that manipulative? A simple request seems fine, but if I have a complete model of your mind, and use it to phrase things so you do exactly what I want, that seems to have crossed an important line. The idea is that using a model of a person that is *too* detailed is a violation of human values. In particular, it violates the value of autonomy, since your actions can now be controlled by someone using this model. And I believe that this is a significant part of what we are trying to protect when we invoke the colloquial value of privacy.

In ordinary situations, people can control how much privacy they have relative to another entity by limiting their contact with them to certain situations. But with an AGI, a person may lose a very large amount of privacy from seemingly innocuous interactions (we're already seeing the start of this with "big data" companies improving their advertising effectiveness by using information that doesn't seem that significant to us). Even worse, an AGI may be able to break the privacy of everyone (or a very large class of people) by using inferences based on just a few people (leveraging perhaps knowledge of the human connectome [], hypnosis, etc...).

If we could reliably point to specific models an AI is using, and have it honestly share its model structure with us, we could potentially limit the strength of its model of human minds. Perhaps even have it use a hardcoded model limited to knowledge of the physical conditions required to keep it healthy. This would mitigate issues such as deliberate deception or mindcrime. We could also potentially allow it to use more detailed models in specific cases, for example, we co
6 · mr-hire · 7d · Had an excellent interview with Hazard [] yesterday, breaking down his felt sense of dealing with fear. As someone who does parkour and tricking, he's had to develop unique models that navigate the tension between ignoring his fear (which can lead to injury or death) and being consumed by fear (meaning he could never practice his craft).

He implicitly breaks down fear into four categories, each with their own steps:

1. Fear Alarm Bells
2. Surfacing From Water
3. Listening
4. Transmuting to Resolve (or Backing Off)

At each step, he has tools and techniques (again, that were implicit before we chatted) telling you how to move forward. Just over the past day, I've already had a felt shift in how I relate to fear, and navigated a couple of situations differently.

If you're interested in learning this model, I'd love to teach you! All I ask is that you let me use some of the clips from our teaching session in my podcast on the framework! Let me know if you're interested!

Friday, August 7th 2020

27 · Marcello · 8d · "Aspiring Rationalist" Considered Harmful

The "aspiring" in "aspiring rationalist" seems like superfluous humility at best. Calling yourself a "rationalist" never implied perfection in the first place. It's just like how calling yourself a "guitarist" doesn't mean you think you're Jimi Hendrix. I think this analogy is a good one, because rationality is a human art, just like playing the guitar.

I suppose one might object that the word "rational" denotes a perfect standard, unlike playing the guitar. However, we don't hesitate to call someone an "idealist" or a "perfectionist" when they're putting in a serious effort to conform to an ideal or strive towards perfection, so I think this objection is weak. The "-ist" suffix already means that you're a person trying to do the thing, with all the shortcomings that entails.

Furthermore, it appears harmful to add the "aspiring". It creates dilution. Think of what it would mean for a group of people to call themselves "aspiring guitarists". The trouble is, it also applies to the sort of person who daydreams about the adulation of playing for large audiences but never gets around to practicing. However, to honestly call yourself a "guitarist", you would have to actually, y'know, play the guitar once in a while.

While I acknowledge I'm writing this many years too late, please consider dropping the phrase "aspiring rationalist" from your lexicon.
20 · elityre · 8d · (Reasonably personal)

I spend a lot of time trying to build skills, because I want to be awesome. But there is something off about that.

I think I should just go after things that I want, and solve the problems that come up on the way. The idea of building skills sort of implies that if I don't have some foundation or some skill, I'll be blocked, and won't be able to solve some thing in the way of my goals. But that doesn't actually sound right. Like it seems like the main important thing for people who do incredible things is their ability to do problem solving on the things that come up, and not the skills that they had previously built up in a "skill bank". Raw problem solving is the real thing, and skills are cruft. (Or maybe not cruft per se, but more like a side effect. The compiled residue of previous problem solving. Or like a code base from a previous project that you might repurpose.)

Part of the problem with this is that I don't know what I want for my own sake, though. I want to be awesome, which in my conception means being able to do things. I note that wanting "to be able to do things" is a leaky sort of motivation: because the victory condition is not clearly defined, it can't be crisply compelling, and so there's a lot of waste somehow. The sort of motivation that works is simply wanting to do something, not wanting to be able to do something. Like specific discrete goals that one could accomplish, know that one accomplished, and then (in most cases) move on from.

But most of the things that I want by default are of the sort "wanting to be able to do", because if I had more capabilities, that would make me awesome. But again, that's not actually conforming with my actual model of the world. The thing that makes someone awesome is general problem solving capability, more than specific capacities. Specific capacities are brittle. General problem solving is not.

I guess that I could pick arbitrary goals that seem cool. But I'm much more emotio
6 · mr-hire · 9d · Couldn't Eliezer just remove every reference to Harry Potter and publish it separately? It worked for E. L. James.
4 · mr-hire · 8d · Just had an excellent chat with CFAR cofounder (although no longer a part of CFAR) Michael Smith [], breaking down in excruciating detail a skill he calls "Breaking Free." A step-by-step process to:

1. Notice auto-pilot scripts you are running that are causing you pain.
2. Dissolve them so you can see what actions will lead to what you truly want.

Now, I'm looking for people to teach this skill to! It would involve a ~2 hour session where I ask you why you want the skill and teach it to you, then a ~30 minute followup session a couple weeks later where we talk about what the skill has done for you. I'm happy to give free coaching on the skill to anyone who asks; all I ask is that I can use the recordings of your session in the podcast about the skill. Anyone interested?
4 · DanielFilan · 8d · Avoid false dichotomies when reciting the litany of Tarski. Suppose I were arguing about whether it's morally permissible to eat vegetables. I might stop in the middle and say:

But this ignores the possibility that it's neither morally permissible nor morally impermissible to eat vegetables, because (for instance) things don't have moral properties, or morality doesn't have permissible vs. impermissible categories, or whether or not it's morally permissible or impermissible to eat vegetables depends on whether or not it's Tuesday.

Luckily, when you're saying the litany of Tarski, you have a prompt to actually think about the negation of the belief in question. Which might help you avoid this mistake.

Thursday, August 6th 2020

1 · sberens · 9d · TO WHAT EXTENT IS CREATING JOBS GOOD?

Is it always better to remove/replace inefficient jobs? For example, if a company employs 100 people to do manual data entry, would it be better for either the economy or the utility function to fire them and automate the jobs?
1 · Mati_Roy · 9d · I can pretty much only think of good reasons for having generally pro-entrapment laws. Not any kind of traps, but some kinds of traps seem robustly good. Ex.: I'd put traps for situations that are likely to happen in real life, and that show unambiguous criminal intent. It seems like a cheap and effective way to deter crimes and identify people at risk of criminal behaviors. I've only thought about this for a bit though, so maybe I'm missing something. x-post with Facebook: []
