All Posts

Sorted by New

Thursday, November 14th 2019

No posts for November 14th 2019
Shortform [Beta]
11Ben Pace12h Trying to think about building some content organisations and filtering systems on LessWrong. I'm new to a bunch of the things I discuss below, so I'm interested in other people's models of these subjects, or links to sites that solve the problems in different ways. Two Problems So, one problem you might try to solve is that people want to see all of a thing on a site. You might want to see all the posts on reductionism on LessWrong, or all the practical how-to guides (e.g. how to beat procrastination, Alignment Research Field Guide, etc), or all the literature reviews on LessWrong. And so you want people to help build those pages. You might also want to see all the posts corresponding to a certain concept, so that you can find out what that concept refers to (e.g. what the term "Goodhart's law" or "slack" or "mesa-optimisers" refers to). Another problem you might try to solve is that while many users are interested in lots of the content on the site, they have varying levels of interest in the different topics. Some people are mostly interested in the posts on big picture historical narratives, and less so in models of one's own mind that help with dealing with emotions and trauma. Some people are very interested in AI alignment, some are interested in only the best such posts, and some are interested in none. I think the first problem is supposed to be solved by Wikis, and the second problem is supposed to be solved by Tagging. Speaking generally, Wikis allow dedicated users to curate pages around certain types of content, highlighting the best examples, some side examples, and writing some context for people arriving on the page to understand what the page is about. It's a canonical, update-able, highly editable page built around one idea. Tagging is much more about filtering than about curating. Tagging Let me describe some different styles of tagging. On the site there are about 100 tags in total. Most tags give a very broad description of an area o
7Raemon15h The 2018 Long Review (Notes and Current Plans) I've spent much of the past couple years pushing features that help with the early stages of the intellectual pipeline – things like shortform, and giving authors moderation tools that let them have the sort of conversation they want (which is often higher-context, and assumes a particular paradigm that the author is operating in). Early stage ideas benefit from a brainstorming, playful, low-filter environment. I think an appropriate metaphor for those parts of LessWrong is "a couple people in a research department chatting about their ideas." But longterm incentives and filters matter a lot as well. I've focused on the early stages because that's where the bottleneck seemed to be, but LessWrong is now at a place where I think we should start prioritizing the later stages of the pipeline – something more analogous to publishing papers, and eventually distilling them into textbooks. So, here's the current draft of a plan that I've been discussing with other LW Team members: — The Long Review Format — Many LessWrong posts are more conceptual than empirical, and it's hard to tell immediately how useful they are. I think they benefit a lot from hindsight. So, once each year, we could reflect as a group about the best posts of the previous year*, and which of them seem to have withstood the tests of time as something useful, true, and (possibly) something that should enter into the LessWrong longterm canon that people are expected to be familiar with. Here's my current best guess for the format: [note: I currently expect the entire process to be fully public, because it's not really possible for it to be completely private, and "half public" seems like the worst situation to me] * (1 week) Nomination * Users with 1000+ karma can nominate posts from 2018-or-earlier, desc

Wednesday, November 13th 2019

No posts for November 13th 2019
Shortform [Beta]
6TurnTrout1d Yesterday, I put the finishing touches on my chef d'œuvre, a series of important safety-relevant proofs I've been striving for since early June. Strangely, I felt a great exhaustion come over me. These proofs had been my obsession for so long, and now - now, I'm done. I've had this feeling before; three years ago, I studied fervently for a Google interview. The literal moment the interview concluded, a fever overtook me. I was sick for days. All the stress and expectation and readiness-to-fight which had been pent up, released. I don't know why this happens. But right now, I'm still a little tired, even after getting a good night's sleep.
5Naryan Wong1d Meta-moves may look off-topic from the object level Halfway through a double-crux about efficient markets, my interlocutor asks how I'm feeling. I'm deciding where to go for lunch and my friend asks me if I'm ready for the presentation in the afternoon. I'm planning my road-trip route on Waze and my partner asks what time we plan on leaving. Imagine if every time someone mentioned something you consider irrelevant or off-topic, instead of dismissing it, you viewed it as a meta-move and considered their meta-frame on your thinking.
2elityre1d new post: Metacognitive space [Part of my Psychological Principles of Personal Productivity, which I am writing mostly in my Roam, now.] Metacognitive space is a term of art that refers to a particular first person state / experience. In particular it refers to my propensity to be reflective about my urges and deliberate about the use of my resources. I think it might literally be having the broader context of my life, including my goals and values, and my personal resource constraints, loaded up in peripheral awareness. Metacognitive space allows me to notice aversions and flinches, and take them as object, so that I can respond to them with Focusing or dialogue, instead of being swept around by them. Similarly, it seems, in practice, to reduce my propensity to act on immediate urges and temptations. [Having MCS is the opposite of being [[{Urge-y-ness | reactivity | compulsiveness}]]?] It allows me to “absorb” and respond to happenings in my environment, including problems and opportunities, taking considered action instead of the semi-automatic first response that occurs to me. [That sentence there feels a little fake, or maybe about something else, or maybe is just playing into a stereotype?] When I “run out” of metacognitive space, I will tend to become ensnared in immediate urges or short term goals. Often this will entail spinning off into distractions, or becoming obsessed with some task (of high or low importance), for up to 10 hours at a time. Some activities that (I think) contribute to metacognitive space: * Rest days * Having a few free hours between the end of work for the day and going to bed * Weekly [[Scheduling]]. (In particular, weekly scheduling clarifies for me the resource constraints on my life.) * Daily [[Scheduling]] * [[meditation]], including short meditation. * Notably, I’m not sure if meditation is much more efficient than just taking the same time to go for a walk. I think it might be or might not be. * [[Exerc

Tuesday, November 12th 2019

No posts for November 12th 2019
Shortform [Beta]
11Kaj_Sotala2d Here's a mistake which I've sometimes committed and gotten defensive as a result, and which I've seen make other people defensive when they've committed the same mistake. Take some vaguely defined, multidimensional thing that people could do or not do. In my case it was something like "trying to understand other people". Now there are different ways in which you can try to understand other people. For me, if someone opened up and told me of their experiences, I would put a lot of effort into really trying to understand their perspective, to try to understand how they thought and why they felt that way. At the same time, I thought that everyone was so unique that there wasn't much point in trying to understand them by any *other* way than hearing them explain their experience. So I wouldn't really, for example, try to make guesses about people based on what they seemed to have in common with other people I knew. Now someone comes and happens to mention that I "don't seem to try to understand other people". I get upset and defensive because I totally do, this person hasn't understood me at all! And in one sense, I'm right - it's true that there's a dimension of "trying to understand other people" that I've put a lot of effort into, in which I've probably invested more than other people have. And in another sense, the other person is right - while I was good at one dimension of "trying to understand other people", I was severely underinvested in others. And I had not really even properly acknowledged that "trying to understand other people" had other important dimensions too, because I was justifiably proud of my investment in one of them. But from the point of view of someone who *had* invested in those other dimensions, they could see the aspects in which I was deficient compared to them, or maybe even compared to the median person. 
(To some extent I thought that my underinvestment in those other dimensions was *virtuous*, because I was "not making assumption
9elityre2d New (short) post: Desires vs. Reflexes [Epistemic status: a quick thought that I had a minute ago.] There are goals / desires (I want to have sex, I want to stop working, I want to eat ice cream) and there are reflexes (anger, “wasted motions”, complaining about a problem, etc.). If you try and squash goals / desires, they will often (not always?) resurface around the side, or find some way to get met. (Why not always? What is the difference between those that do and those that don’t?) You need to bargain with them, or design outlet policies for them. Reflexes on the other hand are strategies / motions that are more or less habitual to you. These you train or untrain.
2Hazard3d Have some horrible jargon: I spit out a question or topic and ask you for your NeMRIT, your Next Most Relevant Interesting Take. Either give your thoughts about the idea I presented as you understand it, or, if that's boring, give the thoughts that interest you that seem conceptually closest to the idea I brought up.
1Evan Rysdam3d I just noticed that I've got two similarity clusters in my mind that keep getting called to my attention by wording dichotomies like high-priority and low-priority, but that would themselves be better labeled as big and small. This was causing me to interpret phrases like "doing a string of low-priority tasks" as having a positive affect (!) because what it called to mind was my own activity of doing a string of small, on-average medium-priority tasks. My thought process might improve overall if I toss out the "big" and "small" similarity clusters and replace them with clusters that really are centered around "high-priority" and "low-priority".

Monday, November 11th 2019

No posts for November 11th 2019
Shortform [Beta]
3Raemon3d Meta/UI: I currently believe it was a mistake to add the "unread green left-border" to posts and comments in the Recent Discussion section – it mostly makes me click a bunch of things to remove the green that I didn't really want to mark as read. Curious if anyone has opinions about that.
3Grue_Slinky3d In reasoning about AGI, we're all aware of the problems with anthropomorphizing, but it occurs to me that there's also a cluster of bad reasoning that comes from an (almost?) opposite direction, where you visualize an AGI to be a mechanical automaton and draw naive conclusions based on that. For instance, every now and then I've heard someone from this community say something like: Even if ZFC is inconsistent, this hardly seems like a legitimate concern. There's no reason to hard-code ZFC into an AI unless we want a narrow AI that's just a theorem prover (e.g. Logic Theorist). Anything close to AGI will necessarily build rich world models, and from the standpoint of these, ZFC wouldn't literally be everything. ZFC would just be a sometimes-useful tool it discovers for organizing its mathematical thinking, which in turn is just a means toward understanding physics etc. better, much as humans wouldn't go crazy if ZFC yields a contradiction. The general fallacy I'm pointing to isn't just "AGI will be logic-based" but something more like "AGI will act like a machine, an automaton, or a giant look-up table". This is technically true, in the same way humans can be perfectly described as a giant look-up table, but it's just the wrong level of abstraction for thinking about agents (most of the time) and can lead one to silly conclusions if one isn't really careful. For instance, my (2nd hand, half-baked, and lazy) understanding of Penrose's arguments is as follows: Godel's theorems say formal systems can't do X, humans can do X, therefore human brains can't be fully described as formal systems (or maybe he references Turing machines and the halting problem, but the point is still similar).
Note that this makes sense as stated, the catch is that "the human brain when broken down all the way to a Turing machine" is what the Godel/Turing stuff applies to, not "the human brain at the level of abstraction we use to think about it (in terms of 'thoughts', 'concepts', etc.)".
2Hazard3d Kevin Zollman at CMU looks like he's done a decent amount of research on group epistemology. I plan to read the deets at some point; here's a link if anyone wants to do it first and post something about it.
1liam_hinzman3d GROWTH, NOVELTY, AND PRESERVATION Note: This was heavily inspired by toonalfrink's recent shortform post [] There's this Farnam Street blog post that says "if you can get 1 percent better each day for one year, you’ll end up thirty-seven times better by the time you’re done" This always seemed a bit unrealistic to me, but until recently I wasn't able to pinpoint why. I think digging into this is important because it's easy to look back 1 year, see how much you've grown, and then extrapolate to your future and think "damn, there's no way I won't reach near-Elon status at some point in my life" So why doesn't everybody become Elon? It's because most people think: Growth = Novelty. I thought that if I did new things, if I'm having unique experiences, getting unique insights, and having realizations, then I'm growing. This is mostly true, but it's missing an important component. Decay. After a lot of conferences I'll realize that I just need to stop overthinking things and network like my life depends on it. It just makes sense, and it feels like I've leveled up. But then at the next conference I'm a bit nervous. Eventually I get into the swing of it and realize that I need to stop hesitating. I've had two realizations, but I didn't level up twice. I leveled up once, decayed, and then got back to where I was before. My initial response to this is "I internalized the realization more the second time". But this shows that I haven't actually grown more than how much I perceived I grew the first time. In reality Growth = Novelty - Decay, which is equivalent to Growth = Novelty + Preservation Preservation, compounding, consistency. Whatever you want to call it, it's important. When I want to grow, instead of starting over ("what's the best upgrade to add to my life?") I should check whether I'm on track with the implementation of past good ideas ("what did my past self in
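The arithmetic behind the quoted Farnam Street claim, and the Growth = Novelty - Decay correction, can be sketched numerically. This is a toy model only: the 1%-per-day rate comes from the quote, while the decay rate is an invented illustrative parameter, not anything the post specifies.

```python
# Toy model: naive 1%/day compounding vs. compounding where most
# of each day's gain decays away before it can stack.

def naive_growth(rate=0.01, days=365):
    # The Farnam Street claim: (1 + 0.01)^365 is roughly 37.8x.
    return (1 + rate) ** days

def growth_with_decay(novelty=0.01, decay=0.009, days=365):
    # Growth = Novelty - Decay: only the retained sliver compounds.
    # The 0.009 decay rate is a made-up illustration.
    return (1 + novelty - decay) ** days

print(round(naive_growth(), 1))        # ~37.8x
print(round(growth_with_decay(), 2))   # ~1.44x over the same year
```

The point the sketch makes is the post's: the exponent is brutally sensitive to the net daily rate, so without preservation the thirty-seven-fold figure collapses to something modest.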

Sunday, November 10th 2019

No posts for November 10th 2019
Shortform [Beta]
4rmoehn4d Updated the Predicted AI alignment event/meeting calendar. Main change: Deadline for SafeAI workshop corrected to November from December.
1Eigil Rischel4d A thought about productivity systems/workflow optimization: One principle of good design is "make the thing you want people to do, the easy thing to do". However, this idea is susceptible to the following form of Goodhart: often a lot of the value in some desirable action comes from the things that make it difficult. For instance, sometimes I decide to migrate some notes from one note-taking system to another. This is usually extremely useful, because it forces me to review the notes and think about how they relate to each other and to the new system. If I make this easier for myself by writing a script to do the work (as I have sometimes done), this important value is lost. Or think about spaced repetition cards: You can save a ton of time by reusing cards made by other people covering the same material - but the mental work of breaking the material down into chunks that can go into the spaced-repetition system, which is usually very important, is lost.

Saturday, November 9th 2019

No posts for November 9th 2019
Shortform [Beta]
10toonalfrink5d Here's a faulty psychological pattern that I recently resolved for myself. It's a big one. I want to grow. So I seek out novelty. Try new things. For example I might buy high-lumen light bulbs to increase my mood. So I buy them, feel somewhat better, celebrate the win and move on. Problem is, I've bought high-lumen bulbs three times in my life now already, yet I sit here without any. So this pattern might happen all over again: I feel like upgrading my life, get this nice idea of buying light bulbs, buy them, celebrate my win and move on. So here's 4 life-upgrades, but did I grow 4 times? Obviously I only grew once. From not having high lumen light bulbs to having them. My instinct towards growth seems to think Growth = Novelty. But in reality, it seems to be more like Growth = Novelty - Decay, which I define as equal to Growth = Novelty + Preservation. The tap I installed that puts this preservation mindset into practice seems to be very helpful. It's as follows: if I wonder what to do, instead of starting over ("what seems like the best upgrade to add to my life?") I first check whether I'm on track with the implementation of past good ideas ("what did my past self intend to do with this moment again?") Funnily enough, so far the feeling I get from this mindset seems pretty similar to the feeling I get from meditation. And meditation can be seen as training yourself to put your attention on your past intentions too. I think this one goes a lot deeper than what I've written here. I'll be revisiting this idea.
8elityre6d Totally an experiment, I'm trying out posting my raw notes from a personal review / theorizing session, in my short form. I'd be glad to hear people's thoughts. This is written for me, straight out of my personal Roam repository. The formatting is a little messed up because LessWrong's bullets don't support indefinite levels of nesting. This one is about Urge-y-ness / reactivity / compulsiveness * I don't know if I'm naming this right. I think I might be lumping categories together. * Let's start with what I know: * There are three different experiences, which might turn out to have a common cause, or which might turn out to be insufficiently differentiated 1. I sometimes experience a compulsive need to do something or finish something. 2. examples: 1. That time when I was trying to make an audiobook of Focusing: Learn from the Masters 2. That time when I was flying to Princeton to give a talk, and I was frustratedly trying to add photos to some dating app. 3. Sometimes I am anxious or agitated (often with a feeling in my belly), and I find myself reaching for distraction, often youtube or webcomics or porn. 4. Sometimes, I don't seem to be anxious, but I still default to immediate gratification behaviors, instead of doing satisfying focused work ("my attention like a plow, heavy with inertia, deep in the earth, and cutting forward"). I might think about working, and then deflect to youtube or webcomics or porn. 5. I think this has to do with having a thought or urge, and then acting on it unreflectively. Examples: 1. I think I've been like that for much of the past two days. [2019-11-8] These might be different states, each of which is high on some axis: something like reactivity (as opposed to responsiveness) or impulsiveness or compulsiveness. If so, the third case
4orthonormal5d Decision-theoretic blackmail is when X gets Y to choose A over B, not via acting to make the consequences of A more appealing to Y, but by making the consequences of B less appealing to Y. The exceptions to this definition are pretty massive, though, and I don't know a principled emendation that excludes them. 1. There's a contract / social contract / decision-theoretic equilibrium, and within that, B will be punished. (This may not be a true counterexample, because the true choice is whether to join the contract... though this is less clear for the social contract than for the other two.) 2. Precommitting not to give in to blackmail is not itself blackmail. Of course, in an ultimatum game both players can imagine themselves as doing this. Can anyone think of more exceptions, or a redefinition that clearly excludes these?
2Chris_Leong5d I've been thinking about Rousseau and his conception of freedom again because I'm not sure I hit the nail on the head last time. The most typical definition of freedom, and that championed by libertarians, focuses on an individual's ability to make choices in their daily life. On the more libertarian end, the government is seen as an oppressor and a force of external compulsion. On the other hand, Rousseau's view focuses on "the people" and their freedom to choose the kind of society that they want to live in. Instead of being seen as an external entity, the government is seen as a vessel through which the people can express and realise this freedom (or at least as potentially becoming such a vessel). I guess you could call this a notion of collective freedom, but at the same time this risks obscuring an important point: that at the same time it is an individual freedom as well. Part of it is that "the people" is made up of individual "people", but it goes beyond this. The "will of the people", at least in its idealised form, isn't supposed to be about a mere numerical majority or some kind of averaging of perspectives or the kind of limited and indirect influence allowed in most representative democracies, but rather it is supposed to be about a broad consensus; a direct instantiation of the will of most individuals. There is a clear tension between these kinds of freedom: the more the government respects personal freedom, the less control the people have over the kind of society they want to live in, and the more the government focuses on achieving the "will of the people", the less freedom exists for those for whom this doesn't sound so appealing.
Proponents of positive freedom argue that theoretical freedoms, such as there being no legal restriction against gaining an education, are worthless if these opportunities aren'

Friday, November 8th 2019

No posts for November 8th 2019
Shortform [Beta]
9TekhneMakre7d The hermeneutic spiral is the process of understanding a text (or more generally, anything big and confusing) by passing over it again and again, each time using what you've learned to more deeply understand the workings and roles of each part and grow a truer image of the whole. The hermeneutic spiral is not depth-first search; it's more like breadth-first search, but it can also involve depth, and altering the ordering you use to search, and expanding the set you're searching over. The hermeneutic spiral involves noticing landmarks, principles, cruxes, and generators. It involves logogenesis. It's an aspect of how Alexander Grothendieck did math. It's the Unix philosophy of programming. It's a way to make a pattern language (of rationality). Compare the expectation–maximization algorithm. GPT-2 uses the transformer architecture, which is a sort of toy version of a hermeneutic spiral. "The sun was sinking in the sky, for Harry had been thinking for some hours now, thinking mostly the same thoughts over and over, but with key differences each time, like his thoughts were not going in circles, but climbing a spiral, or descending it." HPMOR, ch. 63.
7elityre7d New post: Some musings about exercise and time discount rates [Epistemic status: a half-thought, which I started on earlier today, and which might or might not be a full thought by the time I finish writing this post.] I’ve long counted exercise as an important component of my overall productivity and functionality. But over the past months my exercise habit has slipped some, without apparent detriment to my focus or productivity. But this week, after coming back from a workshop, my focus and productivity haven’t really booted up. Here’s a possible story: Exercise (and maybe meditation) expands the effective time-horizon of my motivation system. By default, I will fall towards attractors of immediate gratification and impulsive action, but after I exercise, I tend to be tracking, and to be motivated by, progress on my longer term goals. [1] When I am already in the midst of work (my goals are loaded up and the goal threads are primed in short term memory), this sort of short term compulsiveness causes me to fall towards task completion: I feel slightly obsessed about finishing what I’m working on. But if I’m not already in the stream of work, seeking immediate gratification instead drives me to youtube and web comics and whatever. (Although it is important to note that I did switch my non-self-tracking web usage to Firefox this week, and I don’t have my usual blockers for youtube and for SMBC set up yet. That might totally account for the effect that I’m describing here.) In short, when I’m not exercising enough, I have less metacognitive space for directing my attention and choosing what is best to do. But if I’m in the stream of work already, I need that metacognitive space less: because I’ll default to doing more of what I’m working on. (Though, I think
2Vanessa Kosoy6d It seems useful to consider agents that reason in terms of an unobservable ontology, and may have uncertainty over what this ontology is. In particular, in Dialogic RL, the user's preferences are probably defined w.r.t. an ontology that is unobservable by the AI (and probably unobservable by the user too) which the AI has to learn (and the user is probably uncertain about emself). However, ontologies are more naturally thought of as objects in a category than as elements in a set. The formalization of an "ontology" should probably be a POMDP or a suitable Bayesian network. A POMDP involves an arbitrary set of states, so it's not an element in a set, and the class of POMDPs can be naturally made into a category. Therefore, there is a need for defining the notion of a probability measure over a category. Of course we can avoid this by enumerating the states, considering the set of all possible POMDPs w.r.t. this enumeration and then requiring the probability measure to be invariant w.r.t. state relabeling. However, the category theoretic point of view seems more natural, so it might be worth fleshing out. Ordinary probability measures are defined on measurable spaces. So, first we need to define the analogue of a "measurable structure" (σ-algebra) for categories. Fix a category C. Denote Meas the category of measurable spaces. A measurable structure on C is then specified by providing a Grothendieck fibration B : MF_C → Meas and an equivalence E : B^{-1}(pt) → C. Here, B^{-1}(pt) stands for the essential fiber of B over the one point space pt ∈ Meas. The intended interpretation of MF_C is, the category of families of objects in C indexed by measurable spaces. The functor B is supposed to extract the base (index space) of the family. We impose the following conditions on MF_C and B: Given A∈
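The definitional sketch in Vanessa's note can be restated in clean notation. This is my reconstruction of symbols that came through garbled; the names MF_C, B, E, and pt are the post's own:

```latex
% A "measurable structure" on a category C, per the post's sketch:
% fix a category C, and let Meas be the category of measurable spaces.
% The structure consists of
%   - a Grothendieck fibration  B : MF_C -> Meas, and
%   - an equivalence            E : B^{-1}(pt) -> C,
% where B^{-1}(pt) is the essential fiber of B over the one-point
% measurable space pt in Meas.
\[
  B \colon \mathrm{MF}_C \longrightarrow \mathrm{Meas}, \qquad
  E \colon B^{-1}(\mathrm{pt}) \xrightarrow{\;\simeq\;} C
\]
```

The intended reading, as the post says, is that MF_C is the category of measurable-space-indexed families of objects of C, with B extracting the index space; the fiber over a point then recovers C itself.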
1TekhneMakre7d Logogenesis. The creation of words, or more generally the creation of forms of language, such as phrases, abbreviations, grammatical rules, and so on. "Logos" = reason, idea, word; "genesis" = creation, origin, from "*gene-" = give birth.
1TekhneMakre7d Subficial. Opposite of superficial: under the surface, or on the underside. "sub-" meaning "under", "-ficial" meaning "having to do with form or face".

Thursday, November 7th 2019

No posts for November 7th 2019
Shortform [Beta]
10toonalfrink7d You may have heard of the poverty trap, where you have so little money that you're not able to spend any money on the things you need to make more. Being broke is an attractor state. You may have heard of the loneliness trap. You haven't had much social interaction lately, which makes you feel bad and anxious. This anxiety makes it harder to engage in social interaction. Being lonely is an attractor state. I think the latter is a close cousin of something that I'd like to call the irrelevance trap: * Lemma 1: having responsibilities is psychologically empowering. When others depend on your decisions, it is so much easier to make the right decision. * Lemma 2: being psychologically empowered makes it more likely for you to take on responsibility, and for others to give you responsibility, because you're more able to handle it. I speculate that some forms of depression (the dopaminergic type) are best understood as irrelevance traps. I'm pretty sure that that was the case for me. How do you escape such a trap? Well you escape a loneliness trap by going against your intuition and showing up at a party. You escape an irrelevance trap by going against your intuition and taking on more responsibility than you feel you can handle.
6Raemon8d Ben Kuhn's Why and How to Start a For Profit Company Serving Emerging Markets is, in addition to being generally interesting, sort of cute for being two of the canonical Michael Vassar Questions rolled into one, while being nicely operationalized and clear. ("Move somewhere far away and stay there long enough to learn that social reality is arbitrary", and "start a small business and/or startup to learn a bunch about how pieces of the world fit together" being the two that come easiest to mind)

Wednesday, November 6th 2019

No posts for November 6th 2019

Tuesday, November 5th 2019

Shortform [Beta]
3An1lam9d Interesting Bill Thurston quote, sadly from his obituary:
