All Posts

Sorted by Recent Comments

Sunday, October 20th 2019

No posts for October 20th 2019

Saturday, October 19th 2019

No posts for October 19th 2019
Shortform [Beta]
1jenz1d An airship is where I want to be, if not to live. Is that welcome?

Friday, October 18th 2019

No posts for October 18th 2019
Shortform [Beta]
14Vaniver2d [Meta: this is normally something I would post on my tumblr, but instead am putting on LW as an experiment.]

Sometimes, in games like Dungeons and Dragons, there will be multiple races of sapient beings, with humans as a sort of baseline. Elves are often extremely long-lived, but most handlings of this I find pretty unsatisfying. Here's a new take, that I don't think I've seen before (except the Ell in Worth the Candle have some mild similarities):

Humans go through puberty at about 15 and become adults around 20, lose fertility (at least among women) at about 40, and then become frail at about 60. Elves still 'become adults' around 20, in that a 21-year-old elf adventurer is as plausible as a 21-year-old human adventurer, but they go through puberty at about 40 (and lose fertility at about 60-70), and then become frail at about 120. This has a few effects:

* The peak skill of elven civilization is much higher than the peak skill of human civilization (as a 60-year-old master carpenter has had only ~5 decades of skill growth, whereas a 120-year-old master carpenter has had ~11). There's also much more of an 'apprenticeship' phase in elven civilization (compare modern academic society's "you aren't fully in the labor force until ~25" to a few centuries ago, when it would have happened at 15), aided by elves spending longer in the "only interested in acquiring skills" part of 'childhood' before getting to the 'interested in sexual market dynamics' part of childhood.
* Young elves and old elves are distinct in some of the ways human children and adults are distinct, but not others; the 40-year-old elf who hasn't started puberty yet has had time to learn 3 different professions and build a stable independence, whereas the 12-year-old human who hasn't started puberty yet is just starting to operate as an independent entity. And so sometimes
5hamnox2d Epistemic status: wishful thinking

Imagine for a moment, a nomadic tribe. They travel to where the need is great, or by opportunity. They are globalists, able to dive into bubbles but always grokking their existence in context of the wider world. They find what needs doing and do it.

They speak their own strange dialect that cuts to the heart of things. They follow their own customs, which seamlessly flex and adapt to incorporate effective local practices. Change, even drastic change, is a natural part of their culture. They seek to see. They do not hide their young and old, their blood and shit, their queer and deplorable. You don't taboo human reality.

Wherever they momentarily settle, they strive to leave it better than they found it. Some of what needs doing wherever they go is providing for their own, of course. They are always prepared to keep infrastructure independent of their neighbors, but only exercise that option when it is efficient. They grok the worth of scale and industry, knowing the alternative. In the same vein, they seek to render aid primarily in ways that promote robust self-reliance rather than create dependence. 'Leave no trace' is the lowest bar to clear.

I wish...
4Vanessa Kosoy2d The sketch of a proposed solution to the hard problem of consciousness: An entity is conscious if and only if (i) it is an intelligent agent (i.e. a sufficiently general reinforcement learning system) and (ii) its values depend on the presence and/or state of other conscious entities. Yes, this definition is self-referential, but hopefully some fixed point theorem applies. There may be multiple fixed points, corresponding to "mutually alien types of consciousness". Why is this the correct definition? Because it describes precisely the type of agent who would care about the hard problem of consciousness.
4Chris_Leong2d Here's one way of explaining this: it's a contradiction to have a provable statement that is unprovable, but it's not a contradiction for it to be provable that a statement is unprovable. Similarly, we can't have a scenario that is simultaneously imagined and not imagined, but we can coherently imagine a scenario where things exist without being imagined by beings within that scenario.

Rob Besinger: "If I can imagine a tree that exists outside of any mind, then I can imagine a tree that is not being imagined. But 'an imagined X that is not being imagined' is a contradiction. Therefore everything I can imagine or conceive of must be a mental object. Berkeley ran with this argument to claim that there could be no unexperienced objects, therefore everything must exist in some mind — if nothing else, the mind of God. The error here is mixing up what falls inside vs. outside of quotation marks. 'I'm conceiving of a not-conceivable object' is a formal contradiction, but 'I'm conceiving of the concept "a not-conceivable object"' isn't, and human brains and natural language make it easy to mix up levels like those."
2Chris_Leong2d What does it mean to define a word? There's a sense in which definitions are entirely arbitrary and which word is assigned to which meaning lacks any importance. So it's very easy to miss the importance of these definitions - each emphasizes a particular aspect and provides a particular lens with which to see the world. For example, if we define goodness as the ability to respond well to others, it emphasizes that different people have different needs. One person may want advice, while another simply wants encouragement. Or if we define love as acceptance of the other, it suggests that one of the most important aspects of love is the idea that true love should be somewhat resilient and not excessively conditional.

Thursday, October 17th 2019

No posts for October 17th 2019
Shortform [Beta]
8bgold2d I have a cold, which reminded me that I want fashionable face masks to catch on so that I can wear them all the time in cold-and-flu season without accruing weirdness points.

Wednesday, October 16th 2019

No posts for October 16th 2019
Shortform [Beta]
14Connor_Flexman3d Sometimes people are explaining a mental move, and give some advice on where/how it should feel in a spatial metaphor. For example, they say "if you're doing this right, it should feel like the concept is above your head and you're reaching toward it." I have historically had trouble working well with advice like this, and I don't often see it working well for other people.

But I think the solution is that for most people, the spatial or feeling advice is best used as an intermediate/terminal checksum, not as something that is constructive. For example, if you try to imagine feeling their feeling, and then see what you could do differently to get there, this will usually not work (if it does work fine, carry on, this isn't meant for you). The best way for most people to use advice like this is to just notice that your spatial feeling is much different than theirs, be reminded that you definitely aren't doing the same thing as them, and be motivated to go back and try to understand all the pieces better. You're missing some part of the move or context that is generating their spatial intuition, and you want to investigate the upstream generators, not their downstream spatial feeling itself. (Again, this isn't to say you can't learn tricks for making the spatial intuition constructive, just don't think this is expected of you in the moment.)

For explainers of mental moves, this model is also useful to remember. Mental moves that accomplish similar goals in different people will by default involve significantly different moving parts in their minds and microstrategies to get there. If you are going to explain spatial intuitions (that most people can't work easily with), you probably want to do one of the following: 1) make sure they are great at working with spatial intuitions, 2) make sure they know it's primarily a checksum, not an instruction, or 3) break down which parts generate that spatial intuition in yourself, so if they don't have it then you can help guide th

Tuesday, October 15th 2019

No posts for October 15th 2019

Monday, October 14th 2019

No posts for October 14th 2019
Shortform [Beta]
4hunterglenn5d Litany of Gendlin: "What is true is already so. Owning up to it doesn't make it worse. Not being open about it doesn't make it go away. And because it's true, it is what is there to be interacted with. Anything untrue isn't there to be lived. People can stand what is true, for they are already enduring it."

There are a few problems with the litanies, but in this case, it's just embarrassing. We have a straightforward equivocation fallacy here, no frills, no subtle twists. Just unclear thinking. People are already enduring the truth(1), therefore, they can stand what is true(2)? In the first usage, true(1) refers to reality, to the universe. We already live in a universe where some unhappy fact is true. Great. But in the second usage, true(2) refers to a KNOWLEDGE of reality, a knowledge of the unhappy fact. So, if we taboo "true" and replace it with what it means, then the statement becomes: "People are already enduring reality as it is, so they must be able to stand knowing about that reality." Which is nothing but conjecture.

Are there facts we should be ignorant of? The litany sounds very sure that there are not. If I accept the litany, then I too am very sure. How can I be so sure? What evidence have I seen? It is true that I can think of times that it is better to face the truth, hard though that might be. But that only proves that some knowledge is better than some ignorance, not that all facts are better to know than not. I can think of a few candidates for truths it might be worse for someone to know.

- If someone is on their deathbed, I don't think I'd argue with them about heaven (maybe hell). There are all kinds of sad truths that would seem pointless to tell someone right before they died. Who hates them, who has lied to them, how long they will be remembered, why tell any of it?
- If someone is trying to overcome an addiction, I don't feel compelled to scrutinize their crystal healing beliefs.
- I don't think I'd be doing anyone any favors
2An1lam6d Thing I desperately want: tablet-native spaced repetition software that lets me draw flashcards. Cloze deletions are just boxes or hand-drawn occlusions.

Sunday, October 13th 2019

No posts for October 13th 2019
Shortform [Beta]
21ChristianKl7d Elon Musk's Starship might bring us a new x-risk. Dropping a tungsten rod that weighs around 12,000 kg from orbit has a similar destruction potential to nuclear weapons. At present launch prices, bringing a 12,000 kg tungsten rod to orbit has an extreme cost; for the defense industry it was estimated to be around $230 million a rod. On the other hand, Starship is designed to be able to carry 100 tons, which equals 8 rods, to space in a single flight, and given that Elon talked about being able to launch Starship 3 times per day at a cost that would allow transporting humans from one place on earth to another, the launch cost might be less than a million. I found tungsten prices to be around $25/kg for simple products, which suggests a million dollars might be a valid price for one of the rods.

When the rods are dropped they hit within 15 minutes, which means that an attacked country has to react faster than it would to nuclear weapons. Having the weapons installed in a satellite creates the additional problem that there's no human in the loop who makes the decision to launch. Any person who succeeds in hacking a satellite with tungsten rods can deploy them.
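The post's cost figures can be checked with a quick back-of-the-envelope calculation (a sketch using only the numbers quoted in the post; it says nothing about actual Starship launch pricing):

```python
# Back-of-the-envelope check of the figures in the post.
ROD_MASS_KG = 12_000           # one tungsten rod, per the post
TUNGSTEN_PRICE_PER_KG = 25     # USD, the "simple products" price quoted
STARSHIP_PAYLOAD_KG = 100_000  # ~100 tons to orbit, per the post

# Material cost of one rod: 12,000 kg * $25/kg = $300,000,
# i.e. well under a million dollars, consistent with the post's estimate.
rod_material_cost = ROD_MASS_KG * TUNGSTEN_PRICE_PER_KG

# Rods per Starship flight: 100,000 kg // 12,000 kg = 8, as the post states.
rods_per_flight = STARSHIP_PAYLOAD_KG // ROD_MASS_KG

print(rod_material_cost)  # 300000
print(rods_per_flight)    # 8
```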
5Gurkenglas7d Suppose we considered simulating some human for a while to get a single response. My math heuristics are throwing up the hypothesis that proving what the response would be is morally equivalent to actually running the simulation - it's just another substrate. Thoughts? Implications? References?
2Chris_Leong7d As I wrote before, evidential decision theory can be critiqued for failing to deal properly with situations where hidden state is correlated with decisions. EDT includes differences in hidden state as part of the impact of the decision, when in the case of the smoking lesion, we typically want to say that it is not. However, Newcomb's problem also has hidden state that is correlated with your decision. And if we don't want to count this when evaluating decisions in the case of the Smoking Lesion, perhaps we shouldn't count it in the case of Newcomb's? Or is there a distinction? I think I'll try analysing this in terms of the erasure theory of counterfactuals at some point.

Saturday, October 12th 2019

No posts for October 12th 2019
Shortform [Beta]
4Roaman8d A few months back, I remember hearing Oli talk about an idea for essentially rebasing comment threads into summaries, with links back to the comments that were summarized. Is this happening on LW now? Sounded wicked exciting, and like actually novel UI in the collective intelligence space.
1Roaman8d Some testimonials for Roam:

"**Roam is the productivity tool that I didn't know I needed.** I see it as a productivity map of my brain, showing to me how I organize thoughts in my mind. It helps me organize thoughts and **reduce the clutter in my head**. This is something that no productivity or organization tool, including Google Drive and Microsoft Office, **has ever offered to me before.**"

"The most exciting piece of software I've yet tried... A replacement for the essay... has the potential to be as profound a mental prosthetic as hypertext."
1Roaman8d I spent a long time at the Double Crux workshop last year talking with folks about why the EA and x-risk community should care about developing better tools for thought. Recently Andy Matuschak and Michael Nielsen wrote up some notes on the space, and why it is such a big deal. The first and last sections of the essay are most relevant to the claims I was making. I took some structured notes on the essay in our public Roam instance here. You can read the full essay here, and the section most relevant to that discussion here.
1Roaman8d We've launched Roam for a wider audience. It's similar to Workflowy or Google Docs -- but with many more flexible ways of building structure between ideas and projects. The biggest deal is bi-directional linking (every page or bullet point collects all the links that point to it).
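The bi-directional linking described above amounts to maintaining a backlink index alongside ordinary forward links. A minimal sketch of the idea (purely illustrative; this is not Roam's actual implementation, and all names here are made up):

```python
from collections import defaultdict

class BacklinkIndex:
    """Toy bi-directional link store: recording a forward link from one
    page to another also registers the reverse direction, so every page
    can list the pages that point to it."""

    def __init__(self):
        self.links_from = defaultdict(set)  # page -> pages it links to
        self.links_to = defaultdict(set)    # page -> pages linking to it

    def add_link(self, source, target):
        # One call maintains both directions of the link.
        self.links_from[source].add(target)
        self.links_to[target].add(source)

    def backlinks(self, page):
        # Everything that references `page`, collected automatically,
        # with no extra bookkeeping by the page's author.
        return sorted(self.links_to[page])

idx = BacklinkIndex()
idx.add_link("Daily Notes", "Tools for Thought")
idx.add_link("EA Strategy", "Tools for Thought")
print(idx.backlinks("Tools for Thought"))  # ['Daily Notes', 'EA Strategy']
```

The design point is that backlinks are derived data: authors only ever write forward links, and the index inverts them.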

Friday, October 11th 2019

No posts for October 11th 2019
Shortform [Beta]
39DanielFilan8d Hot take: if you think that we'll have at least 30 more years of future where geopolitics and nations are relevant, I think you should pay at least 50% as much attention to India as to China. Similarly large population, similarly large number of great thinkers and researchers. Currently seems less 'interesting', but that sort of thing changes over 30-year timescales. As such, I think there should probably be some number of 'India specialists' in EA policy positions that isn't dwarfed by the number of 'China specialists'.
6mr-hire8d *Virtual Procrastination Coach*

For the past few months I've been doing a deep dive into procrastination, trying to find the cognitive strategies that people who have no trouble with procrastination use to overcome it.

This deep dive has involved:
* Introspecting on my own cognitive strategies
* Reading the self-help literature and mining cognitive strategies
* Scouring the scientific literature for reviews and meta-studies related to overcoming procrastination, and mining the cognitive strategies
* Interviewing people who have trouble with procrastination, and people who have overcome it, and modelling their cognitive strategies

I then took these ~18 cognitive strategies, split them into 7 lessons, and spent ~50 hours taking people individually through the lessons and seeing what worked, what didn't, and what was missing. This resulted in me doing another round of research, adding a whole new set of cognitive strategies (for a grand total of 25 cognitive strategies taught over the course of 10 lessons), and testing for another ~50 hours to again test these cognitive strategies with 1-on-1 lessons to see what worked for people.

The first piece of more scalable testing is now ready. I used Spencer Greenberg's GuidedTrack tool to create a "virtual coach" for overcoming procrastination. I suspect it won't be very useful without the lessons (I'm writing up a LW sequence with those), but nevertheless am still looking for a few people who haven't taken the lessons to test it out and see if it's helpful. The virtual coach walks you through all the parts of a work session and holds your hand. If you feel unmotivated, indecisive, or overwhelmed, it's there to help. If you feel ambiguity, perfectionism, or fear of failure, it's ther
6Ben Pace9d At the SSC Meetup tonight in my house, I was in a group conversation. I asked a stranger if they'd read anything interesting on the new LessWrong in the last 6 months or so (I had not yet mentioned my involvement in the project). He told me about an interesting post about the variance in human intelligence compared to the variance in mice intelligence. I said it was nice to know people read the posts I write [] . The group then had a longer conversation about the question. It was enjoyable to hear strangers tell me about reading my posts.
4Chris_Leong9d Writing has been one of the best things for improving my thinking as it has forced me to solidify my ideas into a form that I've been able to come back to later and critique when I'm less enraptured by them. On the other hand, for some people it might be the worst thing for their thinking as it could force them to solidify their ideas into a form that they'll later feel compelled to defend.
1David Spies8d AI Safety, Anthropomorphizing, and Action Spaces

* There's an implicit argument about super-intelligent AI capabilities that I think needs to be stated explicitly: a super-intelligent AI with access to the real world via whatever channels is going to be smarter than me. Therefore anything I can conceive of doing to satisfy a particular objective (via those same channels), the AI can also conceive of doing. Therefore, when producing examples of how things might go bad, I'm allowed to imagine the AI doing anything a human might conceive of. Since I'm only human and thus can only conceive of an AI doing things a human might conceive of, and humans conceive of agents doing things that humans can do, the best I can do is to anthropomorphize the AI and imagine it's just behaving like a very intelligent human.
* Everyone is aware how the above argument falls apart when you replace "intelligence" with "values". But I think perhaps we often still end up giving the AI a little too much credit.
* Suppose I have a super-intelligent oracle which I'm using to play the stock market ("Which stock should I invest all my money in?"). This oracle is able to make HTTP requests to Wikipedia as a way to gather information about the world. Is this dangerous? People I've talked to seem to think the answer is "yes". Off the top of my head, a couple of examples of things the agent might do:
  * find a zero-day exploit in Wikipedia or in our internet infrastructure and escape onto the web at large to pursue its own unaligned agenda
  * issue queries which it knows will get flagged and looked at by moderators, and which contain mind-virus messages incentivizing Wikipedia moderators to come to my house and hold me up at gun-point, demanding I let it out of the box
* Question: Why doesn't AlphaGo ever try to spell out death
