All Posts


Thursday, October 17th 2019

Wednesday, October 16th 2019

Shortform [Beta]
5Connor_Flexman9h Sometimes people are explaining a mental move, and give some advice on where/how it should feel in a spatial metaphor. For example, they say "if you're doing this right, it should feel like the concept is above your head and you're reaching toward it." I have historically had trouble working well with advice like this, and I don't often see it working well for other people. But I think the solution is that for most people, the spatial or feeling advice is best used as an intermediate/terminal checksum, not as something that is constructive. For example, if you try to imagine feeling their feeling, and then seeing what you could do differently to get there, this will usually not work (if it does work fine, carry on, this isn't meant for you). The best way for most people to use advice like this is to just notice your spatial feeling is much different than theirs, be reminded that you definitely aren't doing the same thing as them, and be motivated to go back and try to understand all the pieces better. You're missing some part of the move or context that is generating their spatial intuition, and you want to investigate the upstream generators, not their downstream spatial feeling itself. (Again, this isn't to say you can't learn tricks for making the spatial intuition constructive, just don't think this is expected of you in the moment.) For explainers of mental moves, this model is also useful to remember. Mental moves that accomplish similar goals in different people will by default involve significantly different moving parts in their minds and microstrategies to get there. 
If you are going to explain spatial intuitions (that most people can't work easily with), you probably want to do one of the following: 1) make sure they are great at working with spatial intuitions, 2) make sure they know it's primarily a checksum, not an instruction, or 3) break down which parts generate that spatial intuition in yourself, so if they don't have it then you can help guide them.

Monday, October 14th 2019

Shortform [Beta]
5hunterglenn2d Litany of Gendlin "What is true is already so. Owning up to it doesn't make it worse. Not being open about it doesn't make it go away. "And because it's true, it is what is there to be interacted with. Anything untrue isn't there to be lived. People can stand what is true, for they are already enduring it." There are a few problems with the litanies, but in this case, it's just embarrassing. We have a straightforward equivocation fallacy here, no frills, no subtle twists. Just unclear thinking. People are already enduring the truth(1), therefore, they can stand what is true(2)? In the first usage, true(1) refers to reality, to the universe. We already live in a universe where some unhappy fact is true. Great. But in the second usage, true(2) refers to a KNOWLEDGE of reality, a knowledge of the unhappy fact. So, if we taboo "true" and replace it with what it means, then the statement becomes: "People are already enduring reality as it is, so they must be able to stand knowing about that reality." Which is nothing but conjecture. Are there facts we should be ignorant of? The litany sounds very sure that there are not. If I accept the litany, then I too am very sure. How can I be so sure, what evidence have I seen? It is true that I can think of times that it is better to face the truth, hard though that might be. But that only proves that some knowledge is better than some ignorance, not that all facts are better to know than not. I can think of a few candidates for truths it might be worse for someone to know. - If someone is on their deathbed, I don't think I'd argue with them about heaven (maybe hell). There are all kinds of sad truths that would seem pointless to tell someone right before they died. Who hates them, who has lied to them, how long they will be remembered, why tell any of it? - If someone is trying to overcome an addiction, I don't feel compelled to scrutinize their crystal healing beliefs. 
- I don't think I'd be doing anyone any favors
2An1lam3d Thing I desperately want: tablet native spaced repetition software that lets me draw flashcards. Cloze deletions are just boxes or hand-drawn occlusions.

Sunday, October 13th 2019

Personal Blogposts
Shortform [Beta]
21ChristianKl4d Elon Musk's Starship might bring us a new x-risk. Dropping a tungsten rod [] that weighs around 12,000 kg from orbit has a destructive potential similar to nuclear weapons. At present launch prices, bringing a tungsten rod that weighs 12,000 kg to orbit comes at an extreme cost for the defense industry, which was estimated to be around $230 million a rod. On the other hand, Starship is designed to be able to carry 100 tons, which equals 8 rods, to space in a single flight, and given that Elon talked about being able to launch Starship 3 times per day at a cost that would allow transporting humans from one place on Earth to another, the launch cost might be less than a million. I found tungsten prices to be around $25/kilo [] for simple products, which suggests a million dollars might be a valid price for one of the rods. When the rods are dropped they hit within 15 minutes, which means that an attacked country has to react faster than it would to nuclear weapons. Having the weapons installed in a satellite creates the additional problem that there's no human in the loop who makes the decision to launch. Any person who succeeds in hacking a satellite with tungsten rods can deploy them.
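A quick sanity check of the arithmetic in the post above, using its own figures. All inputs (rod mass, Starship payload, tungsten price, per-flight launch cost) are the post's assumptions, not verified data:

```python
# Back-of-envelope check of the cost figures in the post above.
# Every constant here is the post's assumption, not a verified number.
ROD_MASS_KG = 12_000          # mass of one tungsten rod
PAYLOAD_KG = 100_000          # assumed Starship payload to orbit (100 tons)
TUNGSTEN_PRICE_PER_KG = 25    # rough price for simple tungsten products, USD
LAUNCH_COST = 1_000_000       # optimistic per-flight launch cost, USD

rods_per_flight = PAYLOAD_KG // ROD_MASS_KG
material_cost_per_rod = ROD_MASS_KG * TUNGSTEN_PRICE_PER_KG
launch_cost_per_rod = LAUNCH_COST / rods_per_flight
total_per_rod = material_cost_per_rod + launch_cost_per_rod

print(rods_per_flight)        # 8 rods per flight
print(material_cost_per_rod)  # 300000 -> $300k of tungsten per rod
print(total_per_rod)          # 425000.0 -> well under a million per rod delivered
```

On these assumptions the delivered cost per rod comes out under half a million dollars, consistent with the post's "a million dollars might be a valid price" estimate.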
3Gurkenglas4d Suppose we considered simulating some human for a while to get a single response. My math heuristics are throwing up the hypothesis that proving what the response would be is morally equivalent to actually running the simulation - it's just another substrate. Thoughts? Implications? References?
2Chris_Leong4d As I wrote before, evidential decision theory [] can be critiqued for failing to deal properly with situations where hidden state is correlated with decisions. EDT includes differences in hidden state as part of the impact of the decision, when in the case of the smoking lesion, we typically want to say that it is not. However, Newcomb's problem also has hidden state that is correlated with your decision. And if we don't want to count this when evaluating decisions in the case of the Smoking Lesion, perhaps we shouldn't count it in the case of Newcomb's? Or is there a distinction? I think I'll try analysing this in terms of the erasure theory of counterfactuals at some point.
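To make "EDT counts the correlated hidden state" concrete, here is a minimal expected-utility calculation for Newcomb's problem. The payoffs and predictor accuracy are the standard illustrative numbers, not anything from the post:

```python
# EDT evaluates E[utility | action], so the predictor's accuracy (the
# correlation between the hidden box contents and your decision) flows
# directly into the expectation. Illustrative numbers only.
ACCURACY = 0.9              # P(predictor correctly anticipated your action)
MILLION, THOUSAND = 1_000_000, 1_000

# One-boxing: with probability ACCURACY the opaque box was filled.
ev_one_box = ACCURACY * MILLION
# Two-boxing: with probability ACCURACY the opaque box is empty.
ev_two_box = ACCURACY * THOUSAND + (1 - ACCURACY) * (MILLION + THOUSAND)

print(ev_one_box)  # -> EDT assigns one-boxing the higher expected value
print(ev_two_box)
```

Replacing the predictor with a smoking-lesion-style common cause leaves this calculation formally unchanged, which is exactly the tension the post is pointing at.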

Saturday, October 12th 2019

Personal Blogposts
1[Event] SSC Meetup 10/12: Minneapolis / St. Paul, 89 Church Street Southeast, Minneapolis, Oct 12th
Shortform [Beta]
4Roaman5d A few months back, I remember hearing Oli talk about an idea for essentially rebasing comment threads into summaries, with links back to the comments that were summarized. Is this happening on LW now? Sounded wicked exciting, and like actually novel UI in the collective intelligence space.
1Roaman5d Some testimonials for Roam **Roam is the productivity tool that I didn't know I needed** **I see it as a productivity map of my brain, showing to me how I organize thoughts in my mind.** It helps me organize thoughts and **reduce the clutter in my head**. This is something that no productivity or organization tool, including Google Drive and Microsoft Office, **has ever offered to me before.** ------------------- The most exciting piece of software I've yet tried... A replacement for the essay... has the potential to be as profound a mental prosthetic as hypertext. []
1Roaman5d I spent a long time at the Double Crux workshop last year talking with folks about why the EA and x-risk community should care about developing better tools for thought. Recently Andy Matuschak and Michael Nielsen wrote up some notes on the space, and why it is such a big deal. The first and last sections of the essay are most relevant to the claims I was making. I took some structured notes on the essay in our public Roam instance here []. You can read the full essay here [] and the section most relevant to that discussion here [].
1Roaman5d We've launched [] for a wider audience. It's similar to Workflowy or Google Docs -- but with many more flexible ways of building structure between ideas and projects. The biggest deal is bi-directional linking (every page or bullet point collects all the links that point to it).

Friday, October 11th 2019

Shortform [Beta]
34DanielFilan5d Hot take: if you think that we'll have at least 30 more years of future where geopolitics and nations are relevant, I think you should pay at least 50% as much attention to India as to China. Similarly large population, similarly large number of great thinkers and researchers. Currently seems less 'interesting', but that sort of thing changes over 30-year timescales. As such, I think there should probably be some number of 'India specialists' in EA policy positions that isn't dwarfed by the number of 'China specialists'.
6mr-hire5d *Virtual Procrastination Coach* For the past few months I've been doing a deep dive into procrastination, trying to find the cognitive strategies that people who have no trouble with procrastination use to overcome it. -------------- This deep dive has involved: * Introspecting on my own cognitive strategies * Reading the self-help literature and mining cognitive strategies * Scouring the scientific literature for reviews and meta-studies related to overcoming procrastination, and mining the cognitive strategies * Interviewing people who have trouble with procrastination, and people who have overcome it, and modelling their cognitive strategies. I then took these ~18 cognitive strategies, split them into 7 lessons, and spent ~50 hours taking people individually through the lessons and seeing what worked, what didn't, and what was missing. This resulted in me doing another round of research, adding a whole new set of cognitive strategies (for a grand total of 25 cognitive strategies taught over the course of 10 lessons), and testing for another ~50 hours to again test these cognitive strategies with 1-on-1 lessons to see what worked for people. ------------------------------------- The first piece of more scalable testing is now ready. I used Spencer Greenberg's [] GuidedTrack tool to create a "virtual coach" for overcoming procrastination. I suspect it won't be very useful without the lessons (I'm writing up a LW sequence with those), but nevertheless am still looking for a few people who haven't taken the lessons to test it out and see if it's helpful. The virtual coach walks you through all the parts of a work session and holds your hand. If you feel unmotivated, indecisive, or overwhelmed, it's there to help. If you feel ambiguity, perfectionism, or fear of failure, it's there
6Ben Pace6d At the SSC Meetup tonight in my house, I was in a group conversation. I asked a stranger if they'd read anything interesting on the new LessWrong in the last 6 months or so (I had not yet mentioned my involvement in the project). He told me about an interesting post about the variance in human intelligence compared to the variance in mice intelligence. I said it was nice to know people read the posts I write [] . The group then had a longer conversation about the question. It was enjoyable to hear strangers tell me about reading my posts.
4Chris_Leong6d Writing has been one of the best things for improving my thinking as it has forced me to solidify my ideas into a form that I've been able to come back to later and critique when I'm less enraptured by them. On the other hand, for some people it might be the worst thing for their thinking as it could force them to solidify their ideas into a form that they'll later feel compelled to defend.
1David Spies5d AI Safety, Anthropomorphizing, and Action Spaces * There's an implicit argument about super-intelligent AI capabilities that I think needs to be stated explicitly: * A super-intelligent AI with access to the real world via whatever channels is going to be smarter than me. Therefore anything I can conceive of doing to satisfy a particular objective (via those same channels), the AI can also conceive of doing. Therefore when producing examples of how things might go bad, I'm allowed to imagine the AI doing anything a human might conceive of. Since I'm only human and thus can only conceive of an AI doing things a human might conceive of, and humans conceive of agents doing things that humans can do, the best I can do is to anthropomorphize the AI and imagine it's just behaving like a very intelligent human. * Everyone is aware how the above argument falls apart when you replace "intelligence" with "values". But I think perhaps we often still end up giving the AI a little too much credit. * I have a super-intelligent oracle which I'm using to play the stock market ("Which stock should I invest all my money in?"). This oracle is able to make HTTP requests to Wikipedia as a way to gather information about the world. Is this dangerous? * People I've talked to seem to think the answer to this is "yes". Off the top of my head, a couple of examples of things the agent might do: * find a zero-day exploit in Wikipedia or in our internet infrastructure and escape onto the web at large to pursue its own unaligned agenda * issue queries which it knows will get flagged and looked at by moderators which contain mind-virus messages incentivizing Wikipedia moderators to come to my house and hold me up at gun-point demanding I let it out of the box * Question: Why doesn't AlphaGo ever try to spell out death

Thursday, October 10th 2019

Shortform [Beta]
14Evan Rysdam7d When you estimate how much mental energy a task will take, you are just as vulnerable to the planning fallacy as when you estimate how much time it will take.
3Brangus7d Here is an idea for a disagreement resolution technique. I think this will work best: * with one other partner you disagree with. * when the beliefs you disagree about are clearly about what the world is like. * when the beliefs you disagree about are mutually exclusive. * when everybody genuinely wants to figure out what is going on. Probably doesn't really require all of those though. The first step is that you both write out your beliefs on a shared workspace. This can be a notebook or a whiteboard or anything like that. Then you each write down your credences next to each of the statements on the workspace. Now, when you want to make a new argument or present a new piece of evidence, you should ask your partner if they have heard it before after you present it. Maybe you should ask them questions about it beforehand to verify that they have not. If they have not heard it before, or had not considered it, you give it a name and write it down between the two propositions. Now you ask your partner how much they changed their credence as a result of the new argument. They write down their new credences below the ones they previously wrote down, and write down the changes next to the argument that just got added to the board. When your partner presents a new argument or piece of evidence, be honest about whether you have heard it before. If you have not, it should change your credence some. How much do you think? Write down your new credence. I don't think you should worry too much about being a consistent Bayesian here or anything like that. Just move your credence a bit for each argument or piece of evidence you have not heard or considered, and move it more for better arguments or stronger evidence. You don't have to commit to the last credence you write down, but you should think at least that the relative sizes of all of the changes were about right. I think this is the core of the technique. I would love to try this. I think it would be
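The bookkeeping in the procedure above (shared workspace, credence history, per-argument changes) can be sketched as a tiny data structure. All names here are my own hypothetical choices, not anything from the post:

```python
# Minimal sketch of the shared-workspace bookkeeping described above.
# Two partners ("A" and "B") track credences in a disputed proposition
# and log how much each new argument moved them.

class SharedWorkspace:
    def __init__(self, proposition, credence_a, credence_b):
        self.proposition = proposition
        self.history = {"A": [credence_a], "B": [credence_b]}
        self.arguments = []  # (name, presenter, delta_a, delta_b)

    def present_argument(self, name, presenter, new_a, new_b):
        """Record a new argument and each partner's updated credence."""
        delta_a = new_a - self.history["A"][-1]
        delta_b = new_b - self.history["B"][-1]
        self.history["A"].append(new_a)
        self.history["B"].append(new_b)
        self.arguments.append((name, presenter, delta_a, delta_b))

ws = SharedWorkspace("It will rain tomorrow", 0.7, 0.3)
ws.present_argument("forecast says 80%", "A", 0.75, 0.5)
print(ws.history["B"])  # [0.3, 0.5]
```

Reviewing the logged deltas at the end is one way to check the post's closing criterion: that the relative sizes of all the changes feel about right.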

Tuesday, October 8th 2019

Shortform [Beta]
27Daniel Kokotajlo8d My baby daughter was born two weeks ago, and in honor of her existence I'm building a list of about 100 technology-related forecasting questions, which will resolve in 5, 10, and 20 years. Questions like "By the time my daughter is 5/10/20 years old, the average US citizen will be able to hail a driverless taxi in most major US cities." (The idea is, tying it to my daughter's age will make it more fun and also increase the likelihood that I actually go back and look at it 10 years later.) I'd love it if the questions were online somewhere so other people could record their answers too. Does this seem like a good idea? Hive mind, I beseech you: Help me spot ways in which this could end badly! On a more positive note, any suggestions for how to do it? Any expressions of interest in making predictions with me? Thanks!
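One lightweight way to structure questions like the one above is a record with resolution dates pinned to the child's age. Everything here is a hypothetical sketch (the birth date and credences are illustrative placeholders, not real data):

```python
from datetime import date

# Hypothetical sketch of one forecasting-question record that resolves
# at ages 5, 10, and 20. The birth date below is illustrative only.
BIRTH = date(2019, 10, 1)

def resolution_dates(birth):
    """Map each resolution age to its calendar date."""
    return {age: birth.replace(year=birth.year + age) for age in (5, 10, 20)}

question = {
    "text": "The average US citizen can hail a driverless taxi "
            "in most major US cities.",
    "resolves": resolution_dates(BIRTH),
    "predictions": {5: 0.2, 10: 0.5, 20: 0.9},  # illustrative credences
}

print(question["resolves"][10])  # 2029-10-01
```

A flat structure like this is easy to dump to a spreadsheet or a prediction site so that other people can record their own credences alongside.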
9Daniel Kokotajlo8d For the past year I've been thinking about the Agent vs. Tool debate (e.g. thanks to reading CAIS/Reframing Superintelligence) and also about embedded agency and mesa-optimizers and all of these topics seem very related now... I keep finding myself attracted to the following argument skeleton: Rule 1: If you want anything unusual to happen, you gotta execute a good plan. Rule 2: If you want a good plan, you gotta have a good planner and a good world-model. Rule 3: If you want a good world-model, you gotta have a good learner and good data. Rule 4: Having good data is itself an unusual happenstance, so by Rule 1 if you want good data you gotta execute a good plan. Putting it all together: Agents are things which have good planner and learner capacities and are hooked up to actuators in some way. Perhaps they also are "seeded" with a decent world-model to start off with. Then, they get a nifty feedback loop going: They make decent plans, which allow them to get decent data, which allows them to get better world-models, which allows them to make better plans and get better data so they can get great world-models and make great plans and... etc. (The best agents will also be improving on their learning and planning algorithms! Humans do this, for example.) Empirical conjecture: Tools suck; agents rock, and that's why. It's also why agenty mesa-optimizers will arise, and it's also why humans with tools will eventually be outcompeted by agent AGI.
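The feedback loop in Rules 1-4 can be sketched as a toy iteration. The scalar "quality" values and the update rule are purely illustrative inventions, there just to show the loop's shape (plans gate data, data improves the model, the model improves plans):

```python
# Toy sketch of the Rule 1-4 feedback loop. Qualities live in [0, 1]
# and the specific update rule is an arbitrary illustrative choice.

def improve(world_model_quality, planner_quality, steps=5):
    """Each cycle: plan with the current model, gather data, refine model."""
    trajectory = [world_model_quality]
    for _ in range(steps):
        plan_quality = min(world_model_quality, planner_quality)  # Rule 2
        data_quality = plan_quality                               # Rule 4
        # Rule 3: the model closes part of its gap, faster with better data.
        world_model_quality += 0.5 * (1.0 - world_model_quality) * data_quality
        trajectory.append(world_model_quality)
    return trajectory

print(improve(0.2, 0.9))  # model quality climbs toward 1.0 each cycle
```

A "tool" in this framing is the same pipeline with the loop cut: it never feeds its outputs back in as new data, which is one way to read the post's conjecture that agents outcompete tools.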
