All Posts

Sorted by Magic (New & Upvoted)

Monday, October 14th 2019

Shortform [Beta]
1An1lam3h Thing I desperately want: tablet-native spaced repetition software that lets me draw flashcards. Cloze deletions are just boxes or hand-drawn occlusions.

Sunday, October 13th 2019

Shortform [Beta]
19ChristianKl1d Elon Musk's Starship might bring us a new x-risk. Dropping a tungsten rod [http://www.spacedaily.com/reports/US_Project_Thor_would_fire_tungsten_poles_at_targets_from_outer_space_999.html] that weighs around 12,000 kg from orbit has a destructive potential similar to nuclear weapons. At present launch prices, bringing a 12,000 kg tungsten rod to orbit comes at an extreme cost for the defense industry, estimated at around $230 million per rod. Starship, on the other hand, is designed to carry 100 tons, which equals 8 rods, to space in a single flight, and given that Elon has talked about launching Starship 3 times per day at a cost that would allow transporting humans from one place on Earth to another, the launch cost might be less than a million dollars. I found tungsten prices to be around $25/kg [https://www.tungsten.com/tips/tungsten-and-costs/] for simple products, which suggests a million dollars might be a valid price for one of the rods. When the rods are dropped they hit within 15 minutes, which means that an attacked country has to react faster than it would to nuclear weapons. Having the weapons installed in a satellite creates the additional problem that there's no human in the loop who makes the decision to launch. Any person who succeeds in hacking a satellite with tungsten rods can deploy them.
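A rough back-of-the-envelope sketch of the economics in the post above, using only the post's own figures (none independently verified):

```python
# Figures quoted in the post above; none are independently verified.
ROD_MASS_KG = 12_000            # mass of one tungsten rod
STARSHIP_PAYLOAD_KG = 100_000   # the post's "100 tons" Starship payload
TUNGSTEN_USD_PER_KG = 25        # quoted price for simple tungsten products
PRESENT_COST_PER_ROD = 230e6    # quoted cost per rod at present launch prices

rods_per_launch = STARSHIP_PAYLOAD_KG // ROD_MASS_KG
material_cost_per_rod = ROD_MASS_KG * TUNGSTEN_USD_PER_KG

print(f"Rods per Starship launch: {rods_per_launch}")                 # 8
print(f"Tungsten material cost per rod: ${material_cost_per_rod:,}")  # $300,000
print(f"Present cost per rod is {PRESENT_COST_PER_ROD / material_cost_per_rod:,.0f}x the raw material cost")
```

At $300,000 of raw tungsten per rod, the post's guess of roughly a million dollars per finished rod (material plus fabrication and launch) looks at least order-of-magnitude plausible.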
3Gurkenglas2d Suppose we considered simulating some human for a while to get a single response. My math heuristics are throwing up the hypothesis that proving what the response would be is morally equivalent to actually running the simulation - it's just another substrate. Thoughts? Implications? References?
2Chris_Leong1d As I wrote before, evidential decision theory [https://www.lesswrong.com/posts/SbAofYCgKkaXReDy4/chris_leong-s-shortform#yKRZgXjt3qvzpWQEr] can be critiqued for failing to deal properly with situations where hidden state is correlated with decisions. EDT counts differences in hidden state as part of the impact of the decision, when in the case of the smoking lesion we typically want to say that it is not. However, Newcomb's problem also has hidden state that is correlated with your decision. And if we don't want to count this when evaluating decisions in the case of the smoking lesion, perhaps we shouldn't count it in the case of Newcomb's? Or is there a distinction? I think I'll try analysing this in terms of the erasure theory of counterfactuals at some point.
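A minimal numeric sketch of the smoking-lesion point above (the numbers are illustrative, not from the post): the lesion causes both a desire to smoke and cancer, while smoking itself is harmless, yet because EDT conditions on the action, the hidden state leaks into the expected value:

```python
# Toy smoking-lesion numbers (illustrative, not from the post). The lesion
# causes both a desire to smoke and cancer; smoking itself causes no harm.
P_LESION = 0.1
P_SMOKE_GIVEN_LESION = 0.9
P_SMOKE_GIVEN_NO_LESION = 0.1
U_SMOKING, U_CANCER = 10, -100  # cancer occurs iff lesion, for simplicity

def p_lesion_given(smoke: bool) -> float:
    """Posterior credence in the lesion after conditioning on the action."""
    p_act_lesion = P_SMOKE_GIVEN_LESION if smoke else 1 - P_SMOKE_GIVEN_LESION
    p_act_no_lesion = P_SMOKE_GIVEN_NO_LESION if smoke else 1 - P_SMOKE_GIVEN_NO_LESION
    p_act = P_LESION * p_act_lesion + (1 - P_LESION) * p_act_no_lesion
    return P_LESION * p_act_lesion / p_act

for smoke in (True, False):
    ev = (U_SMOKING if smoke else 0) + U_CANCER * p_lesion_given(smoke)
    print(f"EDT value of {'smoking' if smoke else 'abstaining'}: {ev:+.2f}")
# Prints -40.00 for smoking and -1.22 for abstaining: EDT recommends abstaining
# even though smoking causes no harm, because the hidden state gets counted as
# part of the decision's impact.
```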

Saturday, October 12th 2019

Personal Blogposts
1[Event]SSC Meetup 10/12: Minneapolis / St. Paul, 89 Church Street Southeast, Minneapolis, Oct 12th
Shortform [Beta]
4Roaman3d A few months back, I remember hearing Oli talk about an idea for essentially rebasing comment threads into summaries, with links back to the comments that were summarized. Is this happening on LW now? Sounded wicked exciting, and like actually novel UI in the collective intelligence space.
1Roaman3d Some testimonials for Roam **Roam is the productivity tool that I didn't know I needed** **I see it as a productivity map of my brain, showing to me how I organize thoughts in my mind.** It helps me organize thoughts and **reduce the clutter in my head**. This is something that no productivity or organization tool, including Google Drive and Microsoft Office, **has ever offered to me before.** ------------------- The most exciting piece of software I've yet tried... A replacement for the essay... has the potential to be as profound a mental prosthetic as hypertext. https://roamresearch.com/#/v8/help/page/9jAzaU0PN [https://roamresearch.com/#/v8/help/page/9jAzaU0PN]
1Roaman3d I spent a long time at the Double Crux workshop last year talking with folks about why the EA and x-risk community should care about developing better tools for thought. Recently Andy Matuschak and Michael Nielsen wrote up some notes on the space and why it is such a big deal. The first and last sections of the essay are most relevant to the claims I was making. I took some structured notes on the essay in our public Roam instance here: https://roamresearch.com/#/v8/help/page/J9ZMhYbkP [https://roamresearch.com/#/v8/help/page/J9ZMhYbkP] You can read the full essay here: https://numinous.productions/ttft/#top [https://numinous.productions/ttft/#top] and the section most relevant to that discussion here: https://numinous.productions/ttft/#why-not-more-work [https://numinous.productions/ttft/#why-not-more-work]
1Roaman3d We've launched https://RoamResearch.com [https://RoamResearch.com] for a wider audience. It's similar to Workflowy or Google Docs -- but with many more flexible ways of building structure between ideas and projects. The biggest deal is bi-directional linking (every page or bullet point collects all the links that point to it).
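A minimal sketch of what bi-directional linking amounts to as a data structure (my own illustration, not Roam's actual implementation):

```python
from collections import defaultdict

# page -> pages it links to, and page -> pages that link to it
links = defaultdict(set)
backlinks = defaultdict(set)

def add_link(src: str, dst: str) -> None:
    """Record a [[dst]] reference inside src; dst collects the backlink."""
    links[src].add(dst)
    backlinks[dst].add(src)

add_link("Daily Notes", "Spaced Repetition")
add_link("Project Ideas", "Spaced Repetition")
print(backlinks["Spaced Repetition"])  # {'Daily Notes', 'Project Ideas'}
```

The point is that the target page never has to be told about its references: maintaining the reverse index on every link creation is what lets each page display everything that points to it.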

Friday, October 11th 2019

Shortform [Beta]
25DanielFilan3d Hot take: if you think that we'll have at least 30 more years of future where geopolitics and nations are relevant, I think you should pay at least 50% as much attention to India as to China. Similarly large population, similarly large number of great thinkers and researchers. Currently seems less 'interesting', but that sort of thing changes over 30-year timescales. As such, I think there should probably be some number of 'India specialists' in EA policy positions that isn't dwarfed by the number of 'China specialists'.
6mr-hire3d *Virtual Procrastination Coach* For the past few months I've been doing a deep dive into procrastination, trying to find the cognitive strategies that people who have no trouble with procrastination use to overcome it. -------------- This deep dive has involved: * Introspecting on my own cognitive strategies * Reading the self-help literature and mining cognitive strategies * Scouring the scientific literature for reviews and meta-studies related to overcoming procrastination, and mining the cognitive strategies * Interviewing people who have trouble with procrastination, and people who have overcome it, and modelling their cognitive strategies. I then took these ~18 cognitive strategies, split them into 7 lessons, and spent ~50 hours taking people individually through the lessons and seeing what worked, what didn't, and what was missing. This resulted in me doing another round of research, adding a whole new set of cognitive strategies (for a grand total of 25 cognitive strategies taught over the course of 10 lessons), and testing for another round of ~50 hours to again test these cognitive strategies with 1-on-1 lessons to see what worked for people. ------------------------------------- The first piece of more scalable testing is now ready. I used Spencer Greenberg [https://www.facebook.com/spencer.greenberg?__tn__=%2CdK-R-R&eid=ARAnlWUH_zvuap2bJlmGynkZjs3a6DFNrzkgAQD3P_gxFsGvgTxbI_eHz0o9swWyr5oYa95hXNUtBI_j&fref=mentions] 's GuidedTrack tool to create a "virtual coach" for overcoming procrastination. I suspect it won't be very useful without the lessons (I'm writing up a LW sequence with those), but nevertheless am still looking for a few people who haven't taken the lessons to test it out and see if it's helpful. The virtual coach walks you through all the parts of a work session and holds your hand. If you feel unmotivated, indecisive, or overwhelmed, it's there to help. If you feel ambiguity, perfectionism, or fear of failure, it's there to help.
6Ben Pace3d At the SSC Meetup tonight in my house, I was in a group conversation. I asked a stranger if they'd read anything interesting on the new LessWrong in the last 6 months or so (I had not yet mentioned my involvement in the project). He told me about an interesting post about the variance in human intelligence compared to the variance in mice intelligence. I said it was nice to know people read the posts I write [https://www.lesswrong.com/posts/QqHhr9anrSnZRHCxf/why-so-much-variance-in-human-intelligence] . The group then had a longer conversation about the question. It was enjoyable to hear strangers tell me about reading my posts.
4Chris_Leong3d Writing has been one of the best things for improving my thinking as it has forced me to solidify my ideas into a form that I've been able to come back to later and critique when I'm less enraptured by them. On the other hand, for some people it might be the worst thing for their thinking as it could force them to solidify their ideas into a form that they'll later feel compelled to defend.
1David Spies3d AI Safety, Anthropomorphizing, and Action Spaces * There's an implicit argument about super-intelligent AI capabilities that I think needs to be stated explicitly: * A super-intelligent AI with access to the real world via whatever channels is going to be smarter than me. Therefore anything I can conceive of doing to satisfy a particular objective (via those same channels), the AI can also conceive of doing. Therefore when producing examples of how things might go bad, I'm allowed to imagine the AI doing anything a human might conceive of. Since I'm only human and thus can only conceive of an AI doing things a human might conceive of, and humans conceive of agents doing things that humans can do, the best I can do is to anthropomorphize the AI and imagine it's just behaving like a very intelligent human. * Everyone is aware how the above argument falls apart when you replace "intelligence" with "values". But I think perhaps we often still end up giving the AI a little too much credit. * I have a super-intelligent oracle which I'm using to play the stock market ("Which stock should I invest all my money in?"). This oracle is able to make HTTP requests to Wikipedia as a way to gather information about the world. Is this dangerous? * People I've talked to seem to think the answer to this is "yes". Off the top of my head, a couple examples of things the agent might do: * find a zero-day exploit in Wikipedia or in our internet infrastructure and escape onto the web at large to pursue its own unaligned agenda * issue queries which it knows will get flagged and looked at by moderators which contain mind-virus messages incentivizing Wikipedia moderators to come to my house and hold me up at gun-point demanding I let it out of the box * Question: Why doesn't AlphaGo ever try to spell out death

Thursday, October 10th 2019

Shortform [Beta]
14Evan Rysdam5d When you estimate how much mental energy a task will take, you are just as vulnerable to the planning fallacy as when you estimate how much time it will take.
3Brangus4d Here is an idea for a disagreement resolution technique. I think this will work best: * with one other partner you disagree with * when the beliefs you disagree about are clearly about what the world is like * when the beliefs you disagree about are mutually exclusive * when everybody genuinely wants to figure out what is going on. Probably doesn't really require all of those though. The first step is that you both write out your beliefs on a shared workspace. This can be a notebook or a whiteboard or anything like that. Then you each write down your credences next to each of the statements on the workspace. Now, when you want to make a new argument or present a new piece of evidence, you should ask your partner if they have heard it before after you present it. Maybe you should ask them questions about it beforehand to verify that they have not. If they have not heard it before, or had not considered it, you give it a name and write it down between the two propositions. Now you ask your partner how much they changed their credence as a result of the new argument. They write down their new credences below the ones they previously wrote down, and write down the changes next to the argument that just got added to the board. When your partner presents a new argument or piece of evidence, be honest about whether you have heard it before. If you have not, it should change your credence some. How much do you think? Write down your new credence. I don't think you should worry too much about being a consistent Bayesian here or anything like that. Just move your credence a bit for each argument or piece of evidence you have not heard or considered, and move it more for better arguments or stronger evidence. You don't have to commit to the last credence you write down, but you should think at least that the relative sizes of all of the changes were about right. I think this is the core of the technique. I would love to try this. I think it would be
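A minimal sketch of the shared workspace described above as a data structure (the names and structure are illustrative, not from the post):

```python
from dataclasses import dataclass, field

@dataclass
class DisagreementLedger:
    """Shared record of a disagreement: credences plus a log of argument-driven shifts."""
    proposition: str
    credences: dict = field(default_factory=dict)  # person -> current credence
    log: list = field(default_factory=list)        # (argument, person, change)

    def set_credence(self, person: str, credence: float) -> None:
        self.credences[person] = credence

    def record_argument(self, name: str, person: str, new_credence: float) -> None:
        """After hearing a new-to-you argument, write down the shift it caused."""
        delta = round(new_credence - self.credences[person], 3)
        self.log.append((name, person, delta))
        self.credences[person] = new_credence

ledger = DisagreementLedger("Proposition A is true")
ledger.set_credence("me", 0.8)
ledger.set_credence("partner", 0.3)
ledger.record_argument("new evidence X", "me", 0.7)
print(ledger.credences)  # {'me': 0.7, 'partner': 0.3}
print(ledger.log)        # [('new evidence X', 'me', -0.1)]
```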

Tuesday, October 8th 2019

Shortform [Beta]
19Daniel Kokotajlo6d My baby daughter was born two weeks ago, and in honor of her existence I'm building a list of about 100 technology-related forecasting questions, which will resolve in 5, 10, and 20 years. Questions like "By the time my daughter is 5/10/20 years old, the average US citizen will be able to hail a driverless taxi in most major US cities." (The idea is, tying it to my daughter's age will make it more fun and also increase the likelihood that I actually go back and look at it 10 years later.) I'd love it if the questions were online somewhere so other people could record their answers too. Does this seem like a good idea? Hive mind, I beseech you: Help me spot ways in which this could end badly! On a more positive note, any suggestions for how to do it? Any expressions of interest in making predictions with me? Thanks!
9Daniel Kokotajlo6d For the past year I've been thinking about the Agent vs. Tool debate (e.g. thanks to reading CAIS/Reframing Superintelligence) and also about embedded agency and mesa-optimizers and all of these topics seem very related now... I keep finding myself attracted to the following argument skeleton: Rule 1: If you want anything unusual to happen, you gotta execute a good plan. Rule 2: If you want a good plan, you gotta have a good planner and a good world-model. Rule 3: If you want a good world-model, you gotta have a good learner and good data. Rule 4: Having good data is itself an unusual happenstance, so by Rule 1 if you want good data you gotta execute a good plan. Putting it all together: Agents are things which have good planner and learner capacities and are hooked up to actuators in some way. Perhaps they also are "seeded" with a decent world-model to start off with. Then, they get a nifty feedback loop going: They make decent plans, which allow them to get decent data, which allows them to get better world-models, which allows them to make better plans and get better data so they can get great world-models and make great plans and... etc. (The best agents will also be improving on their learning and planning algorithms! Humans do this, for example.) Empirical conjecture: Tools suck; agents rock, and that's why. It's also why agenty mesa-optimizers will arise, and it's also why humans with tools will eventually be outcompeted by agent AGI.

Sunday, October 6th 2019

Shortform [Beta]
14Vaniver8d People's stated moral beliefs are often gradient estimates instead of object-level point estimates. This makes sense if arguments from those beliefs are pulls on the group epistemology, and not if those beliefs are guides for individual action. Saying "humans are a blight on the planet" would mean something closer to "we should be more environmentalist on the margin" instead of "all things considered, humans should be removed." You can probably imagine how this can be disorienting, and there's a meta issue: the point-estimate view can see what it's doing in a way that the gradient view might not.

Saturday, October 5th 2019
