All of riceissa's Comments + Replies

Writing On The Pareto Frontier

Robert Heaton calls this (or a similar enough idea) the Made-Up-Award Principle.

Eli's shortform feed

Maybe this? (There are a few subthreads on that post that mention linear regression.)

riceissa's Shortform

I think Discord servers based around specific books are an underappreciated form of academic support/community. I have been part of such a Discord server (for Terence Tao's Analysis) for a few years now and have really enjoyed being a part of it.

Each chapter of the book gets two channels: one to discuss the reading material in that chapter, and one to discuss the exercises in that chapter. There are also channels for general discussion, introductions, and a few other things.

Such a Discord server has elements of university courses, Math Stack Exchange, Redd...

Raemon (+2, 8d): This is a pretty cool concept.
How to turn money into AI safety?

I learned about the abundance of available resources this past spring.

I'm curious what this is referring to.

johnswentworth (+5, 25d): There's apparently a lot of funding looking for useful ways to reduce AI X-risk right now.
MIRI/OP exchange about decision theory

Rob, are you able to disclose why people at Open Phil are interested in learning more decision theory? It seems a little far away from the AI strategy reports they've been publishing in recent years, and it also seemed like they were happy to keep funding MIRI (via their Committee for Effective Altruism Support) despite disagreements about the value of HRAD research, so the sudden interest in decision theory is intriguing.

Mostly personal interest on my part (I was working on a blog post on the topic, now up), though I do think that the topic has broader relevance.

Ben Pace (+8, 24d): I was in the chat and don't have anything especially to "disclose". Joe and Nick are both academic philosophers who've studied at Oxford and been at FHI, with a wide range of interests. And Abram and Scott are naturally great people to chat about decision theory with when they're available.
ESRogs (+8, 24d): My (uninformed) (non-)explanation: NB and JC are both philosophers by training, and it's not surprising for philosophers to be interested in decision theory.
Set image dimensions using markdown

I am also running into this problem now with the Markdown editor. I switched over from the new rich editor because that one didn't support footnotes, whereas the Markdown one does. It seems like there is no editor that can both scale images and do footnotes, which is frustrating.

Edit: I ended up going with the rich editor despite broken footnotes since that seemed like the less bad of the two problems.

Rob B's Shortform Feed

Re (a): I looked at chapters 4 and 5 of Superintelligence again, and I can kind of see what you mean, but I'm also frustrated that Bostrom seems really non-committal in the book. He lists a whole bunch of possibilities but then doesn't seem to actually come out and give his mainline visualization/"median future". For example he looks at historical examples of technology races and compares how much lag there was, which seems a lot like the kind of thinking you are doing, but then he also says things like "For example, if human-level AI is delayed because on...

Daniel Kokotajlo (+4, 1mo): I also wish I knew what Bostrom's median future was like, though I perhaps understand why he didn't put it in his book -- the incentives all push against it. Predicting the future is hard and people will hold it against you if you fail, whereas if you never try at all and instead say lots of vague prophecies, people will laud you as a visionary prophet.

Re (b): cool, I think we are on the same page then.

Re takeoff being too fast: I think a lot of people these days think there will be plenty of big scary warning shots and fire alarms that motivate lots of people to care about AI risk and take it seriously. I think that suggests that a lot of people expect a fairly slow takeoff, slower than I think is warranted. Might happen, yes, but I don't think Paul & Katja's arguments are that convincing that takeoff will be this slow. It's a big source of uncertainty for me though.
AGI will drastically increase economies of scale

I was reading parts of Superintelligence recently for something unrelated and noticed that Bostrom makes many of the same points as this post:

If the frontrunner is an AI system, it could have attributes that make it easier for it to expand its capabilities while reducing the rate of diffusion. In human-run organizations, economies of scale are counteracted by bureaucratic inefficiencies and agency problems, including difficulties in keeping trade secrets. These problems would presumably limit the growth of a machine intelligence project so long as it is op

...
Rob B's Shortform Feed

Ok I see, thanks for explaining. I think what's confusing to me is that Eliezer did stop talking about the deep math of intelligence sometime after 2011 and then started talking about big blobs of matrices as you say starting around 2016, but as far as I know he has never gone back to his older AI takeoff writings and been like "actually I don't believe this stuff anymore; I think hard takeoff is actually more likely to be due to EMH failure and natural lag between projects". (He has done similar things for his older writings that he no longer thinks is tr...

Daniel Kokotajlo (+4, 2mo): For all I know you are right about Yudkowsky's pre-2011 view about deep math. However, (a) that wasn't Bostrom's view AFAICT, and (b) I think that's just not what this OP quote is talking about. From the OP: It's Yudkowsky/Bostrom, not Yudkowsky. And it's WFLLp1, not p2. Part 2 is the one where the AIs do a treacherous turn; part 1 is where actually everything is fine except that "you get what you measure" and our dumb obedient AIs are optimizing for the things we told them to optimize for rather than for what we want. I am pretty confident that WFLLp1 is not the main thing we should be worrying about; WFLLp2 is closer, but even it involves this slow-takeoff view (in the strong sense, in which the economy is growing fast before the point of no return) which I've argued against.

I do not think that the reason people shifted from "Yudkowsky/Bostrom" (which in this context seems to mean "single AI project builds AI in the wrong way, AI takes over world") to WFLLp1 is that people rationally considered all the arguments and decided that WFLLp1 was on balance more likely. I think instead that probably some sort of optimism bias was involved, and more importantly win by default (Yud + Bostrom stopped talking about their scenarios and arguing for them, whereas Paul wrote a bunch of detailed posts laying out his scenarios and arguments, and so in the absence of visible counterarguments Paul wins the debate by default).

Part of my feeling about this is that it's a failure on my part; when Paul+Katja wrote their big post on takeoff speeds I disagreed with it and considered writing a big point-by-point response, but never did, even after various people posted questions asking "has there been any serious response to Paul+Katja?"
Rob B's Shortform Feed

Thanks! My understanding of the Bostrom+Yudkowsky takeoff argument goes like this: at some point, some AI team will discover the final piece of deep math needed to create an AGI; they will then combine this final piece with all of the other existing insights and build an AGI, which will quickly gain in capability and take over the world. (You can search "a brain in a box in a basement" on this page or see here for some more quotes.)

In contrast, the scenario you imagine seems to be more like (I'm not very confident I am getting all of this right): there isn...

Daniel Kokotajlo (+6, 2mo): Ah, this is helpful, thanks -- I think we just have different interpretations of Bostrom+Yudkowsky. You've probably been around longer than I have and read more of their stuff, but I first got interested in this around 2013, pre-ordered Superintelligence and read it with keen interest, etc., and the scenario you describe as mine is what I always thought Bostrom+Yudkowsky believed was most likely, while the scenario you describe as theirs -- involving "deep math" and "one hard step at the end" -- is something I thought they held up as an example of how things could be super fast, but not as what they actually believed was most likely.

From what I've read, Yudkowsky did seem to think there would be more insights and less "just make blob of compute bigger" about a decade or two ago, but he's long since updated towards "dear lord, people really are just going to make big blobs of inscrutable matrices, the fools!" and I don't think this counts as a point against his epistemics, because predicting the future is hard and most everyone else around him did even worse, I'd bet.
Rob B's Shortform Feed

Which of the "Reasons to expect fast takeoff" from Paul's post do you find convincing, and what is your argument against what Paul says there? Or do you have some other reasons for expecting a hard takeoff?

I've seen this post of yours, but as far as I know, you haven't said much about hard vs soft takeoff in general.

Daniel Kokotajlo (+9, 2mo): It's a combination of not finding Paul+Katja's counterarguments convincing (AI Impacts has a slightly different version of the post; I think of this as the Paul+Katja post since I don't know how much each of them did), having various other arguments that they didn't consider, and thinking they may be making mistakes in how they frame things and what questions they ask. I originally planned to write a line-by-line rebuttal of the Paul+Katja posts, but instead I ended up writing a sequence of posts that collectively constitute my (indirect) response. If you want a more direct response, I can put it on my list of things to do, haha... sorry... I am a bit overwhelmed...

OK here's maybe some quick (mostly cached) thoughts:

1. What we care about is point of no return, NOT GDP doubling in a year or whatever.
2. PONR seems not particularly correlated with GDP acceleration time or speed, and thus maybe Paul and I are just talking past each other -- he's asking and answering the wrong questions.
3. Slow takeoff means shorter timelines, so if our timelines are independently pretty short, we should update against slow takeoff. My timelines are independently pretty short. (See my other sequence.) Paul runs this argument in the other direction I think; since takeoff will be slow, and we aren't seeing the beginnings of it now, timelines must be long. (I don't know how heavily he leans on this argument though, probably not much. Ajeya does this too, and does it too much I think.)

Also, concretely, if crazy AI stuff happens in <10 years, probably the EMH has failed in this domain and probably we can get AI by just scaling up stuff and therefore probably takeoff will be fairly fast (at least, it seems that way extrapolating from GPT-1, GPT-2, and GPT-3. One year apart, significantly qualitatively and quantitatively better. If that's what progress looks like when we are entering the "human range" then we will cross it quickly, it seem...
riceissa's Shortform

(I have only given this a little thought, so wouldn't be surprised if it is totally wrong. I'm curious to hear what people think.)

I've known about deductive vs inductive reasoning for a long time, but only recently heard about abductive reasoning. It now occurs to me that what we call "Solomonoff induction" might better be called "Solomonoff abduction". From SEP:

It suggests that the best way to distinguish between induction and abduction is this: both are ampliative, meaning that the conclusion goes beyond what is (logically) contained in the premises (whi

...
Jimrandomh's Shortform

What alternatives to "split-and-linearly-aggregate" do you have in mind? Or are you just identifying this step as problematic without having any concrete alternative in mind?

Raj Thimmiah's Shortform

There is a map on the community page. (You might need to change something in your user settings to be able to see it.)

You Can Now Embed Flashcard Quizzes in Your LessWrong posts!

I'm curious why you decided to make an entirely new platform (Thought Saver) rather than using Andy's Orbit platform.

spencerg (+5, 5mo): A few thoughts on this:

  • I don't think Orbit existed when we started working on Thought Saver (or if it did, we didn't know about it). When Quantum Country came out I wasn't aware (not sure if anyone was aware?) at the time that it was part of a larger project (maybe it wasn't yet at that point?)
  • Thought Saver has a different focus than Orbit (e.g., Thought Saver is not focussed on embedded essays - it just seems that way because the LessWrong team asked if we could add that feature since they wanted to try embedded flashcard quizzes). Thought Saver is more fundamentally focussed on being a tool for accelerating your own learning.
  • Also, as far as I'm aware Orbit can't be used in a self-serve fashion right now (?)
  • I'll add that Andy is awesome, I deeply respect his work, and I consider him the world expert on the topic of writing essays with memory elements (and on other topics related to spaced repetition as well). I'm excited to see what he does with Orbit!
Using Flashcards for Deliberate Practice

Messaging sounds good to start with (I find calls exhausting so only want to do it when I feel it adds a lot of value).

Using Flashcards for Deliberate Practice

Ah ok cool. I've been doing something similar for the past few years and this post is somewhat similar to the approach I've been using for reviewing math, so I was curious how it was working out for you.

AllAmericanBreakfast (+2, 5mo): Thanks for contributing a data point that it's a tenable longish-term strategy :) Any chance you'd be willing to do a chat/zoom/call to talk about your study strategies so I can bounce my thoughts off you?
Using Flashcards for Deliberate Practice

Have you actually tried this approach, and if so for how long and how has it worked?

AllAmericanBreakfast (+2, 5mo): Yes, but I'm just starting to use it. For context, I'm in the midst of figuring out a deep overhaul of how I approach scholarship. This includes how to self-teach, practice and retain information deeply and long-term, apply knowledge, build relationships around ideas, focus, strategize, and prioritize. To get feedback and create a record of my ideas as I go along, I'm doing lots of quick blog posts about them. But I take it from your comment that I should probably make this more clear up front :)
Progressive Highlighting: Picking What To Make Into Flashcards

So there's a need for an intermediate stage between creating an extract and creating a flashcard. This need is what progressive highlighting seeks to address.

I haven't actually done incremental reading in SuperMemo so I'm not sure about this, but I believe extract processing is meant to be recursive: first you extract a larger portion of the text that seems relevant; then, when you encounter the extract again, it is treated like an original article, so you might extract just a single sentence; then, when you encounter that sentence again, you might make a cloze deletion or Q&A card.

AllAmericanBreakfast (+2, 6mo): I guess the "funnel model" of reading doesn't seem quite right to me somehow? Like, I want to actually read all the words. I think they're probably almost all important at first for getting the meaning across to me. Not to mention the larger sections - of course I want to read those. It's just that I know that without a kind of swallow-it-whole approach to reading that is akin to extremely massed practice, I will really struggle to remember the bits, much less tie them together into a cohesive, synthesized form of larger meaning.

So their model seems to be a progressive approach to narrowing down and re-reading that eventually arrives at making flashcards, which then are used to maintain knowledge over the long term. My model is more like a shallow goldfish read that quickly arrives at flashcards, crystallizes the important bits of knowledge, and then proceeds to re-reading in order to arrive at fuller understanding. I think the swap is in the order of flashcards vs. re-reading - they recommend re-reading before making flash-cards, while I recommend making and practicing with flash-cards before re-reading.
Progressive Highlighting: Picking What To Make Into Flashcards

This sounds a lot like (a subset of) incremental reading. Instead of highlighting, one creates "extracts" and reviews those extracts over time to see if any of them can be turned into flashcards. As you suggest, there is no pressure to immediately turn things into flashcards on a first-pass of the reading material. These two articles about incremental reading emphasize this point. A quote from the first of these:

Initially, you make extracts because “Well it seems important”. Yet to what degree (the number of clozes/Q&As) and in what formats (cloze

...
AllAmericanBreakfast (+2, 6mo): I'll have to look more into this; I'm sure that this task has been thought through to a greater depth than mine. At first glance, it seems like their pre-flashcard unit, the extract, is longer than mine. This may be because I'm working from a textbook, and it's all relevant enough to the course that everything "seems important" and is thus worthy of making an extract. So there's a need for an intermediate stage between creating an extract and creating a flashcard. This need is what progressive highlighting seeks to address. Thanks for sharing, I think this will be a great resource for my task analysis of scholarly reading.
riceissa's Shortform

Does life extension (without other technological progress to make the world in general safer) lead to more cautious lifestyles? The longer the expected years left, the more value there is in just staying alive compared to taking risks. Since death would mean missing out on all the positive experiences for the rest of one's life, I think an expected value calculation would show that even a small risk is not worth taking. Does this mean all risks that don't get magically fixed due to life extension (for example, activities like riding a motorcycle or drivin...

Gerald Monroe (+1, 6mo): There are two factors here. Suppose there's a life extension treatment that resets someone to age 20. It's readily available to most first-world residents, with the usual methods of rationing (wait lists for years in European countries, the usual insurance scam in the USA). A rational human would, yes, buy space in a bunker and do all of their work remotely. There would be many variations of commercially available bunkers and security products, and the recent pandemic has shown that many high-value jobs can be worked remotely.

However, the life extension treatment doesn't change the 'human firmware'. Novel experiences and mates will still remain highly pleasurable. Staying in the bunker and experiencing life via screens will likely cause various problems, ameliorated to some degree with artificial means (VR headsets, etc). So there will be flocks of humans who keep taking risks, and they will do the majority of the dying. I think I read the average lifespan would still be about 3000 years, which seems like a large improvement over the present situation.

In addition, this would probably be just a temporary state of affairs ('a dreary few centuries'). Neural backups, remote bodies, deep-dive VR - there are many technologies that would make it practical to go out in the world safely. And there would be a survival advantage for those humans who have the neurological traits to be able to survive the bunker years.

But, yes, I also think that society would slowly push for cleaning up many of the risks we consider 'acceptable' now. Cars, guns that are not smart and can be fired accidentally, air pollution, electrical wiring and gas plumbing - we have a ton of infrastructure and devices where the risk is small... over short present human lifespans. Everything would need to be a lot safer if we had expectations of thousands of years otherwise.
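The expected-value intuition in the parent comment can be sketched numerically. This is a deliberately simplistic model with made-up numbers (a single activity adding a constant annual death risk), purely to illustrate why the same small risk looks much worse over a longer remaining lifespan:

```python
# Illustrative sketch, all numbers invented: expected life-years lost
# from an activity that adds a fixed annual death risk, under different
# amounts of remaining lifespan.

def expected_years_lost(annual_death_risk: float, years_remaining: int) -> float:
    """Expected life-years lost versus a risk-free baseline."""
    p_alive = 1.0
    expected_years_lived = 0.0
    for _ in range(years_remaining):
        expected_years_lived += p_alive   # credit this year if still alive
        p_alive *= 1.0 - annual_death_risk
    return years_remaining - expected_years_lived

# The same small risk costs far more expected life-years when there is
# more life left to lose.
print(expected_years_lost(0.0001, 60))    # roughly 0.18 years
print(expected_years_lost(0.0001, 3000))  # roughly 408 years
```

So under this toy model a risk that costs a normal-lifespan person a couple of months in expectation costs a life-extended person centuries, which is the asymmetry the comment is pointing at.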
Asymmetric Weapons

I like this tag! I think the current version of the page is missing the insight that influence gained via asymmetric weapons/institutions is restricted/inflexible, i.e. an asymmetric weapon not only helps out only the "good guys" but also constrains the "good guys" into only being able to do "good things". See this comment by Carl Shulman. (I might eventually come back to edit this in, but I don't have the time right now.)

Yoav Ravid (+2, 6mo): Oh, yes, that's a good point!
John Vervaeke

The EA Forum wiki has stubs for a bunch of people, including a somewhat detailed article on Carl Shulman. I wonder if you feel similarly unexcited about the articles there (if so, it seems good to discuss this with people working on the EA wiki as well), or if you have different policies for the two wikis.

habryka (+2, 7mo): Yep, I feel similarly, though overall I think the EA Forum is pursuing a cultural strategy that is somewhat different from ours, which makes it a bit less costly, but not much. I have generally been open about various cultural concerns I've had about the EA Forum when talking to CEA.
Probability vs Likelihood

Ah ok, that makes sense. Thanks for clarifying!

Open & Welcome Thread – February 2021

It seems to already be on LW.

Edit: oops, looks like the essay was posted on LW in response to this comment.

Rationality Verification

I'm unable to apply this tag to posts (this tag doesn't show up when I search to add a tag).

Ruby (+4, 7mo): I have now converted this page to also be a tag, and not just a wiki page. It should work now.
Learn Bayes Nets!

For people who find this post in the future, Abram discussed several of the points in the bullet-point list above in Probability vs Likelihood.

Probability vs Likelihood

Regarding base-rate neglect, I've noticed that in some situations my mind seems to automatically do the correct thing. For example if a car alarm or fire alarm goes off, I don't think "someone is stealing the car" or "there's a fire". L(theft|alarm) is high, but P(theft|alarm) is low, and my mind seems to naturally know this difference. So I suspect something more is going on here than just confusing probability and likelihood, though that may be part of the answer.
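The alarm example can be made concrete with a toy Bayes calculation. All the numbers below are invented for illustration; the point is just that a high likelihood L(theft|alarm) = P(alarm|theft) coexists with a low posterior P(theft|alarm) because the base rate of theft is tiny:

```python
# Toy numbers (made up) for the car-alarm example.
p_alarm_given_theft = 0.95      # likelihood of the theft hypothesis: high
p_alarm_given_no_theft = 0.01   # false alarms: wind, bumps, glitches
p_theft = 0.0001                # base rate: thefts are rare

# Total probability of hearing an alarm, then Bayes' rule.
p_alarm = (p_alarm_given_theft * p_theft
           + p_alarm_given_no_theft * (1 - p_theft))
p_theft_given_alarm = p_alarm_given_theft * p_theft / p_alarm

print(p_theft_given_alarm)  # about 0.009 -- under 1%, despite the 0.95 likelihood
```

The "correct thing" the mind does here is effectively multiplying the likelihood by the tiny prior rather than reading the likelihood as the answer.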

Probability vs Likelihood

I understood all of the other examples, but this one confused me:

A scenario is likely if it explains the data well. For example, many conspiracy theories are very likely because they have an answer for every question: a powerful group is conspiring to cover up the truth, meaning that the evidence we see is exactly what they'd want us to see.

If the conspiracy theory really was very likely, then we should be updating on this to have a higher posterior probability on the conspiracy theory. But in almost all cases we don't actually believe the conspiracy theor...

abramdemski (+4, 7mo): Ah, yeah, I agree with your story. Before the data comes in, the conspiracy theorist may not have a lot of predictions, or may have a lot of wrong predictions. After the data comes in, though, the conspiracy theorist will have all sorts of stories about why the data fits perfectly with their theory.

My intention in what you quote was to consider the conspiracy theory in its fullness, after it's been all fleshed out. This is usually the version of conspiracy theories I see. That second version of the theory will be very likely, but have a very low prior probability. And when someone finds a conspiracy theory like that convincing, part of what's going on may be that they confuse likelihood and probability. "It all makes sense! All the details fit!" Whereas the original conspiracy theorist is making a very different kind of mistake.
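The point both sides of this thread converge on, that a fleshed-out conspiracy theory can be very likely yet have a very low posterior, comes out directly in a toy Bayes calculation (all numbers invented for illustration):

```python
# High likelihood + tiny prior => still a tiny posterior.
prior = {"conspiracy": 1e-6, "mundane": 1 - 1e-6}
likelihood = {"conspiracy": 0.99,  # the theory has an answer for everything
              "mundane": 0.30}     # the boring explanation fits imperfectly

# Bayes' rule: posterior proportional to prior * likelihood, then normalize.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in unnormalized}

print(posterior["conspiracy"])  # about 3e-6: tiny, despite the higher likelihood
```

"It all makes sense!" is a statement about the likelihood row; the posterior is still dominated by the prior.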
John Vervaeke

Thanks, I like your rewrite and will post questions instead in the future.

I think I understand your concerns and agree with most of them. One thing that does still feel "off" to me is: given that there seems to be a lot of in-person-only discussions about "cutting edge" ideas and "inside scoop" like things (that trickle out via venues like Twitter and random Facebook threads, and only much later get written up as blog posts), how can people who primarily interact with the community online (such as me) keep up with this? I don't want to have to pay attention ...

mike_hawke (+1, 5mo): I was pretty happy when Ben Pace turned Eliezer's Facebook AMA into a LW post; I might like to see more stuff like that. However, I feel like wiki pages ought to be durable and newcomer-friendly, and therefore must necessarily lag the cutting edge.
riceissa (+2, 7mo): I also just encountered Flashcards for your soul.
Judgment Day: Insights from 'Judgment in Managerial Decision Making'

For those reading this thread in the future, Alex has now adopted a more structured approach to reviewing the math he has learned.

TurnTrout (+4, 8mo): I think I would have done well to think more carefully about the benefits of implementing a review system, when you posted this comment. It may have been true that I was getting by with my current study setup, but also accumulating "math recall debt" over time. I think I underestimated the importance of knowing lots of math facts. For example, I initially replied: This underestimates the cost of picking it back up, which is linear for each fact in the dependencies I forgot, versus just maintaining the dependencies over time. For example, when I'm studying quantum mechanics, it seems crazy to say "who cares about self-adjointness", but not that crazy to say "who cares about self-adjointness" if I'm just doing reinforcement learning theory.

I gestured at another useful habit, but one which only works on skills I use regularly: I couldn't use this technique to help myself remember what a correlated equilibrium is, because I don't use concepts which build on 'correlated equilibria' very frequently. I understood correlated equilibria at one point, but I don't recall anymore, and that makes me sad. Now I'll go back and put that chain of insights into Anki; if I'd been doing this from the beginning, I wouldn't have to do that.
The new Editor

Thanks, that worked and I was able to fix the rest of the images.

The new Editor

I just tried doing this in a post, and while the images look fine in the editor, they come out huge once the post is published. Any ideas on what I can do to fix this? (I don't see any option in the editor to resize the images, and I'm scared of converting the post to markdown.)

Raemon (+2, 8mo): I just fixed some of them (left a couple more for you to fiddle around with). This is pretty silly and we should fix it at some point, but the issue is that until you've manually fiddled with the size of the images slightly, they default to max size. You can resize them by clicking the corner of the image and dragging.
John Vervaeke

Some thoughts in response:

  • I agree that it's better to focus on ideas instead of people. I might have a disagreement about how successfully LessWrong has managed this, so that from your perspective it looks like this page is pushing the status quo toward something we don't want vs looking from my perspective like it's just doing things more explicitly/transparently (which I prefer).
  • I agree that writing about people can be dicey. I might have disagreement about how well this problem can be avoided.
  • Maybe I'm misunderstanding what you mean by "defensible style
...
habryka (+8, 8mo): Thanks for these thoughts, and sorry for the delay in responding. I think one thing that feels particularly off about this article is the degree to which it really doesn't tell me why I should care if I don't know who Anna Salamon or Tyler Alterman or Alex Ray are. Like, the citations themselves are just references to other people. I would really dislike a wiki that is full of articles like this, and would feel really alienated if I was a new user showing up, since it really feels like it expects me to know who all of these people are, and why I should care about their opinion (Tyler isn't even an active LessWrong user, and in general I want to really avoid forcing all LessWrong users to become familiar with all the in-person rationality community details and drama and social network).

I do think this can be relatively easily fixed by even just adding a very rough summary of his ideas and perspectives to the article, and adding metadata about the degree to which people have talked about his work and thoughts. I think for now, the action I would actually recommend is asking a top-level question, which in some sense gives a place for a bunch of people to voice their thoughts and opinions (though it might not get much engagement), and then using the answers to that question as a basic scaffold to fill in this article while being more idea-focused.
John Vervaeke

Currently the wiki has basically no entries for people (we have one for Eliezer, but none for Scott Alexander or Lukeprog for example)

There do seem to be stubs for both Scott Alexander and Lukeprog, both similar in size to this Vervaeke page. So I think I'm confused about what the status quo is vs what you are saying the status quo is.

habryka (+1, 8mo): Huh, I notice they don't show up in search, probably because they are marked as stubs, or something (somewhat surprised by this). I think we imported them to not break old links, but I am not sure whether I would want to have them actually show up in search and in other wiki-tag lists (both in their current state, but even in a more fully completed state).
John Vervaeke

I'm not sure what cluster you are trying to point to by saying "wiki pages like this".

For this page in particular: I've been hearing more and more about Vervaeke, so I wanted to find out what the community has already figured out about him. It seems like the answer so far is "not much", but as the situation changes I'm excited to have some canonical place where this information can be written up. He seems like an interesting enough guy, or at any rate he seems to have caught the attention of other interesting people, and that seems like a good enough reaso...

habryka (+9, 8mo): The key thing is something like "do we want to have lots of content organized by people?". Currently the wiki has basically no entries for people (we have one for Eliezer, but none for Scott Alexander or Lukeprog, for example), and while I haven't thought super much about this question, the status quo seems at least mildly good to me, because there is both an ethos of the site of focusing on ideas instead of people, and because writing about people is often kind of dicey and will push the whole wiki towards a much more defensible style.

I would have very little objection to having an article like this that tries to summarize some of his core ideas, or focuses on some kind of intellectual culture around him, but it feels like if we have this article, then we open up a hole of writing hundreds of articles about everyone who is vaguely related to rationality stuff.
Wiki-Tag FAQ

I tried creating a wiki-tag page today, and here are some questions I ran into that don't seem to be answered by this FAQ:

  • Is there a way to add wiki-links like on the old wiki? I tried using the [[double square brackets]] like on MediaWiki, but this did not work (at least on the new editor).
  • Is there a way to quickly see if a wiki-tag page on a topic already exists? On the creation page, typing something in the box does not show existing pages with that substring. What I'm doing right now is to look on the all tags page (searching with my browser) and also
...
Ruby (+4, 8mo): I seem to have gotten a submit error when I tried to post these a couple of hours ago. Here's my attempt to answer your questions. Thanks to Raemon for also answering.

Good questions! Not currently. I expect we'll implement a new autocomplete/auto-link on existing pages relatively soon, but this won't automatically create new pages if they don't exist.

The best way would be with LessWrong's overall search bar (top right of the page). That has a Wiki & Tags section that'll cover all pages. Sorry for not having search clearer or better (I plan to put a prominent search bar on the Concept page); it's a significant shortcoming right now.

Unfortunately not currently, but we get a lot of questions about footnotes and references, so I'm personally voting on the team that we actually implement something good here.

Thanks for asking, sorry we haven't got all the things fleshed out yet, but thanks for your interest!
4 · Raemon · 8mo — I'm not 100% sure what you mean, but I think this just involves creating a regular LessWrong link the same way you would in any other LessWrong post, comment, or tag. Ah, this isn't currently possible on the create-tag page, but it is possible when adding tags to an individual post: while on a post page, you can click "add tag", type words into the search box, and see a dropdown of tags sharing those words. If you end up not finding an appropriate tag, you can then click the "create tag" button (which doesn't automatically autofill the tag name into the form, but the words you typed will still be very accessible and it'll be reasonably easy to create the tag there). Unfortunately this isn't that useful for creating tags that you don't want to apply to a particular post. I agree we should add a similar dropdown to the create-tag page. I don't think we currently handle references in any particular way. If a reference is relevant, add a link to it in the description. (I think it works best if you include a paragraph explaining why it's relevant, but simply adding a link at the bottom is better than not doing that.)
The Wiki is Dead, Long Live the Wiki! [help wanted]

The Slack invite link seems to have expired. Is there a new one I can use?

2 · Raemon · 8mo — Here's a new one. I'm not sure how to make a long-term one.
Matt Goldenberg's Short Form Feed

That makes sense, thanks for clarifying. What I've seen most often on LessWrong is to come up with reasons for preferring simple interpretations in the course of trying to solve other philosophical problems such as anthropics, the problem of induction, and infinite ethics. For example, if we try to explain why our world seems to be simple we might end up with something like UDASSA or Scott Garrabrant's idea of preferring simple worlds (this section is also relevant). Once we have something like UDASSA, we can say that joke interpretations do not have much ...

The LessWrong 2019 Review

Thanks! That does make me feel a bit better about the annual reviews.

4 · Raemon · 10mo — One of the pernicious issues with word-dilution is that often when people try to use a word to mean things, they're... kinda meaning those things "aspirationally." Where, yes, part of my original goal with the Review absolutely included research debt. But indeed there's a decent chance it won't succeed at that goal. (But I do intend to put a fair amount of optimization pressure towards making it succeed.)
The LessWrong 2019 Review

I see, that wasn't clear from the post. In that case I am wondering if the 2018 review caused anyone to write better explanations or rewrite the existing posts. (It seems like the LessWrong 2018 Book just included the original posts without much rewriting, at least based on scanning the table of contents.)

6 · Raemon · 10mo — At least 3 people substantially rewrote their posts in the 2018 review, and my hope is that over time it becomes pretty common for there to be substantial rewriting (albeit, two of those people were LessWrong team members). But for what it's worth, here's the diff between the original version of my own post and the current version I wrote as a result of the review.
The LessWrong 2019 Review

This is a minor point, but I am somewhat worried that the idea of research debt/research distillation seems to be getting diluted over time. The original article (which this post links to) says:

Distillation is also hard. It’s tempting to think of explaining an idea as just putting a layer of polish on it, but good explanations often involve transforming the idea. This kind of refinement of an idea can take just as much effort and deep understanding as the initial discovery.

I think the kind of cleanup and polish that is encouraged by the review process ...

6 · abramdemski · 10mo — Personally, I believe I understood "research debt" in the strong way upon first reading (I hadn't encountered the term before, but the post included a definition), but was then immediately struck by the inadequacy of the review process to address the problem. Granted, it's a move in the right direction.
5 · habryka · 10mo — Yes, sorry. The concrete mechanism by which I hope to address research debt is not the editing of essays, but the identification of essays that have good ideas and bad presentation, and encouraging other authors to write better new explanations for them, as well as something more like thorough rewrites of existing posts.
Introduction to Cartesian Frames

So the existence of this interface implies that A is “weaker” in a sense than A’.

Should that say B instead of A', or have I misunderstood? (I haven't read most of the sequence.)

2 · rohinmshah · 10mo — It should, good catch, thanks!
Matt Goldenberg's Short Form Feed

Have you seen Brian Tomasik's page about this? If so what do you find unconvincing, and if not what do you think of it?

4 · Matt Goldenberg · 10mo — He seems to be trying to formalize the intuition about what types of computational consciousness we already intuitively give moral weight to, but the very thing I'm worried about is that our intuitions are wrong (in the same way that our intuitions about physics don't hold when we think about environments much bigger or smaller than our own). That is, if the computational consciousness theory is true, and computations with higher complexity feel just as much pain and pleasure and dreams and goals etc. as things we normally define as conscious, why should we lower their moral weight?
Daniel Kokotajlo's Shortform

Would this work across different countries (and if so how)? It seems like if one country implemented such a tax, the research groups in that country would be out-competed by research groups in other countries without such a tax (which seems worse than the status quo, since now the first AGI is likely to be created in a country that didn't try to slow down AI progress or "level the playing field").

4 · Daniel Kokotajlo · 10mo — Yeah, probably not. It would need to be an international agreement, I guess. But this is true for lots of proposals. On the bright side, you could maybe tax the chip manufacturers instead of the AI projects? Idk. Maybe one way it could be avoided is if it came packaged with loads of extra funding for safe AGI research, so that overall it is still cheapest to work from the US.
Embedded Interactive Predictions on LessWrong

Is there a way to see all the users who predicted within a single "bucket" using the LW UI? Right now when I hover over a bucket, it will show all users if the number of users is small enough, but it will show a small number of users followed by "..." if the number of users is too large. I'd like to be able to see all the users. (I know I can find the corresponding prediction on the Elicit website, but this is cumbersome.)

2 · habryka · 10mo — Alas, not currently. It's the very next obvious thing to do, but I haven't gotten around to it.
Open & Welcome Thread – November 2020

Ok. Since visiting your office hours is somewhat costly for me, I was trying to gather more information (about e.g. what kind of moral uncertainty or prior discussion you had in mind, why you decided to capitalize the term, whether this is something I might disagree with you on and might want to discuss further) to make the decision.

More generally, I've attended two LW Zoom events so far, both times because I felt excited about the topics discussed, and both times felt like I didn't learn anything/would have preferred the info to just be a text dump so I could skim and move on. So I am feeling like I should be more confident that I will find an event useful now before attending.

2 · habryka · 10mo — Yeah, that's pretty reasonable. We'll see whether I get around to typing up my thoughts on this, but I'm not sure whether I ever will.