All of KatjaGrace's Comments + Replies

Coherence arguments imply a force for goal-directed behavior

I wrote an AI Impacts page summary of the situation as I understand it. If anyone feels like looking, I'm interested in corrections/suggestions (either here or in the AI Impacts feedback box).  

rohinmshah (8d, +4): Looks good to me :)
Coherence arguments imply a force for goal-directed behavior

A few quick thoughts on reasons for confusion:

I think maybe one thing going on is that I already took the coherence arguments to apply only in getting you from weakly having goals to strongly having goals, so since you were arguing against their applicability, I thought you were talking about the step from weaker to stronger goal direction. (I’m not sure what arguments people use to get from 1 to 2 though, so maybe you are right that it is also something to do with coherence, at least implicitly.)

It also seems natural to think of ‘weakly has goals’ as some... (read more)

rohinmshah (18d, +4): Thanks, that's helpful. I'll think about how to clarify this in the original post.
Coherence arguments imply a force for goal-directed behavior

Thanks. Let me check if I understand you correctly:

You think I take the original argument to be arguing from ‘has goals' to ‘has goals’, essentially, and agree that that holds, but don’t find it very interesting/relevant.

What you disagree with is an argument from ‘anything smart’ to ‘has goals’, which seems to be what is needed for the AI risk argument to apply to any superintelligent agent.

Is that right?

If so, I think it’s helpful to distinguish between ‘weakly has goals’ and ‘strongly has goals’:

  1. Weakly has goals: ‘has some sort of drive toward something,
... (read more)

Yes, that's basically right.

You think I take the original argument to be arguing from ‘has goals' to ‘has goals’, essentially, and agree that that holds, but don’t find it very interesting/relevant.

Well, I do think it is an interesting/relevant argument (because as you say it explains how you get from "weakly has goals" to "strongly has goals"). I just wanted to correct the misconception about what I was arguing against, and I wanted to highlight the "intelligent" --> "weakly has goals" step as a relatively weak step in our current arguments. (In my ori... (read more)

Animal faces

Good points. Though I claim that I do hold the same facial expression for long periods sometimes, if that's what you mean by 'not moving'. In particular, sometimes it is very hard for me not to screw up my face in a kind of disgusted frown, especially if it is morning. And sometimes I grin for so long that my face hurts, and I still can't stop.

KatjaGrace (1mo, +4): (LessWrong version here: https://www.lesswrong.com/posts/JJxxoRPMMvWEYBDpc/why-does-applied-divinity-studies-think-ea-hasn-t-grown)
Tentative covid surface risk estimates

It doesn't seem that hard to wash your hands after putting away groceries, say. If I recall, I was not imagining getting many touches during such a trip. I'm mostly imagining that you put many of the groceries you purchase in your fridge or eat them within a couple of days, such that they are still fairly contaminated if they started out contaminated, and it is harder to not touch your face whenever you are eating recently acquired or cold food.

Wordtune review

Yes - I like 'application' over 'potentially useful product' and 'my more refined writing skills' over 'my more honed writing', in its first one, for instance.

Neck abacus

I grab the string and/or some beads I don't want to move together between my thumb and finger on one hand, and push the bead I do want to move with my thumb and finger of the other hand. (I don't need to see it because I can feel it, and the beads don't move just from my touching them.) I can also do it more awkwardly with one hand.

Neck abacus

Thanks for further varieties! I hadn't seen the ring, and have had such a clicker but have not got the hang of using it non-awkwardly (where do you put it? With your keys? Who knows where those are? In your pocket? Who reliably has a pocket that fits things in? In your bag? Then you have to dig it out..)

Good point regarding wanting to know what number you have reached. I only want to know the exact number very occasionally, like with a bank account, but I agree that's not true of many use cases.

Unpopularity of efficiency

I haven't read Zvi's post, but would have thought that the good of slack can be cashed out in efficiency, if you are optimizing for the right goals (e.g. if you have a bunch of tasks in life which contribute to various things, it will turn out that you contribute to those things better overall if you have spare time between the tasks).  

If you aren't in the business of optimizing for the ultimately right goals though, I'd think you could also include slack as one of your instrumental goals, and thus mostly avoid serious conflict e.g. instead of turnin... (read more)

Slider (2mo, +6): There is probably a process to burn slack to get efficiency and to use efficiency to create slack. I am somewhat skeptical that just having spare time would make overall time spent less. The way I would imagine a slack approach working out for greater efficiency would be like having a drinking pause and conversing with a buddy who gives a tip about the cookie making that makes it go more smoothly. I would also think that instead of using only half the time, a slack approach would be to just bake cookies to achieve mastery better by being deliberate and slow instead of setting tighter and higher bars. Like pausing to wonder about the philosophy of baking.

I also have this analogy about anticipating things going wrong. In a military setting, having a reserve can be used to address when a default operation goes wrong. An optimization-focused mind might think that they need to assign the minimum number of soldiers to get each task done so that the reserve is as big as it can get, so it can more forcefully address problems. But overassigning soldiers to tasks makes each of them less likely to fail. So being slack about it could mean that you want a reserve so that there is flexibility if the main plan goes awry, but you want the main plan to be flexible enough that it doesn't turn brittle immediately on the first hiccups.

If it was important that the cookies are made (big party or something) I would probably do them slowly rather than doing them quickly and then idling. For big things the difference of being able to withstand 0, 1 or 2 catastrophes is pretty big. If you do it quickly and have some probability of having to do it from scratch again, there is some probability of spending double time on it. So one approach to increasing the expectancy of success would be to buff up the reliability rather than the number of shots. If you can aim that one bullseye it doesn't matter how many arrows per second you can shoot at the target. One could think of this as bullseyes per arrow in efficiency terms. But one c
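
A minimal worked version of the "reliability versus number of shots" trade-off sketched above (the retry-until-success model and the factor of two are my illustrative assumptions, not the commenter's):

```latex
% Fast-but-unreliable: each attempt takes time t and succeeds with probability q;
% retrying from scratch until success gives expected total time
\mathbb{E}[T_{\text{fast}}] = \frac{t}{q}

% Slow-but-reliable: one deliberate attempt taking 2t with near-certain success:
\mathbb{E}[T_{\text{careful}}] \approx 2t

% The fast approach wins in expectation only when q > 1/2:
\frac{t}{q} < 2t \iff q > \tfrac{1}{2}
```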
Li’l pots

Thanks. What kind of gloves do you suggest?

hamnox (3mo, +2): Latex or nitrile, like you'd find at a hospital. Most grocery stores will sell reusable rubbery gloves for cleaning, but those are usually too thick and oversized to get the fine motor control I want when cooking.
Blog plant

I actually know very little about my plants at present, so cannot help you.

Blog plant

It is irrigation actually, not moisture sensors. Or rather, I think it irrigates based on the level of moisture, using a combination of tiny tubes and clay spikes that I admittedly don't fully understand. (It seems to be much better at watering my plants than I am, even ignoring time costs!) I do have to fill up the water container sometimes.

What technologies could cause world GDP doubling times to be <8 years?

I meant: conditional on it growing faster, why expect this to be attributable to a small number of technologies, given that when it accelerated previously it was not like that (if I understand)?

Daniel Kokotajlo (4mo, +3): Another, somewhat different reply occurs to me: Plausibly the reason why growth rates have been roughly steady for the past sixty years or so is that world population growth has slowed down (thanks to education, birth control, etc.). So on this view, there's technological growth and there's population growth, and combined they equal GWP growth, and combined they've been on a hyperbolic trajectory for most of history but recently are merely on an exponential trajectory thanks to faltering population growth.

If this story is right, then in order for GWP growth to accelerate again, we either need to boost population growth, or boost technological growth even more than usual, i.e. even more than was the case in previous periods of GWP acceleration like the industrial revolution or the agricultural revolution or, well, literally any period. So, I'd conclude that it's unlikely for GWP growth to accelerate again, absent specific reasons to think this time will be different. AGI is one such reason. The other answers people are giving are other reasons (though I don't find them plausible.)
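
A minimal sketch of the hyperbolic-versus-exponential contrast invoked here (the functional forms and the exponent a are a standard illustration, not something from the comment):

```latex
% Exponential growth: a constant proportional growth rate, no finite-time blow-up.
\frac{dY}{dt} = g\,Y \quad\Rightarrow\quad Y(t) = Y_0\, e^{g t}

% Hyperbolic growth: the growth rate itself rises with scale (a > 0),
% so the solution reaches infinity at a finite time t^*.
\frac{dY}{dt} = g\,Y^{1+a} \quad\Rightarrow\quad Y(t) = \left(Y_0^{-a} - a g t\right)^{-1/a},
\qquad t^{*} = \frac{1}{a\, g\, Y_0^{\,a}}
```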
Daniel Kokotajlo (4mo, +1): Ah, OK. Good point. I think when it accelerated previously, it was the result of a small number of technologies, so long as we are careful to define our technologies broadly enough. For example, we can say the acceleration due to the agricultural revolution was due to agriculture + a few other things maybe. And we can say the acceleration due to the industrial revolution was due to engines + international trade + mass-production methods + scientific institutions + capitalist institutions + a few other things I'm forgetting. I'm looking for something similar here; e.g. Abram's answer "We automate everything, but without using AGI" is acceptable to me, even though it's only a single technology if we define our tech extremely broadly.
What technologies could cause world GDP doubling times to be <8 years?

If throughout most of history growth rates have been gradually increasing, I don't follow why you would expect one technology to cause it to grow much faster, if it goes back to accelerating.

Daniel Kokotajlo (4mo, +2): I currently don't expect it to grow much faster, at least not until we have AGI. Is your question why I think AGI would make it grow much faster? Roughly, my answer is "Because singularity." But if you think not even AGI would make it grow much faster -- which in this context means >9% per year -- then that's all the more reason to think "Economy doubles in 4 years before the first 1-year doubling" is a bad metric for what we care about. (To clarify though, I don't intend for the question to be only answerable by single technologies. Answers can list several technologies, e.g. all the ones on my list.)
Why are delicious biscuits obscure?

They are meant to be chewy, not crumbly.

Why are delicious biscuits obscure?

Making them tastier, though not confident about this - originally motivated by not having normal flour, and then have done some of each, and thought the gluten free ones were better, but much randomness at play. 

I did mean 'white' by 'wheat'; sorry (I am a foreigner). I haven't tried anything other than the gluten free one mentioned and white wheat flour.

Automated intelligence is not AI

>Someone's cognitive labor went into making the rabbit mold, and everything from there on out is eliminating the need to repeat that labor, and to reduce the number of people who need to have that knowledge.

Yeah, that's the kind of thing I had in mind in the last paragraph.

My dad got stung by a bee, and is mildly allergic. What are the tradeoffs involved in deciding whether to have him go to the emergency room?

In such a case, you might get many of the benefits without the covid risks from driving to very close to the ER, then hanging out there and not going in and risking infection unless worse symptoms develop, but being able to act very fast if they do.

Soft takeoff can still lead to decisive strategic advantage

1) Even if it counts as a DSA, I claim that it is not very interesting in the context of AI. DSAs of something already almost as large as the world are commonplace. For instance, in the extreme, the world minus any particular person could take over the world if they wanted to. The concern with AI is that an initially tiny entity might take over the world.

2) My important point is rather that your '30 year' number is specific to the starting size of the thing, and not just a general number for getting a DSA. In particular, it does not apply to sma... (read more)

Daniel Kokotajlo (1y, +1): I like your point #2; I should think more about how the 30 year number changes with size. Obviously it's smaller for bigger entities and bigger for smaller entities, but how much? E.g. if we teleported 2020 Estonia back into 1920, would it be able to take over the world? Probably. What about 1970 though? Less clear.

Military power isn't what I'm getting at either, at least not if measured in the way that would result in AI companies having little of it. Cortez had, maybe, 1/10,000th of the military power of Mexico when he got started. At least if you measure in ways like "What would happen if X fought Y." Probably 1/10,000th of Mexico's military could have defeated Cortez' initial band.

If we try to model Cortez' takeover as him having more of some metric than all of Mexico had, then presumably Spain had several orders of magnitude more of that metric than Cortez did, and Western Europe as a whole had at least an order of magnitude more than that. So Western Europe had *many* orders of magnitude more of this stuff, whatever it is, than Mexico, even though Mexico had a similar population and GDP. So they must have been growing much faster than Mexico for quite some time to build up such a lead--and this was before the industrial revolution!

More generally, this metric that is used for predicting takeovers seems to be the sort of thing that can grow and/or shrink orders of magnitude very quickly, as illustrated by the various cases throughout history of small groups from backwater regions taking over rich empires. (Warning: I'm pulling these claims out of my ass, I'm not a historian, I might be totally wrong. I should look up these numbers.)
Matthew Barnett (1y, +3): This is a concern with AI, but why is it the concern? If e.g. the United States could take over the world because they had some AI-enabled growth, why would that not be a big deal? I'm imagining you saying, "It's not unique to AI", but why does it need to be unique? If AI is the root cause of something on the order of Britain colonizing the world in the 19th century, this still seems like it could be concerning if there weren't any good governing principles established beforehand.
Soft takeoff can still lead to decisive strategic advantage

The time it takes to get a DSA by growing bigger depends on how big you are to begin with. If I understand, you take your 30 years from considering the largest countries, which are not far from being the size of the world, and then use it when talking about AI projects that are much smaller (e.g. a billion dollars a year suggests about 1/100,000 of the world). If you start from a situation of an AI project being three doublings from taking over the world say, then most of the question of how it came to have a DSA seems to be the question of how it grew the... (read more)
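
To make the dependence on starting size concrete, a minimal back-of-the-envelope sketch (the ~$100T gross world product figure and the exact doubling counts are illustrative assumptions, not from the post):

```latex
% Doublings needed to grow from a share s of the world economy to the whole of it:
n = \log_2\!\left(\frac{1}{s}\right)

% Near-world-scale start (s \approx 1/8): \; n = 3 \text{ doublings}.
% A \$1\text{B/yr} project against \sim\$100\text{T/yr} gross world product:
% s \approx 10^{9}/10^{14} = 10^{-5}, \; n = \log_2(10^{5}) \approx 16.6 \text{ doublings}.
```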

Daniel Kokotajlo (1y, +1): I was thinking of an initially large country growing fast via AI, yes. Still counts; it is soft takeoff leading to DSA. However I am also making much stronger claims than that--I think it could happen with a corporation or rogue AGI. I don't think annual income is at all a good measure of how close an entity is to taking over the world. When Cortez landed in Mexico he had less than 1/100,000th of the income, population, etc. of the region, yet he ruled the whole place three years later. Then a few years after that Pizarro repeated the feat in Peru, good evidence that it wasn't just an amazing streak of luck.
LW For External Comments?

This sounds great to me, and I think I would be likely to sign up for it if I could, but I haven't thought about it for more than a few minutes, am particularly unsure about the implications for culture, and am maybe too enthusiastic in general for things being 'well organized'.

agai (1y, -1): I don't think so. :)
Pieces of time

Oh yeah, I think I get something similar when my sleep schedule gets very out of whack, or for some reason when I moved into my new house in January, though it went back to normal with time. (Potentially relevant features there: bedroom didn't seem very separated from common areas, at first was sleeping on a pile of yoga mats instead of a bed, didn't get out much.)

jacobjacob's Shortform Feed

I think random objects might work in a similar way. e.g. if talking in a restaurant, you grab the ketchup bottle and the salt to represent your point. I've only experimented with this once, with ultimately quite an elaborate set of condiments, tableware and fries involved. It seemed to make things more memorable and followable, but I wasn't much inclined to do it more for some reason. Possibly at that scale it was a lot of effort beyond the conversation.

Things I see around me sometimes get involved in my thoughts in a way that seems related. For ... (read more)

Realistic thought experiments

No, never heard of it, that I know of.

Berkeley: being other people

I'm pretty unsure how much variation in experience there is—'not much' seems plausible to me, but why do you find it so probable?

Moloch in whom I sit alone

I also thought that at first, and wanted to focus on why people join groups that are already large. But yeah, lack of very small groups to join would entirely explain that. Leaving a group signaling not liking the conversation seems like a big factor from my perspective, but I'd guess I'm unusually bothered by that.

Another random friction:

  • If you just sit alone, you don't get to choose the second person who joins you. I think a thing people often do rather than sitting alone is wander alone, and grab someone else also wandering, or have plausible deniability that they might be actually walking somewhere, if they want to avoid being grabbed. This means both parties get some choice.
Moloch in whom I sit alone

Aw, thanks. However I claim that this was a party with very high interesting people density, and that the most obvious difference between me and others was that I ever sat alone.

Epistemic Spot Check: The Dorito Effect (Mark Schatzker)

I share something like this experience (food desirability varies a lot based on unknown factors and something is desirable for maybe a week and then not desirable for months) but haven't checked carefully that it is about nutrient levels in particular. If you have, I'd be curious to hear more about how.

(My main alternative hypothesis regarding my own experience is that it is basically imaginary, so you might just have a better sense than me of which things are imaginary..)

Epistemic Spot Check: The Dorito Effect (Mark Schatzker)

A page number or something for the 'more seasoned' link might be useful. The document is very long and doesn't appear to contain 'season-'.

The 'blander' link doesn't look like it supports the claim much, though I am only looking at the abstract. It says that 'in many instances' there have been reductions in crop flavor, but even this appears to be background that the author is assuming, rather than a claim that the paper is about. If the rest of the paper does contain more evidence on this, could you quote it or something, since the paper is expensive to see?

Elizabeth (3y, +7): Re: seasoning. Page 19: "Miscellaneous foods including spices generally increased from 10 pounds per capita in 1909 to 13 pounds per capita in 2000. Spices were not added to the food supply until 1918. The use of spices increased more than fivefold from one-half pound per capita in 1918 to 2.59 pounds per capita in 2000 (data not shown)." Have contacted you out of band with a copy of the paper, which does indeed go into more detail than the abstract.
Reframing misaligned AGI's: well-intentioned non-neurotypical assistants

>I am somewhat hesitant to share simple intuition pumps about important topics, in case those intuition pumps are misleading.

This sounds wrong to me. Do you expect considering such things freely to be misleading on net? I expect some intuition pumps to be misleading, but for considering all of the intuitions that we can find about a situation to be better than avoiding them.

Ben Pace (3y, +6): I feel like there are often big simplifications of complex ideas that just convey the wrong thing, and I was vaguely worried that in a field primarily dominated by things that are hard-to-read, things that are easy to understand will dominate the conversation even if they're pretty misguided. It's not a big worry for me here, but it was the biggest hesitation I had.
Raemon (3y, +6): Not sure what Ben meant, but my own take is "sharing is fine, but intuition pumps without rigor backing them are not something we should curate regularly as an exemplar of what LW is trying to be"
Will AI See Sudden Progress?

Thanks for your thoughts!

I don't quite follow you on the intelligence explosion issue. For instance, why does a strong argument against the intelligence explosion hypothesis need to show that a feedback loop is unlikely? Couldn't we believe that it is likely, but not likely to be very rapid for a while? For instance, there is probably a feedback loop in intelligence already, where humans with better thoughts and equipment are effectively smarter, and can then devise better thoughts and equipment. But this has been true for a while, and is a fairly slow process (at least for now, relative to our ability to deal with things).

Charlie Steiner (3y, +1): Yeah, upon rereading that response, I think I created a few non sequiturs in revision. I'm not even 100% sure what I meant by some bits.

I think the arguments that now seem confusing were me saying that by putting an intelligence feedback loop in the reference class of "feedback loops in general" and then using that to forecast low impact, the thing that is doing most of the work is simply how low impact most stuff is. A nuclear bomb (or a raindrop forming, or tipping back a little too far in your chair) can be modeled as a feedback loop through several orders of magnitude of power output, and then eventually that model breaks down and the explosion dissipates, and the world might be a little scarred and radioactive, but it is overall not much different. But if your AI increased by several orders of magnitude in intelligence (let's just pretend that's meaningful for a second), I would expect that to be a much bigger deal, just because the thing that's increasing is different. That is, I was thinking that the implicit model used by the reference class argument from the original link seems to predict local advantages in AI, but predict *against* those local advantages being important to the world at large, which I think is putting the most weight on the weakest link.

Part of this picture I had comes from what I'm imagining as prototypical reference class members - note that I only imagined self-sustaining feedback, not "subcritical" feedback. In retrospect, this seems to be begging the question somewhat - subcritical feedback speeds up progress, but doesn't necessarily concentrate it, unless there is some specific threshold effect for getting that feedback. Another feature of my prototypes was that they're out-of-equilibrium rather than in-equilibrium (an example of feedback in equilibrium is global warming, where there's lots of feedback effects but they're more or less canceling each other out), but this seems justified. I would agree that one can imagine som
Making yourself small

My example for high status/small was an esteemed teacher unexpectedly dropping in to see their student perform, and entering silently and at the last minute, then standing quietly at the back of the room by the door.

Helen (3y, +2): Totally.
Person-moment affecting views

I also think they are probably wrong, but this kind of argument is a substantial part of why. So I want to see if they can be rescued from it, since that would affect their probability of being right from my perspective.

Do you think there are more compelling arguments that they are wrong, such that we need not consider ones like this? (Also just curious)

Jan_Kulveit (3y, +2): I think this is a quite devastating analysis for them, even if you would take a "person" to be a well-defined object. See for example The Person-Affecting Restriction, Comparativism, and the Moral Status of Potential People by Gustaf Arrhenius [https://pdfs.semanticscholar.org/c64c/9c5429386e809701bb7555ae871a2e0564e5.pdf], fig. 2 and the related argument. Basically, you have 3 worlds (A;B;C), with populations ([x,y];[y,z];[z,x]). You set the welfare of the populations such that ... So you would have to sacrifice transitivity to "rescue" PAW.

Another argument may be from physics: according to the many-worlds interpretation of QM, there exists a world where I was not born, because some high-energy particle damaged a part of the DNA necessary for my birth. Hence, for each person there exists a world where he does not exist. Taken ad absurdum, nobody has moral value.
Dagon (3y, +2): I'm not sure if "wrong", "incoherent", or just "incomplete", but this is one major hole in strict person-affecting views. When comparing two future universes, are you disallowed from having a preference if NEITHER of them contains any entity (or consciousness-path or whatever you say is "person" across time) from the current universe? 200 years from now has ZERO person-overlap with now. Does that mean nothing matters?
Multidimensional signaling

>Katja: do people infer that taste and wealth go together?

My weak guess is yes, but not sure.

ESRogs (3y, +2): I meant to be paraphrasing you, not asking you a question :P
Multidimensional signaling

I don't follow why you think this dynamic exists because wealth and taste are correlated. I think the dynamic I am describing is independent of that, and caused by it being very hard to find a signal of taste, say, that you cannot buy with other resources at least somewhat. If in fact taste was anticorrelated with wealth in terms of underlying characteristics, a wealthy person could still buy other people's tasteful guidance, for instance.

There's No Fire Alarm for Artificial General Intelligence

Scott's understanding of the survey is correct. They were asked about four occupations (with three probability-by-year, or year-reaching-probability numbers for each), then for an occupation that they thought would be fully automated especially late, and the timing of that, then all occupations. (In general, survey details can be found at https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/)

Gnostic Rationality

"It's not enough to know about the Way and how to walk it; you need gnosis of walking."

Could I have a less metaphorical example of what people need gnosis of for rationality? I'm imagining you are thinking of e.g. what it is like to carry out changing your mind in a real situation, or what it looks like to fit knowing why you believe things into your usual sequences of mental motions, but I'm not sure.

G Gordon Worley III (4y, +2): Yep, sounds like you got it. It's like when you quit grad school because you realize you were only staying for sunk costs, or start exercising because you believe in its benefits, and you don't have to go through the motion of explicitly figuring this out and then willing yourself into doing it. You knew it, maybe you double check your work to make sure the dark, unobserved processes of your brain didn't make a mistake, and then you just do it because it's the most natural thing in the world, like taking a sip of water when you're thirsty.
Gnostic Rationality

So a gnostically rational person with low epistemic rationality cannot figure things out by reasoning, yet experiences being rational nonetheless? Could you say more about what you mean by 'rational' here? Is it something like frequently having good judgment?

G Gordon Worley III (4y, +3): Mmmm, these aren't orthogonal dimensions within rationality. We wouldn't call a person who happened to win all the time because they made the right choices without being able to explain why a rationalist; we'd probably just say they are wise or have good judgement. By "rational" and "rationality" I want to point at the same thing Eli(ezer) does, which he also called the "winning Way". It's something like "the ability to take action that you are happy with" although I'd probably describe it in technical terms as "axiologically aligned intention".

Rationality is an almost inherently epistemic notion because such alignment requires logical reasoning to judge, and in fact understanding how this works thoroughly rigorously seems to be the core of the AI safety problem. Thus even if someone could be accidentally rational without being a rationalist, this is something that is only interesting to those with sufficient epistemic rationality to assess it, and thus there's not really a strong sense in which you can have someone with lots of gnosis of rationality who doesn't also have episteme of it because they wouldn't know rationality in any sense well enough to have gnosis of it.
For signaling? (Part I)

I wasn't thinking of one of them as the opponent really, but it is inspired by an amalgam of all the casual conversation about signaling I have ever had. For some reason I feel like there is sort of a canonical platonic conversation about signaling, and all of the real conversations are short extracts from it. So I started out trying to write it down. It doesn't seem very canonical in the end, but I figured it might be interesting anyway.

Impression track records

In my terminology, 'impression' is your own sense of what seems true before taking into account other people's views (unless another person's view actually changes your own sense) and 'belief' is what you would actually bet on, given that you are not vastly more reliable than everyone with different impressions.

For example, perhaps my friend is starting a project, and based on talking to her about it a bit I feel like it is stupid and will never work. But several other friends who work on similar projects are really excited ab... (read more)
I Want To Live In A Baugruppe

Interested in things like this, presently have a partial version that is good.

I Want To Live In A Baugruppe

In my experience this has been less of a problem than you might expect: our landlord likes us because we are reasonable and friendly and only destroy parts of the house when we want to make renovations with our own money and so on. So they would prefer more of us to many other candidates. And since we would also prefer they have more of us, we can make sure our landlord and more of us are in contact.

I Want To Live In A Baugruppe

I and friends have, but pretty newly; there are currently two houses two doors apart, and more friends in the process of moving into a third three doors down. I have found this good so far, and expect to continue to for now, though I agree it might be unstable long term. As an aside, there is something nice about being able to wander down the street and visit one's neighbors, that all living in one house doesn't capture.

Superintelligence 29: Crunch time

Bostrom quotes a colleague saying that a Fields medal indicates two things: that the recipient was capable of accomplishing something important, and that he didn't. Should potential Fields medalists move into AI safety research?

diegocaleiro (6y, 0): Why not actual Fields medalists? Tim Ferriss lays out a guide for how to learn anything really quickly, which involves contacting whoever was great at that ten years ago and asking them who is great that should not be. Doing that for Fields medalists and other high achievers is plausibly extremely high value.
Superintelligence 29: Crunch time

The claim on p257 that we should try to do things that are robustly positive seems contrary to usual consequentialist views, unless this is just a heuristic for maximizing value.

Superintelligence 29: Crunch time

Does anyone know of a good short summary of the case for caring about AI risk?

Paul Crowley (6y, +2): It's very surprising to me that this doesn't exist yet. I hope everyone reading this and noticing the lack of answers starts writing their own answer—that way we should get at least one really good one.
[anonymous] (6y, 0): I have a write-up [http://effective-altruism.com/r/main/ea/fn/maximizing_longterm_impact/] regarding caring about long-term things in general, AI risk being one example. I'm not sure whether it's good or short.
Superintelligence 29: Crunch time

Did you disagree with anything in this chapter?
