All of Eli Tyre's Comments + Replies

In particular, existing AI training strategies don’t need to handle a “drastic” distribution shift from low levels of intelligence to high levels of intelligence. There’s nothing in the foreseeable ways of building AI that would call for a big transfer like this, rather than continuously training as intelligence gradually increases.

An obvious possible regime change is the shift to training (some) agents that do lifetime learning rather than only incorporating capability from SGD.

That's one simple thing that seems likely to generate a sharp left turn.

paulfchristiano (3d):
I wouldn't call that a simple thing---"lifetime learning" is a description of a goal, not an algorithm. How are these agents trained? It's hard to produce sophisticated long-horizon learning mechanisms by gradient descent using existing techniques (because you don't have many lifetimes over which to adjust such mechanisms by gradient descent). So most of the time the mechanism is something built by humans or that transfers from short tasks, and then we need to talk details. Perhaps more importantly, why does lifetime learning go quickly from "doesn't help meaningfully" to "and now the agents are radically superhuman"? I think at a basic level I don't understand the mechanism of the potential sharp left turn (though I may also not fully understand what that term refers to).

The notion of an AI-enabled “pivotal act” seems misguided. Aligned AI systems can reduce the period of risk of an unaligned AI by advancing alignment research, convincingly demonstrating the risk posed by unaligned AI, and consuming the “free energy” that an unaligned AI might have used to grow explosively. No particular act needs to be pivotal in order to greatly reduce the risk from unaligned AI, and the search for single pivotal acts leads to unrealistic stories of the future and unrealistic pictures of what AI labs should do.

We could maybe make the wor... (read more)

paulfchristiano (3d):
I agree that small differences in growth rates between firms or countries, compounded over many doublings of total output, will lead to large differences in final output. But I think there are quite a lot of other moving steps in this story before you get to the need for a pivotal act. It seems like you aren't pointing to the concentration of power per se (if so, I think your remedies would look like normal boring stuff like corporate governance!), I think you are making way more opinionated claims about the risk posed by misalignment. Most proximately, I don't think that "modestly reduce the cost of alignment" or "modestly slow the development or deployment of unaligned AI" need to look like pivotal acts. It seems like humans can do those things a bit, and plausibly with no AI assistance can do them at >1 year per year of delay. AI assistance could help humans do those things better, improving our chances of getting over 1 year per year of delay. Modest governance changes could reduce the risk each year of catastrophe. You don't necessarily have to delay that long in calendar time in order to get alignment solutions. etc.

Eliezer appears to expect AI systems performing extremely fast recursive self-improvement before those systems are able to make superhuman contributions to other domains (including alignment research), but I think this is mostly unjustified. If Eliezer doesn’t believe this, then his arguments about the alignment problem that humans need to solve appear to be wrong.

My understanding of Eliezer's view is that some domains are much harder to do aligned cognition in than others, and alignment is among the hardest. 

(I'm not sure I clearly understand wh... (read more)

paulfchristiano (3d):
I agree that Eliezer holds that view (and also disagree---I think this is the consensus view around LW but haven't seen anything I found persuasive as a defense). I don't think that's his whole view, since he frequently talks about AI doing explosive improvement before other big scientific changes, and generally seems to be living in a world where this is an obvious and unstated assumption behind many of the other things he says.

One important factor seems to be that Eliezer often imagines scenarios in which AI systems avoid making major technical contributions, or revealing the extent of their capabilities, because they are lying in wait to cause trouble later. But if we are constantly training AI systems to do things that look impressive, then SGD will be aggressively selecting against any AI systems who don’t do impressive-looking stuff. So by the time we have AI systems who can develop molecular nanotech, we will definitely have had systems that did something slightly-less-impr

... (read more)
paulfchristiano (3d):
I would say: this objection holds for all AI designs that are being seriously considered to date. I agree it doesn't apply in full generality to "not-modern-ML." That said, you can use gradient descent to build agents that accumulate and manipulate knowledge (e.g. by reading and writing to databases, either in natural language or in opaque neuralese) and my argument applies just as well to those systems. I think you are imagining something more precise. I do agree that once you say "the techniques will just be totally different from ML" then I get more into "all bets are off," and maybe then I could end up with a 50-50 chance of AI systems concealing their capabilities (though that still seems high). That said, I think you shouldn't be confident of "totally different from ML" at this point:

1. I think you have at least a reasonable probability on "modern ML leads to transformative effects and changes the game," especially if transformations happen soon. If this paradigm is supposed to top out, I would like more precision about where and why.
2. Our reasoning about alternatives to ML seems really weak and uninformative, and it's plausible that reasoning about ML is a better guide to what happens in the future even if there is a big transformation.
3. At this point it's been more than 10 years with essentially no changes to the basic paradigm that would be relevant to alignment. Surely that's enough to get to a reasonable probability of 10 more years?
4. This outcome already looked quite plausible to me in 2016, such that it was already worth focusing on, and it seems like evidence from the last 7 years makes it look much more likely.

That's the hard part.

My guess is that training cutting-edge models and not releasing them is a pretty good play, or would have been if there weren't huge AGI hype.

As it is, information about your models is going to leak, and in most cases the fact that something is possible is most of the secret to reverse engineering it (note: this might be true in the regime of transformer models, but it might not be true for other tasks or sub-problems). 

But on the other hand, given the hype, people are going to try to do the things that you're doing anyway,... (read more)

In terms of speeding up AI development, not building anything > building something and keeping it completely secret > building something that your competitors learn about > building something and generating public hype about it via demos > building something with hype and publicly releasing it to users & customers.

I think it is very helpful, and healthy for the discourse, to make this distinction. I agree that many of these things might get lumped together.

But also, I want to flag the possibility that something can be very very bad to do, e... (read more)

Jeffrey Ladish (16d):
Yeah, I agree with all of this, seems worth saying. Now to figure out the object level... 🤔

I like the creative thinking here.

I suggest a standard here, where we can test our "emulation" against the researcher themselves, to see how much of a diff there is in their answers, and the researcher can rate how good a substitute the model is for themselves, on a number of different dimensions.
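A minimal sketch of how that standard might look in practice, assuming nothing beyond plain Python; the function name, the question set, and the rating dimensions below are all hypothetical placeholders, not an existing benchmark:

```python
# Hypothetical sketch: ask the "emulation" and the researcher the same questions,
# then have the researcher rate how good a substitute each model answer is for
# their own, along several dimensions.
from typing import Callable, Dict, List

def evaluate_emulation(
    questions: List[str],
    dimensions: List[str],                         # e.g. ["factual agreement", "reasoning style"]
    ask_model: Callable[[str], str],               # question -> model's answer
    ask_researcher: Callable[[str], str],          # question -> researcher's own answer
    rate: Callable[[str, str, str, str], float],   # (question, model answer, own answer, dimension) -> score
) -> Dict[str, float]:
    """Return the researcher's average substitute-quality rating per dimension."""
    scores: Dict[str, List[float]] = {dim: [] for dim in dimensions}
    for q in questions:
        model_answer = ask_model(q)
        own_answer = ask_researcher(q)
        for dim in dimensions:
            scores[dim].append(rate(q, model_answer, own_answer, dim))
    return {dim: sum(vals) / len(vals) for dim, vals in scores.items()}
```

The interesting design choice is the `rate` step: whether the researcher scores blind (not knowing which answer is the model's) or openly, and which dimensions they rate on.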
 

This continues to be one of the best and most important posts I have ever read.

I have multiple references that corroborate that.

Can you share? I would like to have a clearer sense of what happened to them. If there's info that I don't know, I'd like to see it.

things i'm going off:

* the pdf archive of Maia's blog posted by Ziz to sinseriously (I have it downloaded to backup as well)
* the archive.org backup of Fluttershy's blog
* Ziz's account of the event (and how sparse and weirdly guilt ridden it is for her)
* several oblique references to the situation that Ziz makes
* various reports about the situation posted to LW which can be found by searching Pasek

From this i've developed my own model of what ziz et al have been calling "single-good interhemispheric game theory" which is just extremely advanced and high level beatin... (read more)

I do appreciate the conciseness a lot. 

It seems like I maybe would have gotten the same value from the essay (which would have taken 5 minutes to read?) as from this image (which maybe took 5 seconds).

But I don't want to create a culture that rewards snark, even more than it already does. It seems like that is the death of discourse, in a bunch of communities.

So I'm interested in whether there are ways to get the benefits here without the costs.

Viliam (24d):
What about essay first, the image at the bottom?
lc (1mo):
Agreed.

Downvoted, because even though I think this is a reasonable point worth considering, I'm not excited about a LessWrong dominated by snarky memes that make points, instead of essays.

lc (1mo):
I started an essay version, but decided the meme version was concise without much loss of detail. I see your point though. I'll go ahead and remove my upvote of this and post the essay instead.

Yeah, but that's a crux. Tigers might be awesome, but they're not optimal.

I think this was excellently worded, and I'm glad you said it. I'm also glad to have read all the responses, many of which seem important and on point to me. I strong upvoted this comment as well as several of the responses.

I'm leaving this comment, because I want to give you some social reinforcement for saying what you said, and saying it as clearly and tactfully as you did. 

There wasn't actually any such thing as Security, and if there ever was it would mean that it was time to overthrow the government immediately.

I held back tears at this part.

FYI, he wrote a book in which he describes his process. You want chapter 16: "How to Become an Idea Machine."

I think it would be good if someone could verify if this story is true. Is there someone with a known identity that can verify the author and confirm that this isn't a troll post?

 

I can verify that the owner of the blaked[1] account is someone I have known for a significant amount of time, that he is a person with a serious, long-standing concern with AI safety (and all other details verifiable by me fit), and that based on the surrounding context I strongly expect him to have presented the story as he experienced it.

This isn't a troll.

[1] (also I get to claim memetic credit for coining the term "blaked" for being affected by this class of AI persuasion)

blaked (2mo):
I could make that happen for sure, but I don't see many incentives to - people can just easily verify the quality of the LLM's responses by themselves, and many did. What questions do you want answered, and what parts of the story do you hope to confirm by this?

I retracted this comment, because reading all of my comments here, a few years later, I feel much more compelled by my original take than by this addition.

I think the addition points out real dynamics, but that those dynamics don't take precedence over the dynamics that I expressed in the first place. Those seem higher priority to me.

If someone makes correlated errors, they are better explained as part of a strategy.

That does seem right to me.

It seems like very often correlated errors are the result of a mistaken, upstream crux. They're making one mistake, which is flowing into a bunch of specific instances.

This at least has to be another hypothesis, along with "this is a conscious or unconscious strategy to get what they want."

Increasing your output bandwidth in a case like this one would just give the AI more ability to model you and cater to you specifically.

TekhneMakre (2mo):
That would be one potential effect. Another potential effect would be that you can learn to manipulate (not in the psychological sense, in the sense of "use one's hands to control") the AI better, by seeing and touching more of the AI with faster feedback loops. Not saying it's likely to work, but I think "hopeless" goes too far.

This story increases my probability that AI will lead to dead rock instead of a superintelligent sphere of computronium, expanding outwards at near the speed of light.

Manipulating humans into taking wild actions will be a much much much easier task than inventing nanotech or building von Neumann probes. I can easily imagine the world ending as too many people go crazy in unprecedented ways, as a result of the actions of superhumanly emotionally intelligent AI systems, but not as part of any coordinated plan.

Hgbanana123 (2mo):
Scott Alexander has an interesting little short on human manipulation: https://slatestarcodex.com/2018/10/30/sort-by-controversial/  So far everything I'm seeing, both fiction and anecdotes, is consistent with the notion that humans are relatively easy to model and emotionally exploit. I also agree with CBiddulph's analysis, insofar as while the paperclip/stamp failure mode requires the AI to have planning, generation of manipulative text doesn't need to have a goal--if you generate text that is maximally controversial (or maximises some related metric) and disseminate the text, that by itself may already do damage.

Strong upvote + agree. I've been thinking this myself recently. While something like the classic paperclip story seems likely enough to me, I think there's even more justification for the (less dramatic) idea that AI will drive the world crazy by flailing around in ways that humans find highly appealing.

LLMs aren't good enough to do any major damage right now, but I don't think it would take that much more intelligence to get a lot of people addicted or convinced of weird things, even for AI that doesn't have a "goal" as such. This might not directly cause... (read more)

I'm struck by how much this story drives home the hopelessness of brain-computer interface "solutions" to alignment. The AI learned to manipulate you through a text channel. In what way would giving the AI direct access to your brain help?

While I'm not particularly optimistic about BCI solutions either, I don't think this story is strong evidence against them. Suppose that the BCI took the form of an exocortex that expanded the person's brain functions and also significantly increased their introspective awareness to the level of an inhumanly good meditator. This would effectively allow for constant monitoring of what subagents within the person's mind were getting activated in conversation, flagging those to the person's awareness in real time and letting the person notice when they were g... (read more)

TekhneMakre (3mo):
By increasing your output bandwidth, obviously.

It's like the whole world is about to be on new, personally-tailored, drugs. 

And not being on drugs won't be an option. Because the drugs come with superpowers, and if you don't join in, you'll be left behind, irrelevant, in the dust.

This was and is already true to a lesser degree with manipulative digital socialization. The less of your agency you surrender to network X, the more your friends who have given their habits over to network X will work with each other at higher speed and capacity, and the less they'll bother with you. But X is often controlled by a powerful and misaligned entity.

And of course these two things may have quite a lot of synergy with each other.

Vitor (2mo):
Please do tell what those superpowers are!

The choice of music for this video is superb.

And in general, great work, guys! 
 

Somebody who already knows the precise way in which the constellation Ursa Major outlines a bear might be like "of course!" But someone who's simply told "these points are supposed to form a bear" is unlikely to end up conceiving of this:

[image: the stars of Ursa Major with the traditional bear outline drawn over them]

Um. Do bears have tails?

From googling, it looks like some of them do, but they don't have tails like that.

Have bears changed since ancient times? Or are these just the charismatic bears, which all happen to have short tails? [Another Google image search suggests it's not that one.] Is "bear" a mistranslation of ... (read more)

MichaelDickens (3mo):
Without the outline, the stars look really skinny. To my eye, it looks much more like an anteater.

On one hand, Wikipedia suggests Jewish astronomers saw the three tail stars as cubs. But at the same time, it suggests several ancient civilizations independently saw Ursa Major as a bear. Also confused.

A post making a related point: https://knowingless.com/2017/10/18/me-too-on-sexual-assault/

Absolutely excellent. The most gripping short story I've read in years.

Since this got nominated, now's a good time to jump in and note that I wish that I had chosen different terminology for this post.

I was intending for "final crunch time" to be a riff on Eliezer saying, here, that we are currently in crunch time.

This is crunch time for the whole human species, and not just for us but for the intergalactic civilization whose existence depends on us. This is the hour before the final exam and we're trying to get as much studying done as possible.

I said explicitly, in this post, "I'm going to refer to this last stretch of a fe... (read more)

the gears to ascension (4mo):
I think that if we were in crunch time in 2010, your phrasing is fine, because we're in final crunch time now. If you have an alarm, please ring it. Though, also make sure to mention that coprotective safety is looking tractable and likely to succeed if we try! Despite the drawbacks, the Diplomacy AI gave me a lot of hope that we can solve the hard cooperation problem.
Algon (4mo):
I think this [https://www.overcomingbias.com/2022/05/foom-update.html] is his latest comment, but it is on FOOM. Hanson's opinion is that, on the margin, the current amount of people working on AI safety seems adequate. Why? Because there's not much useful work [https://www.overcomingbias.com/2022/06/why-not-wait-on-ai-risk.html] you can do without access to advanced AI, and he thinks the latter is a long time in coming. Again, why? Hanson thinks that FOOM is the main reason to worry about AI risk. He prefers an outside view to predict technologies which we have little empirical information on, and so believes FOOM is unlikely because he thinks progress historically doesn't come in huge chunks [https://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html] but gradually. You might question the speed of progress, if not its lumpiness, as deep learning seems to pump out advance after advance. Hanson argues that people are estimating progress poorly and talk of deep learning is over-blown [https://www.overcomingbias.com/2016/12/this-ai-boom-will-also-bust.html].

What would it take to get Hanson to sit up and pay more attention to AI? AI self-monologue [https://www.overcomingbias.com/2022/04/ailanguageprogess.html] used to guide and improve its ability to perform useful tasks.

One thing I didn't manage to fit in here is that I feel like another crux for Hanson would be how the brain works. If the brain tackles most useful tasks using a simple learning algorithm, as Steve Byrnes argues, instead of a grab bag of specialized modules with distinct algorithms for each of them, then I think that would be a big update. But that is mostly my impression, and I can't find the sources I used to generate it.

Maybe something like what Val is pointing at is true of me, but I'm not sure.

I don't resonate with what I take Val to be pointing at here.

I personally resonate with what I take Val to be pointing at here.

Poll: Does your personal experience resonate with what you take Val to be pointing at in this post?

Options are sub-comments of this parent. 

Please vote by agreeing, not upvoting, with the answer that feels right to you. Please don't click the disagree button for options you disagree with, so that we can easily tabulate numbers by checking how many people have voted.

(Open to suggestions for better ways to set up polls, for the future.)

 

Eli Tyre (4mo):
Maybe something like what Val is pointing at is true of me, but I'm not sure.
Eli Tyre (4mo):
I don't resonate with what I take Val to be pointing at here.
Eli Tyre (4mo):
I personally resonate with what I take Val to be pointing at here.

Shouldn't the question be "which is better, a tiger or a designer tiger?"?

Lone Pine (4mo):
Do we care that the tiger is a violent dangerous predator? Is that part of what it means to be a tiger? If we remove the predator from the tiger, is he still a tiger?

Isn't yet another surprising result of existing capabilities evidence that general intelligence is itself a surprising result of existing capabilities?

Jeff Rose (4mo):
That is too strong a statement.  I think that it is evidence that general intelligence may be easier to achieve than commonly thought.  But, past evidence has already shown that over the last couple of years and I am not sure that this is significant additional evidence in that regard.  

and I am confident that I can back out (and actually correct my intuitions) if the need arises.

Did you ever do this? Or are you still running on some top-down overwritten intuitive models?

If you did back out, what was that like? Did you do anything in particular, or did this effect fade over time? 

Ideally, we would be just as motivated to carry out instrumental goals as we are to carry out terminal goals. In reality, this is not the case. As a human, your motivation system does discriminate between the goals that you feel obligated to achieve and the goals that you pursue as ends unto themselves.

I don't think that this is quite right actually.

If the psychological link between them is strong in the right way, the instrumental goal will feel as appealing as the terminal goal (because succeeding at the instrumental goal feels like making progress on th... (read more)

None of this is explicit, mind you, it's just the nature of goals. I can change the goal and I can drop the goal, but I can't hold the goal and not pursue it.

Does this mean that you only have one goal at a time? It seems like while you're pursuing one goal, you would not be pursuing any of the others.

I know full well that my resolution against spending willpower against myself means that once I get addicted to something, it has to run its full course before I can be productive again. This is a nuclear option: because I know that I won't stop, I am very leery of lengthy media.

Does this mean that if you're trapped in an addictive spiral and it feels terrible (e.g. "bloated, but still eating", or continuing to mindlessly watch YouTube even when it's not fun and is actually painful in a muted way as you distract yourself from something) that you don't do anyt... (read more)

All the "ASI-boosted humans" ones feel a bit tricky for me to answer, because it seems possible that we get strong aligned AI, in a distributed takeoff, but that we deploy it unwisely. Namely, that the world immediately collapses into Moloch, whereby everyone follows their myopic incentives off a cliff.

That cuts my odds of good outcomes by a factor of two or so.

Much of the value of alien civilizations might well come from the interaction of their civilization and ours, and from the fairness (which may well turn out to be a major terminal human value) of them getting their just fraction of the universe.

Won't the size of the universe-shard that a civilization controls be determined entirely by how early or late they started grabbing galaxies? Which is itself almost entirely determined by how early or late they evolved? 

That doesn't sound like a fair distribution to me.

I guess we could redistribute some of... (read more)

[This comment is no longer endorsed by its author]
Eli Tyre (5mo):
Whoops. Answered later in the post.

As someone who said his share of prayers back in his Orthodox Jewish childhood upbringing, I can personally testify that praising God is an enormously boring activity, even if you're still young enough to truly believe in God.  The part about praising God is there as an applause light that no one is allowed to contradict: it's something theists believe they should enjoy, even though, if you ran them through an fMRI machine, you probably wouldn't find their pleasure centers lighting up much.

I think this is typical minding. It really can be joyful to ex... (read more)

Moreover, I suspect that it would be good (in expectation) for humans to encounter aliens someday, even though this means that we’ll control a smaller universe-shard.

I suspect this would be a genuinely better outcome than us being alone, and would make the future more awesome by human standards.

I don't get this. If encountering aliens is so great, we could make it happen, even in an empty universe, by simulating evolution (and the development of civilization up to super-intelligence) and then being friends and partners with those alien civilizations, ... (read more)

Lone Pine (4mo):
Which is better, a tiger or a designer housecat?
Rob Bensinger (5mo):
Sounds right to me! I dunno Nate's view on this.

That said, I, at least, am not making this error, I think:

Another concern I have is that most people seem to neglect the difference between “exhibiting an external behavior in the same way that humans do, and for the same reasons we do”, and “having additional follow-on internal responses to that behavior”.

An example: If we suppose that it’s very morally important for people to internally subvocalize “I sneezed” after sneezing, and you do this whenever you sneeze, and all your (human) friends report that they do it too, it would nonetheless be a

... (read more)

Yeah. I'd already read the Yudkowsky piece. I hadn't read the Muehlhauser one though!

My guess would be that the most common variety of alien is “unconscious brethren”, followed by “unconscious squiggle maximizer”, then “conscious brethren”, then “conscious squiggle maximizer”.

It might sound odd to call an unconscious entity “brother”, but it's plausible to me that on reflection, humanity strongly prefers universes with evolved-creatures doing evolved-creature-stuff (relative to an empty universe), even if none of those creatures are conscious.

Somehow, thinking of ourselves from the perspective of an unconscious alien really drives home how... (read more)

Rob Bensinger (5mo):
Oooh, sounds right to me!

Moreover, it’s observably the case that consciousness-ascription is hyperactive. We readily see faces and minds in natural phenomena. We readily imagine simple stick-figures in comic strips experiencing rich mental lives.

A concern I have with the whole consciousness discussion in EA-adjacent circles is that people seem to consider their empathic response to be important evidence about the distribution of qualia in Nature, despite the obvious hyperactivity.

This post is the single most persuasive piece of writing that I have encountered with regard to talk... (read more)

Eli Tyre (5mo):
That said, I, at least, am not making this error, I think:

Seeing a pig "scream in pain" when you cut off its tail does not make it a foregone conclusion that the pig is experiencing anything at all, or something like what pain means to me. But it does seem like a pretty good guess. And I definitely don't look at a turtle doing any kind of planning at all and think "there must be an inner life in there!"

I'm real uncertain about what consciousness is and where it comes from, and there is an anthropic argument (which I don't know how to think clearly about) that it is rare among animals. But from my state of knowledge, it seems like a better than even chance that many mammals have some kind of inner listener. And if they have an inner listener at all, pain seems like one of the simplest and most convergent experiences to have.

Which makes industrial factory farming an unconscionable atrocity, much worse than American slavery. It is not okay to treat conscious beings like that, no matter how dumb they are, or how little they narrativize about themselves.

My understanding is that (assuming animal consciousness), there are 100 billion experience-years [https://80000hours.org/problem-profiles/factory-farming/] in factory farms every year.

It seems to me that, in my state of uncertainty, it is extremely irresponsible to say "eh, whatever" to the possible moral atrocity. We should shut up and multiply. My uncertainty about animal consciousness only reduces the expected number of experience-years of torture by a factor of 2 or so.

An expected 50 billion moral patients getting tortured as a matter of course is the worst moral catastrophe perpetrated by humans ever (with the exception of our rush to destroy all the value of the future). Even if someone has more philosophical clarity than I do, they have to be confident at a level of around 100,000 to 1 that livestock animals are not experiencing beings, before the expected value of this moral catastrophe starts b
Algon (5mo):
EDIT: Added in the correct links. Assuming Yudkowsky's position is quite similar to Nate's, which it sounds like given what both have written, I'd recommend reading this debate [https://rationalconspiracy.com/2015/12/16/a-debate-on-animal-consciousness/] Yud was in to get a better understanding of this model[1]. Follow up on the posts Yud, Luke and Rob mention if you'd care to know more. Personally, I'm closer to Luke's position on the topic. He gives a clear and thorough exposition here [https://www.openphilanthropy.org/research/2017-report-on-consciousness-and-moral-patienthood/].

Also, I anticipate that if Nate does have a fully fleshed-out model, he'd be reluctant to share it. I think Yud said he didn't wish to give too many specifics as he was worried trolls might implement a maximally suffering entity. And, you know, 4chan exists. Plenty of people there would be inclined to do such a thing to signal disbelief or simply to upset others.

[1] I think this kind of model would fall under the illusionism school of thought. "Consciousness is an illusion" is the motto. I parse it as "the concept you have of consciousness is an illusion, a persistent part of your map that doesn't match the territory. Just as you may be convinced these [https://www.google.com/search?q=hat+height+width+illusion&client=firefox-b-d&source=lnms&tbm=isch&sa=X&ved=2ahUKEwi3jLSd_437AhWtM-wKHWefAswQ_AUoAXoECAEQAw&biw=1920&bih=927&dpr=1#imgrc=VN8ofr8UmR5bqM] tables are of different shapes, even after rotating and matching them onto one another, so too may you be convinced that you have this property known as consciousness." That doesn't mean the territory has nothing like consciousness in it, just that it doesn't have the exact form you believed it to. You can understand on a deliberative level how the shapes are the same and the process that generates the illusion whilst still experiencing the illusion. EDIT: The same for your intuitio

And we should expect the time machine and the infrastructure it builds to be well-defended, since "you can't make the coffee if you're dead"

Does that follow? The time machine doesn't do any planning. So I would expect that in one timeline, something happens that accidentally drops an anvil on the time machine, breaking the reset mechanism, and there are no more time loops after that.

Indeed, in practice, I expect this time machine to optimize to destroy itself, not to fill the universe with paperclips.

The "anvil dropped on the time machine" scenario seems lik... (read more)

So8res (5mo):
i agree that the reset mechanism has to be ~invulnerable for the pump to work. the thing i was imagining the machine defending is stuff like its output channel (for so long as its outputs are an important part of steering the future).
Rob Bensinger (5mo):
Sounds right to me!

My guess is that the aliens-control-the-universe-shard scenario is net-positive, but that it loses orders of magnitude of cosmopolitan utility compared to the “cognitively constrained humans” scenario.

Why? How? 

It seems like something weird is happening if we claim that we expect human values to be more cosmopolitan than alien values. Is that what you're claiming?

[This comment is no longer endorsed by its author]
Rob Bensinger (5mo):
That's what he's claiming, because he's claiming "cosmopolitan value" is itself a human value. (Just a very diversity-embracing one.)

If I understand it correctly, what happened is that some people got paid to work on this full-time.

This is about what I was going to say in response, before reading your comment. 

I think the key factor that makes it different from other examples is that it was a competent person's full-time job.

There are some other things that need to go right in addition to that, but I suspect that there are lots of things that people are correctly outside-view gloomy about which can just be done, if someone makes it their first priority.

Viliam (5mo):
Things that need to go right:
* it must be a competent person (as opposed to merely overconfident)
* who really cares about the project (more than about other possible projects)
* can convince other people of their competence (the ones who have money)
* gets sufficient funding (doesn't waste their time and energy working a second job)
* has autonomy (no manager who would override or second-guess their decisions)
* no unexpected disasters (e.g. getting hit by a bus, patent trolls suing the project,...)

Other than the unexpected disasters, it seems like something that a competent civilization should easily do. Once you have competent people, allow them to demonstrate their competence, look for an intersection between what they want to do and what you need (or if you are sufficiently rich, just an intersection between what they want to do and what you believe is a good thing), give them money, and let them work. In real life, having the right skills and sending the right signals is not the same thing; people who do things are not the same as people who decide things; time is wasted on meetings and paperwork.