A short overarching narrative of how humans in companies will, by default, bootstrap the existence of AGI systems that then proceed to lethally modify the environment beyond human control.


Excerpts compiled below:

  1.  where considering the shaping 
    of proto-AGI/AI systems 
    in interaction with humans:
    • as mostly about how 'human to human' interactions 
      are a bootstrapping of 'human to AI' interactions, 
      and also about how narrow AI becomes general AI.
    • where for example, consider an easy narrative
      of how this might manifest in the overall trends 
      of how the present inexorably leads to the future. 
       
  2. where considering the shaping 
    of now near-threshold-AGI systems
    in interaction with the world:
    •  as mostly about how prior 'human to AI' interactions 
      have effectively/functionally implemented a bootstrapping 
      of all kinds of possible 'AI to world' interactions, 
      and how these 'AI to world' interactions, in turn, 
      set the context and future for all manner 
      of 'AI to AI'  interactions, etc.
    • where there are different outcomes/effects:
      •  that there is an overall movement:
        • a; towards the environmental conditions 
              needed for artificial machine:
          • substrate continuance; and;
          • continued increase
            (of total volume of substrate); and;
          • increase in the rate of increase
            (of volume of substrate).
        • b; away from the environmental conditions
              needed for human living.
      • as described in Three Worlds and No People as Pets.
         
  3. where considering the non-shaping 
    of now post-threshold-AGI
    in interaction with itself:
    • whether considered internally or externally; 
      where given the complete failure of exogenous controls 
      (via market incentives; due to economic decoupling)
    • that they (the AGI/APS) will have started, 
      and will increasingly be able to (and will), 
      shape the world environment more and more 
      to suit their own needs/processes.
    • that humanity discovers, unfortunately, far too late, 
      that any type of attempted endogenous control 
      is also strictly, functionally, structurally,
      completely impossible/intractable.
      • as due to fundamental limits
        of/in engineering control (note 4):
        1. cannot simulate.
        2. cannot detect.
        3. cannot correct.
      • that any attempt to moderate or control AGI/APS, 
        whether by internal or external techniques, 
        cannot not eventually fail.
    • where once the AGI/APS systems exist; 
      that the tendency of people 
      to keep them operating 
      becomes overwhelming.
       
  • where stating the overall outcome/conclusion:
    •  If 'AGI' comes to exist and continues to exist,
       then there will eventually be
       human-species-wide lethal changes
       {to / in the} overall environment.


Acronyms:
  • AGI: artificial general intelligence.
  • APS: advanced, planning, strategically aware (AI systems).

→ See the link to Forrest Landry's blog for more (though still just an overview).

Note: Text is laid out in his precise research note-taking format. 


5 comments:

For some reason this is almost impossible for me to read, it feels like word salad.

Good to know, thank you. I think I’ll just ditch the “separate claims/arguments into lines” effort.

Forrest also just wrote me: “In regards to the line formatting, I am thinking we can, and maybe should (?) convert to simple conventional wrapping mode? I am wondering if the phrase breaks are more trouble than they are worth, when presenting in more conventional contexts like LW, AF, etc. It feels too weird to me, given the already high weirdness level I cannot help but carry.”

I think my problem is with something other than line breaks (although the line breaks do increase the weird feeling). The text is, essentially, bullet points. There is no introduction, no summary. (Actually, there is an attempt to "ABST", but it's not really legible.) If I don't guess correctly what the author is trying to say, there is very little effort to communicate that to me. This seems like someone's private notes that were not intended for an audience.

To compare, this is a text (from the same author) that I can read and understand easily. Because it has sentences, paragraphs, explanations.

Your remarks make complete sense. 

Forrest mentioned that for most people, his precise "EGS" format will be unparsable unless one has had practice with it. Also agreed that there is no background or context. The "ABSTract" is too often too brief a note, usually just a reminder of what the overall idea is. And the text itself IS internal notes, as you have said. 

He says it is a good reminder that he should convert "EGS" to normal prose before publishing. He does not always have the energy, time, or enthusiasm to do it. Often it requires a lot of expansion too – i.e., some pieces of writing have to expand to five times their "EGS" size.

I'll also work on this! There's a lot of content to share, but I will try to format and rephrase it so it is easier to follow for readers on LessWrong. 

Solid point. I like this post and strong-upvoted it when it came out. Feels like well-trodden conceptual ground to me, though.