DragonGod

Comments

I agree that I did not specify full, working implementations of "far coordination". There are details that I did not fill in to avoid prematurely reaching for rigour.

The kind of coordination I imagined is somewhat limited.

I guess this is an idea I may revisit and develop further sometime later. I do think there's something sensible/useful here, but maybe my exploration of it wasn't useful.

I had spoken with people who expected our descendants to diverge from us in ways we'd consider especially heinous and who were concerned about astronomical suffering, and I had been persuaded by Hanson's argument that a desire to maintain civilisational unity may prevent expansion.

So I was in that frame of mind/responding to those arguments when I wrote this.

I'm sympathetic to this reasoning. But I don't know if it'll prevail. I'd rather we lock in some meta values and expand to the stars than not expand at all.

Well, I was arguing that we should discount in proportion to our uncertainty. And it seems you're pretty confident that it would kill future people (and more people, and we don't expect people in 10K years to want to die any more than people in 100 years do), so I think I would prefer to plant the 100-year landmine.

 

That said, expectations of technological progress (that the future would be more resilient, better able to deal with the landmine, etc.) mean that in practice (outside the thought experiment), I'd probably plant the 10K landmine, as I expect fewer people to actually die.

I don't buy this argument for a few reasons:

  • SBF met Will MacAskill in 2013, and it was following that discussion that SBF decided to earn to give.
    • EA wasn't a powerful or influential movement back in 2013; it was quite a fringe cause.
  • SBF had been in EA since his college days, long before his career in quantitative finance and later in crypto.

 

SBF didn't latch onto EA after he acquired some measure of power or when EA was a force to be reckoned with; he did so pretty early on. He was, in a sense, "homegrown" within EA.

 

The "SBF was a sociopath using EA to launder his reputation" is just motivated credulity IMO. There is little evidence in favour of it. It's just something that sounds good to be true and absolves us of responsibility.

 

Astrid's hypothesis is not very credible when you consider that she doesn't seem to be aware of SBF's history within EA. Like, what's the angle here? There's nothing suggesting SBF planned to enter finance as a college student before MacAskill sold him on earning to give.

Thanks for the answer!

 

Out of curiosity, what career did you transition to?

 

I think you should focus on who you want as a supervisor, rather than whether it will be in mathematics or in computer science

...

If you find the right supervisor, it is not important if they are in CS or maths or something else.

I'll be keeping this in mind.

  1. That is the world I've decided to optimise for, regardless of what I actually believe timelines are.
  2. I don't really feel like rehashing the arguments for longer timelines here (they're not all that relevant to my question), but it's not the case that I have a < 10% probability on pre-2040 timelines; it's more that I think I can have a much larger impact on post-2040 timelines than on pre-2030 ones, so most of my attention is directed there.

That said, computational/biological anchors are a good reason for longer timelines absent foundational breakthroughs in our understanding of intelligence.

Furthermore, I suspect that intelligence is hard, that incremental progress will become harder as systems become more capable, that returns to cumulative investment in cognitive capabilities are sublinear (i.e. that marginal returns to cognitive investment decay at a superlinear rate), etc.

A Sketch of a Formalisation of Self Similarity

 

Introduction

I'd like to present a useful formalism for describing when a set[1] is "self-similar".

 

Isomorphism Under Equivalence Relations

Given arbitrary sets $A$ and $B$, an "equivalence-isomorphism" is a tuple $(f, g, \sim)$, such that:

$$\forall a \in A:\ a \sim f(a) \qquad \text{and} \qquad \forall b \in B:\ b \sim g(b)$$

Where:

  • $f$ is a bijection from $A$ to $B$
  • $g$ is the inverse of $f$
  • $\sim$ is an equivalence relation on the union of $A$ and $B$.

 

For a given equivalence relation $\sim$, if there exist functions $f$ and $g$ such that an equivalence-isomorphism can be constructed, then we say that the two sets are "isomorphic under $\sim$".

The concept of "isomorphism under an equivalence relation" is meant to give us a more powerful mechanism for describing similarity/resemblance between two sets than ordinary isomorphisms afford[2].
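
To make the definition concrete, here is a minimal Python sketch (my own illustration, not part of the post) that brute-forces the check for finite sets, assuming the reconstructed condition above that each element is related to its image under the bijection. The names `isomorphic_under` and `equiv` are introduced here purely for illustration.

```python
from itertools import permutations

def isomorphic_under(A, B, equiv):
    """Return True iff some bijection f: A -> B satisfies a ~ f(a) for every a in A."""
    A, B = list(A), list(B)
    if len(A) != len(B):              # no bijection can exist between sets of different size
        return False
    for image in permutations(B):     # each permutation of B defines a candidate bijection
        if all(equiv(a, fa) for a, fa in zip(A, image)):
            return True               # b ~ g(b) follows automatically, since ~ is symmetric
    return False

# Example: "same parity" as the equivalence relation ~.
def same_parity(x, y):
    return x % 2 == y % 2

print(isomorphic_under({1, 2, 3}, {5, 7, 8}, same_parity))  # True  (e.g. 1->5, 3->7, 2->8)
print(isomorphic_under({1, 2, 3}, {4, 6, 8}, same_parity))  # False (the odd elements have no even partner)
```

(Only the forward condition $a \sim f(a)$ is checked, since symmetry of $\sim$ and $g = f^{-1}$ give the reverse condition for free.)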

 

Similarity

Two sets are "similar" if they are isomorphic to each other under a suitable equivalence relation[3].

 

Self-Similarity

A set is "self-similar" if it's "similar" to a proper subset of itself.
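
As a worked illustration (my own example, again assuming the reconstructed condition that every element is related to its image): the integers are self-similar under the equivalence relation "has the same sign (negative, zero, or positive)", via the proper subset of even integers:

$$f : \mathbb{Z} \to 2\mathbb{Z},\quad f(x) = 2x, \qquad g(y) = \tfrac{y}{2}, \qquad x \sim y \iff \operatorname{sign}(x) = \operatorname{sign}(y)$$

Here $f$ is a bijection onto the even integers, $g$ is its inverse, and $x \sim f(x)$ for every $x$ because doubling (or halving) never changes the sign, so $\mathbb{Z}$ is similar to its proper subset $2\mathbb{Z}$, and hence self-similar, under $\sim$.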

 

Closing Remarks

This is deliberately quite bare, but I think it's nonetheless comprehensive enough (any notion of similarity we desire can be encapsulated in our choice of equivalence relation) and unambiguous given an explicit specification of the relevant equivalence relation. 
 

  1. ^

    I stuck to sets because I don't know any other mathematical abstractions well enough to play around with them in interesting ways.

  2. ^

    Ordinarily, two sets are isomorphic to each other if a bijection exists between them (they have the same cardinality). This may be too liberal/permissive for "similarity".

    By choosing a sufficiently restrictive equivalence relation (e.g., equality), we can be as strict as we wish.

  3. ^

    Whatever notion of similarity we desire is encapsulated in our choice of equivalence relation.

Response to the Second Meditation

Even for an AI that had no need to communicate with other agents, the idea of truth serves as a succinct term for the map-territory/belief-reality correspondence.

It allows the AI to be more economical/efficient in how it stores information about its maps.

That's some value.

Saying that a proposition is true is saying that it's an accurate description of the territory.

Tarski's Litany: "The sentence 'X' is true iff X."

The territory may be physical reality ("'the sky is blue' is true"), a formal system ("'2 + 2 = 4' is true"), other maps, etc.
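
A toy sketch of how the litany cashes out against a modelled territory (my own illustration; the `territory` dict and `is_true` function are hypothetical names, not anything from the post):

```python
# Toy illustration of Tarski's schema ("'X' is true iff X"): a sentence is
# judged true exactly when the corresponding fact holds in the modelled territory.

territory = {
    "the sky is blue": True,       # physical reality as the territory
    "2 + 2 = 4": (2 + 2 == 4),     # a formal system as the territory
}

def is_true(sentence: str) -> bool:
    """Return whether the sentence accurately describes the territory."""
    return territory.get(sentence, False)

print(is_true("the sky is blue"))  # True
print(is_true("2 + 2 = 4"))        # True
```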

Response to the First Meditation

Even if truth judgments can only be made by comparing maps — even if we can never assess the territory directly — there is still a question of how the territory is.

Furthermore, there is value in distinguishing our model/expectations of the world from our experiences within it.

This leads to two naive notions of truth:

  1. Accurate descriptions of the territory are true.
  2. Expectations that match experience are true.