Mateusz Bagiński

Agent foundations, AI macrostrategy, civilizational sanity, human enhancement.

I endorse and operate by Crocker's rules.

I have not signed any agreements whose existence I cannot mention.

Mateusz Bagiński's Shortform · 2 karma · 3y · 33 comments

Comments (sorted by newest)
The Doomers Were Right
Mateusz Bagiński · 8d

I think the downvotes come from the general norm against posting comments whose only/main content is a meme.

Which side of the AI safety community are you in?
Mateusz Bagiński · 9d

I agree that some people have this preference ordering, but I don't know of any difference in the specific actionable recommendations that the "don't until safely" and "don't ever" camps would give.

Which side of the AI safety community are you in?
Mateusz Bagiński · 9d

> Camp B) “Don’t race to superintelligence”: People in this group typically argue that “racing to superintelligence is bad because of Y”. Here Y is typically some combination of “uncontrollable”, “1984”, “disempowerment” and “extinction”.

The main split is about whether racing in the current regime is desirable, so both "never build ASI" and "don't build ASI until it can be done safely" fall within the scope of camp B. Call these two subcamps B1 and B2. I think B1 and B2 give the same prescriptions within the actionable timeframe.

Why's equality in logic less flexible than in category theory?
Mateusz Bagiński · 20d

> Why do you want this notion of equivalence or adjunction, rather than the stricter notion of isomorphism of categories?

As far as I can tell, the context of discovery in category theory is mostly category theorists noticing that a particular kind of abstract structure occurs in many different contexts and thus deserves a name. The context of justification is mostly category theorists using a particular definition in various downstream constructions and showing how things fit together nicely and globally, with everything reflected in everything else (the "primordial ooze"), that sort of thing.

To give an example, if you have a category C with all products and coproducts, you can view them as functors from the product category C×C to C itself. We can define a "diagonal functor" Δ:C→C×C that just "copies" each object and morphism, ΔA=⟨A,A⟩. It turns out that the coproduct is its left adjoint and the product is its right adjoint: ⊔⊣Δ⊣×.
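
Spelled out via hom-set isomorphisms, as a sketch (Z here is just an arbitrary object of C, notation added for this illustration; the first isomorphism in each line is the adjunction, the second merely unfolds hom-sets in the product category):

```latex
% Coproduct as left adjoint of the diagonal:
\mathrm{Hom}_{\mathcal{C}}(A \sqcup B,\, Z)
  \;\cong\; \mathrm{Hom}_{\mathcal{C}\times\mathcal{C}}(\langle A, B\rangle,\, \Delta Z)
  \;\cong\; \mathrm{Hom}_{\mathcal{C}}(A, Z) \times \mathrm{Hom}_{\mathcal{C}}(B, Z)

% Product as right adjoint of the diagonal:
\mathrm{Hom}_{\mathcal{C}}(Z,\, A \times B)
  \;\cong\; \mathrm{Hom}_{\mathcal{C}\times\mathcal{C}}(\Delta Z,\, \langle A, B\rangle)
  \;\cong\; \mathrm{Hom}_{\mathcal{C}}(Z, A) \times \mathrm{Hom}_{\mathcal{C}}(Z, B)
```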

Now, if you fix any particular object X and think of the product as an endofunctor on C, (−×X):C→C, and of exponentiation as another endofunctor, (−)^X:C→C, then these two again form an adjunction: (−×X)⊣(−)^X. Using the definition of an adjunction in terms of hom-set isomorphisms, this is just the currying thing: Hom_C(A×X,B)≅Hom_C(A,B^X). In fact, this adjunction can be used as the basis to define the exponential object. For example, here's an excerpt from Sheaves in Geometry and Logic.
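
To make the bijection concrete, here's a minimal Haskell sketch (an illustration added here, not from the comment: the pair type (a, x) stands in for the product A×X, the function type x -> b for the exponential B^X, and curryHom/uncurryHom are just illustrative names):

```haskell
-- Witnesses of the hom-set isomorphism Hom(A × X, B) ≅ Hom(A, B^X);
-- these coincide with Prelude's curry and uncurry.

curryHom :: ((a, x) -> b) -> (a -> (x -> b))
curryHom f a x = f (a, x)

uncurryHom :: (a -> (x -> b)) -> ((a, x) -> b)
uncurryHom g (a, x) = g a x
```

Naturality of this bijection in A and B is what upgrades it from a mere family of bijections to an adjunction.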

Finite Factored Sets
Mateusz Bagiński · 20d

Actually, there's a more interesting way to go from partition-as-epi to partition-as-coproduct-of-inclusions. You just pull back the identity morphism 1_I along the epi e:S↠I. The pullback of the identity morphism is the partition-as-coproduct-of-inclusions.
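
One way to unpack this, as a hedged sketch in Set (writing ι_i : {i} ↪ I for the inclusion of a point, notation added for this illustration):

```latex
% The identity on I decomposes as a coproduct of point inclusions:
I \;\cong\; \coprod_{i \in I} \{i\}, \qquad 1_I \;=\; \coprod_{i \in I} \iota_i .
% Pulling \iota_i back along e : S \twoheadrightarrow I gives the block over i:
e^{-1}(i) \;=\; \{\, s \in S \mid e(s) = i \,\} \;\hookrightarrow\; S .
% Since pullbacks commute with coproducts in Set, pulling back 1_I along e yields
S \;\cong\; \coprod_{i \in I} e^{-1}(i),
% i.e. the partition presented as a coproduct of inclusions.
```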

The Most Common Bad Argument In These Parts
Mateusz Bagiński · 21d

Good post!

Why did you call it "exhaustive free association"? I would lean towards something more like "arguing from (falsely complete) exhaustion".

Re it being almost good reasoning: a main thing separating the good version from the bad one is having a good model of the domain, so that you actually have good reasons to think your hypothesis space is exhaustive.

Jan_Kulveit's Shortform
Mateusz Bagiński · 23d

> As far as I understand, at least one of the authors has an unusual moral philosophy such as not believing in consciousness or first-person experiences, while simultaneously believing that future AIs are automatically morally worthy simply by having goals.

[narrow point, as I agree with most of the comment]

For what it's worth, this seems to imply that illusionism (roughly, the view of people who, in a meaningful sense, "don't believe in consciousness") makes people more inclined to act in ethically deranged ways, but afaict this mostly isn't the case: I've known a few illusionists (and was one myself until ~1 year ago), and they were all decent people, no less decent than the average of my social surroundings.

To give an example, Dan Dennett was an illusionist and very much not a successionist. Similarly, I wouldn't expect any successionist aspirations from Keith Frankish.

There are caveats, though, in that I do think that a sufficient combination of ideas which are individually fine, even plausibly true (illusionism, moral antirealism, ...), and some other stuff (character traits, paycheck, social milieu) can get people into pretty weird moral positions.

TsviBT's Shortform
Mateusz Bagiński · 24d

> So there's steelmanning, where you construct a view that isn't your interlocutor's but is, according to you, more true / coherent / believable than your interlocutor's.

[nitpick] while also being close to your interlocutor's (perhaps so that your interlocutor's view could be the steelmanned view with added noise / passed through Chinese whispers / degenerated).

A proposed term

Exoclarification? Alloclarification? Democlarification (dēmos - "people")?

Ontological Cluelessness
Mateusz Bagiński · 25d

Perhaps another example, though not quite analytic philosophy but rather a neo-religion: Discordianism.

Specifically, see here: https://en.wikipedia.org/wiki/Principia_Discordia#Overview 

Shortest damn doomsplainer in world history
Mateusz Bagiński · 25d

Computers are getting smarter, and making entities smarter than yourself, which you don't understand, is very unsafe.

Posts (sorted by new)

Reasons to sign a statement to ban superintelligence (+ FAQ for those on the fence) · 83 karma · 18d · 4 comments
Safety researchers should take a public stance · 237 karma · 1mo · 65 comments
Counter-considerations on AI arms races · 23 karma · 6mo · 0 comments
Comprehensive up-to-date resources on the Chinese Communist Party's AI strategy, etc? (Q) · 14 karma · 6mo · 6 comments
Goodhart Typology via Structure, Function, and Randomness Distributions · 35 karma · 7mo · 1 comment
Bounded AI might be viable · 24 karma · 8mo · 4 comments
Less Anti-Dakka · 57 karma · 1y · 5 comments
Some Problems with Ordinal Optimization Frame · 9 karma · 1y · 0 comments
What are the weirdest things a human may want for their own sake? (Q) · 7 karma · 2y · 16 comments
Three Types of Constraints in the Space of Agents (Ω) · 26 karma · 2y · 3 comments

Wikitag Contributions

5-and-10 · 4 months ago
Alien Values · 5 months ago · (+23/-22)
Corrigibility · 7 months ago · (+119)
Corrigibility · 7 months ago · (+12/-13)
AI Services (CAIS) · 9 months ago · (+7/-8)
Tool AI · 10 months ago
Quantilization · a year ago · (+111/-5)
Tiling Agents · a year ago · (+15/-9)
Tiling Agents · a year ago · (+3/-2)
Updateless Decision Theory · 2 years ago · (+9/-4)