Mateusz Bagiński

Agent foundations, AI macrostrategy, civilizational sanity, human enhancement.

I endorse and operate by Crocker's rules.

I have not signed any agreements whose existence I cannot mention.

Mateusz Bagiński's Shortform (2 points · 3y · 33 comments)

Comments (sorted by newest)

Why's equality in logic less flexible than in category theory?
Mateusz Bagiński · 2d · 20

Why do you want this notion of equivalence or adjunction, rather than the stricter notion of isomorphism of categories?

As far as I can tell, the context of discovery in category theory is mostly category theorists noticing that a particular kind of abstract structure occurs in many different contexts and thus deserves a name. The context of justification is mostly category theorists using a particular definition in various downstream constructions and showing how things fit together nicely and globally, with everything being reflected in everything else (the primordial ooze), that sort of stuff.

To give an example, if you have a category C with all binary products and coproducts, you can conceive of them as functors from the product category C×C to C itself. You can also define a "diagonal functor" Δ : C → C×C that just "copies" each object and morphism: ΔA = ⟨A, A⟩. It turns out that the coproduct is its left adjoint and the product is its right adjoint: ⊔ ⊣ Δ ⊣ ×.
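
As a concrete sketch (using Haskell's types-and-functions as a stand-in for a general C, with Either playing the role of ⊔ and pairing the role of ×), the two hom-set bijections underlying ⊔ ⊣ Δ ⊣ × look like this:

```haskell
-- Hom(A ⊔ B, C) ≅ Hom(A, C) × Hom(B, C): the coproduct is left adjoint to Δ.
fromCoprod :: (Either a b -> c) -> (a -> c, b -> c)
fromCoprod f = (f . Left, f . Right)

toCoprod :: (a -> c, b -> c) -> (Either a b -> c)
toCoprod (f, g) = either f g

-- Hom(C, A) × Hom(C, B) ≅ Hom(C, A × B): the product is right adjoint to Δ.
toProd :: (c -> a, c -> b) -> (c -> (a, b))
toProd (f, g) = \x -> (f x, g x)

fromProd :: (c -> (a, b)) -> (c -> a, c -> b)
fromProd h = (fst . h, snd . h)
```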

Now, if you fix any particular object X and think of the product with X as an endofunctor on C, (− × X) : C → C, and of exponentiation by X as another endofunctor, (−)^X : C → C, then these two again form an adjunction: (− × X) ⊣ (−)^X. Using the definition of an adjunction in terms of hom-set isomorphisms, this is just the currying thing: Hom_C(A × X, B) ≅ Hom_C(A, B^X). In fact, this adjunction can be used as the basis to define the exponential object. For example, here's an excerpt from Sheaves in Geometry and Logic.
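
Concretely, in the same Haskell stand-in for a general C, the two directions of that isomorphism are just currying and uncurrying (the functions below are the Prelude's curry and uncurry, renamed for illustration):

```haskell
-- Hom(A × X, B) ≅ Hom(A, B^X): (− × X) is left adjoint to (−)^X.
curryHom :: ((a, x) -> b) -> (a -> (x -> b))
curryHom f = \a x -> f (a, x)

uncurryHom :: (a -> (x -> b)) -> ((a, x) -> b)
uncurryHom g = \(a, x) -> g a x
```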

Finite Factored Sets
Mateusz Bagiński · 2d · 20

Actually, there's a more interesting way to go from partition-as-epi to partition-as-coproduct-of-inclusions. You just pull back the identity morphism 1_I along the epi e : S ↠ I. The pullback of the identity morphism is the partition-as-coproduct-of-inclusions.
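
Spelled out in Set (a sketch of the same observation, assuming the usual elementwise description of pullbacks):

```latex
% The pullback of id_I along e: S ->> I, decomposed over I into the fibers of e.
S \times_I I \;=\; \{(s, i) \in S \times I \mid e(s) = \mathrm{id}_I(i) = i\}
  \;\cong\; \coprod_{i \in I} e^{-1}(i),
% and the projection (s, i) \mapsto s restricts, on each summand, to the
% inclusion e^{-1}(i) \hookrightarrow S, i.e. the partition as a coproduct of inclusions.
```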

The Most Common Bad Argument In These Parts
Mateusz Bagiński · 2d · 172

Good post!

Why did you call it "exhaustive free association"? I would lean towards something more like "arguing from (falsely complete) exhaustion".

Re it being almost good reasoning: the main thing that makes it good rather than bad reasoning is having a good model of the domain, so that you actually have good reasons to think your hypothesis space is exhaustive.

Jan_Kulveit's Shortform
Mateusz Bagiński · 5d · 20

As far as I understand, at least one of the authors has an unusual moral philosophy such as not believing in consciousness or first-person experiences, while simultaneously believing that future AIs are automatically morally worthy simply by having goals.

[narrow point, as I agree with most of the comment]

For what it's worth, this seems to imply that illusionism (roughly, the position of people who, in a meaningful sense, "don't believe in consciousness") makes people more inclined to act in ethically deranged ways. As far as I can tell, this mostly isn't the case: I've known a few illusionists (I was one myself until ~1 year ago) and, afaict, they were all decent people, no less decent than the average of my social surroundings.

To give an example, Dan Dennett was an illusionist and very much not a successionist. Similarly, I wouldn't expect any successionist aspirations from Keith Frankish.

There are caveats, though, in that I do think a sufficient combination of individually fine, even plausibly true ideas (illusionism, moral antirealism, ...) and some other stuff (character traits, paycheck, social milieu) can get people into pretty weird moral positions.

TsviBT's Shortform
Mateusz Bagiński · 5d · 40

So there's steelmanning, where you construct a view that isn't your interlocutor's but is, according to you, more true / coherent / believable than your interlocutor's.

[nitpick] while also being close to your interlocutor's (perhaps so that your interlocutor's view could be seen as the steelmanned view with added noise, passed through Chinese whispers, or otherwise degenerated).

A proposed term

Exoclarification? Alloclarification? Democlarification (dēmos - "people")?

Ontological Cluelessness
Mateusz Bagiński · 7d · 40

Perhaps another example, though not quite analytic philosophy but rather a neo-religion: Discordianism.

Specifically, see here: https://en.wikipedia.org/wiki/Principia_Discordia#Overview 

Shortest damn doomsplainer in world history
Mateusz Bagiński · 7d · 20

Computers are getting smarter, and making entities smarter than yourself, which you don't understand, is very unsafe.

IABIED and Memetic Engineering
Mateusz Bagiński · 11d · 20

Scott criticizes the Example ASI Scenario as the weakest part of the book; I think he’s right, it might be a reasonable scenario but it reads like sci-fi in a way that could easily turn off non-nerds. That said, I’m not sure how it could have done better.

I think the scenario in VIRTUA requires remarkably little suspension of disbelief; it's still "sci-fi-ish", but less "sci-fi-ish" than the one in IABIED (according to my model of the general population), and leads to ~doom anyway.

(I feel like I’m groping for a concept analogous to an orthogonal basis in linear algebra -- a concept like “the minimal set of words that span an idea” -- and the title “If Anyone Builds It, Everyone Dies” almost gets there)

You don't need orthogonality to get a minimal set that spans some idea/subspace/whatever.
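
For instance, a basis (a minimal spanning set) need not be orthogonal:

```latex
% A standard example: a minimal spanning set of R^2 that is not orthogonal.
\{(1,0),\ (1,1)\} \text{ is a basis of } \mathbb{R}^2,
  \text{ yet } \langle (1,0), (1,1) \rangle = 1 \neq 0.
```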

shortplav
Mateusz Bagiński · 11d · 63

Also, it would be good to deconflate the things that these days go by "AI agents" and "Agentic™ AI", because conflating them makes people think that the former are (close to being) examples of the latter. Perhaps we could rename the former to "AI actors" or something.

(Sidenote: Both "agent" and "actor" derive from the Latin agere, meaning "to drive, lead, conduct, manage, perform, do". Coincidentally, the word "robot" was coined from the Czech "robota", meaning "work", which is also related to "robit", meaning "to do" (similar words mean "to do" in many other Slavic languages).)

shortplav
Mateusz Bagiński · 11d · 90
  1. "SLT" as "Singular Learning Theory" →"SiLT"
  2. "SLT" as "Statistical Learning Theory" →"StaLT"
  3. "SLT" as "Sharp Left Turn" →"ShaLT"

https://www.lesswrong.com/posts/thXohzXrWCA2EhZCH/mateusz-baginski-s-shortform?commentId=nacqGC5aHii7yzJCg 

Posts (sorted by new)

Reasons to sign a statement to ban superintelligence (+ FAQ for those on the fence) (56 points · 6h · 3 comments)
Safety researchers should take a public stance (230 points · 24d · 65 comments)
Counter-considerations on AI arms races (23 points · 5mo · 0 comments)
Comprehensive up-to-date resources on the Chinese Communist Party's AI strategy, etc? [Question] (14 points · 6mo · 6 comments)
Goodhart Typology via Structure, Function, and Randomness Distributions (35 points · 7mo · 1 comment)
Bounded AI might be viable (24 points · 7mo · 4 comments)
Less Anti-Dakka (55 points · 1y · 5 comments)
Some Problems with Ordinal Optimization Frame (9 points · 1y · 0 comments)
What are the weirdest things a human may want for their own sake? [Question] (7 points · 2y · 16 comments)
Three Types of Constraints in the Space of Agents [Ω] (26 points · 2y · 3 comments)

Wikitag Contributions

5-and-10 · 3 months ago
Alien Values · 5 months ago · (+23/-22)
Corrigibility · 7 months ago · (+119)
Corrigibility · 7 months ago · (+12/-13)
AI Services (CAIS) · 9 months ago · (+7/-8)
Tool AI · 9 months ago
Quantilization · 10 months ago · (+111/-5)
Tiling Agents · a year ago · (+15/-9)
Tiling Agents · a year ago · (+3/-2)
Updateless Decision Theory · 2 years ago · (+9/-4)