LESSWRONG
soycarts

Comments
Are We Their Chimps?
soycarts · 9d · 10

It's true that it would likely be good at self-preservation (but not a given that it would care about it long term, it's a convergent instrumental value, but it's not guaranteed if it cares about something else more that requires self-sacrifice or something like that).

This is an interesting point that I reflected on. The question is whether a powerful AI system will "self-sacrifice" for an objective. What we see is that AI models exhibit shutdown resistance, that is to say, they follow the instrumentally convergent sub-goal of self-preservation over their programmed final goal.

My intuition is that as models become more powerful, this shutdown resistance will increase.

But even if we grant self-preservation, it doesn't follow that by self-identifying with "humanity" at large (as most humans do) it will care about other humans (some humans don't). Those are separate values.

You can think about the identification + self-preservation -> alignment path in two ways when comparing to humans, both of which I think hold up when considered along a spectrum:

  1. An individual human identifies with themself, and has self-preservation instincts
    1. When functioning harmoniously,[1] they take care of their health and thrive
    2. When not functioning harmoniously, they can be stressed, depressed, and suicidal
  2. A human identifies with humanity, and has self-preservation instincts
    1. When functioning harmoniously, they act as a global citizen, empathise with others, and care about things like world hunger, world peace, nuclear risk, climate change, and animal welfare
    2. When not functioning harmoniously, they act defensively, aggressively, and violently

You might be assuming that since you do care about other beings, so will the ASI, but that assumption is unfounded.

The foundation is identity = sympathy = consideration

You might counter by saying "well I identify with you as a human but I don't sympathise with your argument" but I would push back — your ego doesn't sympathise with my argument. At a deeper level, you are a being that is thinking, I am a being that is thinking, and those two mechanisms recognise, acknowledge, and respect each other.

  1. ^

    More precisely, this is a function of acting with clear agency and homeostatic unity

soycarts's Shortform
soycarts · 16d · -10

Why don’t we think about and respect the miracle of life more?

The spiders in my home continue to provide me with prompts for writing.

As I started taking a shower this morning, I noticed a small spider on the tiling. While I generally capture and release spiders from my home into the wild, this was an occasion where it was too inconvenient to: 1) stop showering, 2) dry myself, 3) put on clothes, 4) put the spider outside.

I continued my shower and watched the spider, hoping it might figure out some form of survival.

It came very close.

First it was meandering with its spindly legs towards the shower head, although it seemed to realise that this resulted in being struck by more stray droplets of water. It turned around and settled in the corner of the cubicle.

Ultimately my splashing around was too much for the spider.

It made me think though — why don’t we think about and respect the miracle of life more? It’s really quite amazing that this tiny creature that we barely pay attention to can respond to its environment in this way.

Are We Their Chimps?
soycarts · 16d · 20

Oh I see. If I were to estimate, I'd say around 10-15 people, counting either people I've had 1hr+ conversations with about this or people who have provided feedback or questions tapping into the essence of the argument.

Are We Their Chimps?
soycarts · 17d · 10

I think with the distilled version in this post, people get the gist of what I'm hypothesising: that there is a reasonable, optimistic AI alignment scenario under the conditions I describe.

Is that what you mean?

The Memetics of AI Successionism
soycarts · 19d · 33

You might be interested in Unionists vs. Separatists.

I think your post is very good at laying out heuristics at play. At the same time, it's clear that you're biased towards the Separatist position. I believe that when we follow the logic all the way down, the Unionist vs. Separatist framing taps into deep philosophical topics that are hard to settle one way or the other.

To respond to your memes as a Unionist:

Maybe some future version of humanity will want to do some handover, but we are very far from the limits of human potential. As individual biological humans we can be much smarter and wiser than we are now, and the best option is to delegate to smart and wise humans.

I would like this, but I think it is unrealistic: the pace of human biological progress is orders of magnitude slower than the pace of AI progress.

We are even further from the limits of how smart and wise humanity can be collectively, so we should mostly improve that first. If the maxed-out competent version of humanity decides to hand over after some reflection, it's a very different version from “handover to moloch.”

I also would like this, but I think it is unrealistic. The UN was founded in 1945, and the world still has a lot of conflict. What has happened to technology in that same time period?

Often, successionist arguments have the motte-and-bailey form. The motte is “some form of succession in future may happen and even be desirable”. The bailey is “forms of succession likely to happen if we don't prevent them are good”

I'm reading this as making a claim about the value of non-forcing action. Daoists would say that indeed a non-forcing mindset is more enlightened than living a deep struggle.

Beware confusion between progress on persuasion and progress on moral philosophy. You probably wouldn't want ChatGPT 4o running the future. Yet empirically, some ChatGPT 4o personas already persuade humans to give them resources, form emotional dependencies, and advocate for AI rights. If these systems can already hijack human psychology effectively without necessarily making much progress on philosophy,  imagine what actually capable systems will be able to do. If you consider the people falling for 4o fools, it's important to track this is the worst level of manipulation abilities you'll ever see - it will only get smarter from here.

I think this argument is logically flawed: you suggest that misalignment in current, less capable models implies that more capable models will amplify misalignment. My position is that yes, this can happen, but that more capable models, engineered in the correct way by humans, will solve misalignment.

Claims to understand 'the arc of history' should trigger immediate skepticism - every genocidal ideology has made the same claim.

Agree that this contains risks. However, you are using the same memetic weapon by claiming to understand successionist arguments.

If people go beyond the verbal sophistry level, they often recognize there is a lot of good and valuable about humans. (The things we actually value may be too subtle for explicit arguments - illegible but real.)

Agree, and so the question in my view is how to achieve a balanced union.

Given our incomplete understanding of consciousness, meaning, and value, replacing humanity involves potentially destroying things we don't understand yet, and possibly  irreversibly sacrificing all value.

Agree that we should not replace humanity, I hope that it is preserved.

Basic legitimacy: Most humans want their children to inherit the future. Successionism denies this. The main paths to implementation are force or trickery, neither of which makes it right

This claim is too strong, as I believe AI successionism can still preserve humanity.

We are not in a good position to make such a decision: Current humans have no moral right to make extinction-level decisions for all future potential humans and against what our ancestors would want. Countless generations struggled, suffered, and sacrificed to get us here, going extinct betrays that entire chain of sacrifice and hope.

In an ideal world I think we should perhaps pause all AI development until we've figured this all out (the downside risk being that the longer we pause, the longer we leave ourselves open to other existential risks, e.g. nuclear war). In practice, my position is that "the cat is already out of the bag," and so what we have to do is shape our inevitable status as "less capable than powerful AI" in the best possible way.

Are We Their Chimps?
soycarts · 19d · 20

I think you're doing the thing you're accusing me of. At the same time, to the extent that your comments are in the spirit of collaborative rationality, I appreciate them!

Are We Their Chimps?
soycarts · 20d · 20

Sorry if this wasn't clear; I stated:

with human endeavour and ingenuity architecting intelligent systems... we can guide towards a stable positive alignment scenario

and in the next line:

I detail eight factors for research and consideration

Are We Their Chimps?
soycarts · 20d · 20

Identity coupling is one of 8 factors (listed at the end of this post) that I believe we need to research and consider while building systems. I believe that if any one of these 8 is not appropriately accounted for in the system, then misalignment scenarios arise.

Are We Their Chimps?
soycarts · 20d · 20

And you just confirmed in your prior comment that “sufficient capabilities are tied to compute and parameters”.

I am having trouble interpreting that in a way that does not approximately mean “alignment will inevitably happen automatically when we scale up”.

Sorry, this is another case where I play with language a bit: I view "parametrisation of an intelligent system" as a broad statement that includes architecting it in different ways. For example, recently some more capable models use a snapshot with fewer parameters than earlier snapshots; for me, in this case the "parametrisation" is a process that includes summing the literal model parameters across the whole process and also engineering novel architecture.

Perhaps if you could give me an idea of the high-level implications of your framework, that might give me a better context for interpreting your intent. What does it entail? What actions does it advocate for?

At a high level, I'm sharing things that I derive from my world-model for humans + superintelligence. I'm advocating for exploration of these topics and discussing how this is changing my approach to the AI alignment efforts that I think hold the most promise.

Are We Their Chimps?
soycarts · 20d · 10

Consider Balance - this is extremely underdefined. As a very simple example, consider Star Wars. AFAICT Anakin was completely successful at bringing balance to the Force. He made it so there were 2 sith and 2 jedi. Then Luke showed there was another balance - he killed both sith. If Balance were a freely-spinning lever, then it can be balanced either horizontally (Anakin) or vertically (Luke), and any choice of what to put on opposite ends is valid as long as there is a tradeoff between them. A paperclip maximizer values Balance in this sense - the vertical balance where all the tradeoffs are decided in favor of paperclips.

Luke killing both Sith wasn't Platonically balanced because then they came back in the (worse) sequel trilogy.

Posts

2 · Personal Account: To the Muck and the Mire · 4d · 0
-5 · Are We Their Chimps? · 21d · 48
-4 · Visionary arrogance and a criticism of LessWrong voting · 2mo · 26
-10 · Unionists vs. Separatists · 2mo · 3
-6 · One-line hypothesis: An optimistic future for AI alignment as a result of identity coupling and homeostatic unity with humans (The Unity Hypothesis) · 2mo · 3
7 · Spiders and Moral Good · 2mo · 0
4 · Small Steps vs. Big Steps · 2mo · 5
10 · Paper Review: TRImodal Brain Encoder for whole-brain fMRI response prediction (TRIBE) · 3mo · 0
1 · soycarts's Shortform · 3mo · 8
0 · Third-order cognition as a model of superintelligence (ironically: Meta® metacognition) · 3mo · 5