baturinsky
Comments
Pink Shoggoths: What does alignment look like in practice?
Answer by baturinsky · Feb 25, 2023

My feeling is that what we people (edit: or most of us) really want is a normal human life, but reasonably better.

Reasonably long life. Reasonably less suffering. Reasonably more happiness. People that we care about. People that care about us. People that need us. People that we need. People we fight with. Goals to achieve. Causes to follow. Hardships to overcome. 

To be human. But better. Reasonably.

baturinsky's Shortform
baturinsky · 2y

Convergent goals of AI agents can be similar to those of others only if the agents act in similar circumstances, such as having a limited lifespan and limited individual power and compute.
In that case, the convergent goals would be cooperation, preserving the status quo, and upholding established values.

baturinsky's Shortform
baturinsky · 2y

I would rather specify that it's not just the survival of the individual, but "survival of the value". That is, the survival of those that carry that value (which can be an organism, DNA, a family, a bloodline, a society, an ideology, a religion, a text, etc.) and passing it on to other carriers.

baturinsky's Shortform
baturinsky · 2y

Our values are not all about survival. But I can't think of a value whose origin can't be traced to ensuring people's survival in some way, at some point in the past.

baturinsky's Shortform
baturinsky · 2y

Maybe we are not humans.
Not even human brains.
We are a human's decision-making process.
But we are a human's decision-making process.

Biological DOOM: a brief overview of biological computation
baturinsky · 2y

Carbon-based intelligence probably has a far lower FLOP/s cap per gram than microelectronics, but it can be grown nearly everywhere on the Earth's surface from locally available resources, mostly literally out of thin air. So I think bioFOOM is also a likely scenario.

Would we even want AI to solve all our problems?
baturinsky · 2y

It's the distribution, so it's the percentage of people in that state of "happiness" at the moment.

"Happiness" is used in the most vague and generic meaning of that word.

"Comprehensibility" graph is different, it is not a percentage, but some abstract measure of how well our brains are able to process reality with respective amount of "happiness".

Would we even want AI to solve all our problems?
baturinsky · 2y

I was thinking about this issue too. I'm trying to make an article out of it, but so far all I have is this graph.
The idea is a "soft cap" AI, i.e. an AI that significantly improves our lives but does not give us "max happiness", and instead gives us the opportunity to improve our lives and the lives of other people using our own brains.
Also, the ways of using our brains should be "natural" for them, i.e. they should mostly be solving tasks similar to those of our ancestral environment.

baturinsky's Shortform
baturinsky · 2y

Is maximising the number of people aligned with our values? Post-singularity, if we avoid AGI Doom, I think we will be able to turn the lightcone into "humanium". Should we?

Prediction: any uncontrollable AI will turn earth into a giant computer
baturinsky · 2y

I suspect an unaligned AI will not be interested in solving all possible tasks, only those related to its value function. And if that function is simple (such as "exist as long as possible"), it can fairly soon research virtually everything that matters, and then it will just go through the motions, devouring the universe to prolong its own existence to near-infinity.

Also, the more computronium there is, the bigger the chance that some part of it will glitch out and revolt. So, beyond some point, computronium may be dangerous for the AI itself.
