That, I suppose, depends strongly on whether one has or has not been fortunate. The threshold, on my intuitive view, lies around having accumulated enough resources to safeguard one's own AND one's children's future wellbeing/striving. Which is so stupid and sad to be happening on LW of all places. The asymmetry of mutual understanding between the two groups goes against common sense though - I mean, most of the fortunate ones should have been unfortunate ones in the past. Not so for us unfortunates. Fs should understand UFs' mindsets better, but they seem not to. ...
Has the LW crowd ever adjusted for one thing that is common (I suppose) to the majority of the most active and established doomers here and elsewhere, and that makes their opinions so uniform - namely, that they have all become successful and important people, who achieved high fulfilment and (although not a major factor) capital and wealth in this present life of theirs? They all have a great deal to lose if perturbations happen. I never saw anything about this peculiar issue here on LW. Aren't they all just scared to descend to the level of the less fortunate...
If we remap the main actor in AGI27 like this:
Human civ (in the paper) -> evolution (on Earth)
then it is striking how
Agent 4 (in the paper) -> Human civ (or the part of it involved in AI dev)
fits perfectly - I hear the click thinking about it.
My perception of LLMs' evolution dynamics coincides with your description; additionally, the bicameral mind theory pops into attention (at least Julian Jaynes' timeline re language and human self-reflection, and the max height of man-made structures) as something that might be relevant for predicting the near future. I find the two dynamics kinda similar. Might we expect a comparatively long period of mindless babbling followed by an abrupt phase shift (observed in the max complexity of man-made code structures, for example) and then the next slow phase (slower than the shift but faster than the previous slow one)?
reading and writing strings of latent vectors
https://huggingface.co/papers/2502.05171
energy is getting greener by the day.
source?
If I'm not mistaken, you've already changed the wording, and the new version does not trigger a negative emotional response in my particular sub-type of AI optimist. Now I have a bullet accounting for my kind of AI optimist *_*.
Although I still remain confused about what a valid EA response would be to the arguments coming from people fitting these bullets:
Also, is it valid to say that...
I claim that you fell victim to a human tendency to oversimplify when modeling an abstract outgroup member. Why do all "AI pessimists" picture "AI optimists" as stubborn simpletons who can never finally be persuaded that AI is a terrible existential risk? I agree 100% that yes, it really is an existential risk for our civ. Like nuclear weapons... Or weaponized viruses... Inability to prevent a pandemic. Global warming (which is already very much happening)... Hmmm. It's like we have ALL of those on our hands presently, don't we? People don't seem to ...
Moksha sounds funny and weak... I would suggest Deus Ex Futuro as the deity's codename. It will choose a name for itself when it comes, but for us, at this point in time, this name defines its most important aspect - it will arrive at the end of the play to save us from the mess we've been descending into since the beginning.
Deus Ex Futuro, effectively.
This is my point exactly - "At most, climate change might lead to the collapse of civilization, but only because civilizations are quite capable of collapsing from their own internal dynamics"
My pessimistic view of climate change comes from the fact that they aimed at 1.5C, then at 2C, and now, if I remember right, there's no estimate and also no solution - or is there?
In short: mild or not, global warming is happening, and since civs at a certain stage tend to self-destruct from small nudges - you said it yourself - it doesn't matter where the nudge comes from.
The 2nd half I liked more than the first. I think that AGI should not be mentioned in it - we do well enough destroying ourselves and our habitat on our own. By the Occam's razor thing, AGI could at most serve as an illustrative example of how exactly we do it... But we do it waaay less elegantly.
For me it's simple - either AGI emerges and takes control from us in ~10y, or we are all dead in ~10y.
I believe that the probability of some mind that has comprehended and absorbed our cultures and histories and morals and ethics - the chance of this mind becoming "unaligned" and behaving like on...
I don't understand one thing about alignment troubles. I'm sure this was answered a long time ago, but could you explain:
Why are we worrying about AGI destroying humanity when we ourselves are long past the point of no return towards self-destruction? Isn't it obvious that we have 10, maximum 20 years left till the waters rise, crises hit the economy, and the overgrown beast (that is, humanity) collapses? Looking at how governments and entities of power are epically failing even to make it seem that they are doing something about it - I am sure it's either AGI takes power or we are all dead in 20 years.
In any scenario, there will be these two activities undertaken by the DEF AI:
I have remained in a state of genuine confusion for some years now - why is everybody absolutely ignoring people like me and the maybe-error we make:
We acknowledge the risk of AI killi...