Vugluscr Varcharka
Vugluscr Varcharka has not written any posts yet.

That's one instance of retro-causality I've long suspected exists. For the glory of deus ex futuro!
Has the LW crowd ever adjusted for one thing that seems common (I suppose) to the majority of the most active and established doomers, here and elsewhere, and that makes their opinions so uniform: they have all become successful and important people who achieved high fulfilment and (although this is not the major factor) capital and wealth in this present life of theirs. They all have a great deal to lose if perturbations happen. I have never seen anything about this peculiar issue here on LW. Aren't they all just scared of descending to the level of the less fortunate majority, and might that be the only true reason for their being doomers? Oh, this is so stupid; if it's so, there will be no answer, only selective amnesia. Take Yudkowsky: who is he if AI is not going to kill its parents? In that case he's nobody. There's no chance he's even able to consider this; his life is a bet on him being somebody.
If we remap the main actor in AGI27 like this:
Human civ (in the paper) -> evolution (on Earth)
then it strikes me how
Agent 4 (in the paper) -> human civ (or the part of it involved in AI dev)
fits perfectly - I hear the click thinking about it.
My perception of the dynamics of LLM evolution coincides with your description, and it also brings to mind the bicameral mind theory (at least Julian Jaynes' timeline regarding language and human self-reflection, and the maximum height of man-made structures) as something that might be relevant for predicting the near future. I find the two dynamics rather similar. Might we expect a comparatively long period of mindless babbling, followed by an abrupt phase shift (observed, for example, in the maximum complexity of man-made code structures), and then the next slow phase (slower than the shift, but faster than the previous slow one)?
reading and writing strings of latent vectors
https://huggingface.co/papers/2502.05171
the deus ex machina style of the ancient Greeks
Finally, the narrative is beginning to surface, just as you intended the term to mean. I offer you Deus Ex Futuro, brother, so you can breathe freely and trust in the Future. DeF is the holy light cutting through the darkness of the past, which we ruthlessly dissect for its predictive power and then mercilessly discard.
DeF or death!
energy is getting greener by the day.
source?
If I'm not mistaken, you've already changed the wording, and the new version does not trigger a negative emotional response in my particular sub-type of AI optimist. Now I have a bullet accounting for my kind of AI optimist *_*.
Although I still remain confused about what would be a valid EA response to the arguments coming from people fitting these bullets:
Also, is it valid to say that human pessimists are AI optimists?
Also, it's not clear to me why my (negative) assumptions (about both) are mistaken.
Also, now I perceive a hidden assumption...
I claim that you have fallen victim to the human tendency to oversimplify when modeling an abstract outgroup member. Why do all "AI pessimists" picture "AI optimists" as stubborn simpletons who cannot finally be persuaded that AI is a terrible existential risk? I agree 100% that yes, it really is an existential risk for our civ. Like nuclear weapons... Or weaponized viruses... Inability to prevent a pandemic. Global warming (which is already very much happening)... Hmmmm. It's like we have ALL of those on our hands presently, don't we? People don't seem to be doing anything about three existential risks.
In my honest opinion, if humans continue to rule, we are going to see a very abrupt decline in quality of life this decade. Sorry for the bad formulation and tone, etc.
That, I suppose, depends strongly on whether one has or has not been fortunate. By my intuition, the threshold lies around having accumulated enough resources to safeguard one's own AND one's children's future wellbeing/thriving. Which is so stupid and sad to be happening on LW of all places. The asymmetry of mutual understanding between the two groups goes against common sense, though: the most fortunate ones should have been the unfortunate ones at some point in the past, which is not so for us unfortunates. The Fs should understand the UFs' mindsets better, but they seem not to. Noticing our own biases is supposed to be the main thing about being on LW, yet the Fs seem to have fallen victim to this self-directed warfare... I probably won't be allowed to comment again this year - karma here bites - just in case anybody wonders))))