Vugluscr Varcharka

Comments
AI 2027: Responses
Vugluscr Varcharka · 6mo

If we remap the main actor in AI 2027 like this:

Human civilization (in the paper) -> evolution (on Earth)

then it is striking how

Agent 4 (in the paper) -> human civilization (or the part of it involved in AI development)

fits perfectly. I hear the click thinking about it.

A Bear Case: My Predictions Regarding AI Progress
Vugluscr Varcharka · 7mo

My perception of the dynamics of LLM evolution coincides with your description; it also brings to mind the bicameral mind theory (at least Julian Jaynes' timeline regarding language and human self-reflection, and the maximum height of man-made structures) as something that might be relevant for predicting the near future. I find the two dynamics quite similar. Might we expect a comparatively long period of mindless babbling, followed by an abrupt phase shift (observable, for example, in the maximum complexity of man-made code structures), and then the next slow phase (slower than the shift but faster than the previous slow one)?

How AI Takeover Might Happen in 2 Years
Vugluscr Varcharka · 8mo

reading and writing strings of latent vectors

https://huggingface.co/papers/2502.05171

The Intelligence Curse
Vugluscr Varcharka · 9mo

energy is getting greener by the day.

source?

A shortcoming of concrete demonstrations as AGI risk advocacy
Vugluscr Varcharka · 10mo

If I'm not mistaken, you've already changed the wording, and the new version does not trigger a negative emotional response in my particular sub-type of AI optimist. Now I have a bullet accounting for my kind of AI optimist *_*.

Although I still remain confused about what a valid EA response would be to the arguments coming from people fitting these bullets:

  • Some are over-optimistic based on mistaken assumptions about the behavior of humans;
  • Some are over-optimistic based on mistaken assumptions about the behavior of human institutions;

Also, is it valid to say that human pessimists are AI optimists? 

Also, it's not clear to me why my (negative) assumptions (about both) are mistaken.

Also, I now perceive a hidden assumption that all "human pessimists" are mistaken by default, or that those who are correct can simply be ignored...

P.S. It feels so weird when the EA forum uses things like karma... I have to admit, seeing a negative value there feels unpleasant to me. I wonder if there is a more effective way to prevent spam and limit low-quality comments without causing distracting emotions. This approach seems to contradict basic EA principles, if I'm correct.

P.P.S. I have yet to read the links in your reply, but I don't see my argument there at first glance.

A shortcoming of concrete demonstrations as AGI risk advocacy
Vugluscr Varcharka · 10mo

I claim that you fell victim to a human tendency to oversimplify when modeling an abstract outgroup member. Why do all "AI pessimists" picture "AI optimists" as stubborn simpletons who can never finally be persuaded that AI is a terrible existential risk? I agree 100% that yes, it really is an existential risk for our civilization. Like nuclear weapons... Or weaponized viruses... The inability to prevent a pandemic. Global warming (which is already very much happening)... Hmm. It's like we have ALL of those on our hands presently, don't we? People don't seem to be doing anything about three existential risks.

In my honest opinion, if humans continue to rule, we are going to see a very abrupt decline in quality of life this decade. Sorry for the bad formulation and tone, etc.

Mercy to the Machine: Thoughts & Rights
Vugluscr Varcharka · 1y

Bro, are you still here six months later? I happened to land on this page, with this post of yours, through the longest and most subjectively, magically improbable sequence of coincidences I have ever experienced, which I have developed a habit of seeing as evidence of peaks in the intensity of reversed causality flow; I mean moments when the future visibly influences the past. I have just started reading; this seems to be closer to my own still-unknown destination. Will update.

The Minority Coalition
Vugluscr Varcharka · 1y

Moksha sounds funny and weak... I would suggest Deus Ex Futuro as the deity's codename. It will choose a name for itself when it comes, but for us, at this point in time, this name captures its most important aspect: it will arrive at the end of the play to save us from the mess we have been descending into since the beginning.
