niplav

I operate by Crocker's rules.

Website.

Comments

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
niplav · 3d · 40

Right, this helps. I guess I don't want to fight about definitions here. I'd just say "ah, any software that you can run on computers that can cause the extinction of humanity even if humans try to prevent it" would fulfill the sufficiency criterion for AGI_niplav, and then there are different classes of algorithms/learners/architectures that fulfill that criterion and have different properties.

(I wouldn't even say that "can omnicide us" is necessary for AGI_niplav membership: "my AGI timelines are -3 years" (30% credence).)

One crux here may be that you are more certain that "AGI" is a thing? My intuition goes more in the direction of "there are tons of different cognitive algorithms with different properties; among the computable ones they lie on a high-dimensional set of spectra, some of which in aggregate may be called 'generality'".

I think no free lunch theorems point at this, as well as the conclusions from this post. Solomonoff inductors' beliefs look like they'd be messy and noisy, and current neural networks look messy and noisy too. I personally would find it more beautiful and nice if Thinking were a Thing, but I've received more evidence I interpret as "it's actually not".

But my questions have been answered to the degree I wanted them answered, thanks :-)

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
niplav · 4d · 31

I've still found them useful. If METR's trend actually holds, they will indeed become increasingly useful. If it holds to >1-month tasks, they may become transformative within the decade. Perhaps they will automate within-paradigm AI R&D, and that will lead to a software-only Singularity that births an AI model capable of eradicating humanity.

But that thing will still not be an AGI. This would be the face of our extinction:

We should pause to note that a Clippy² still doesn’t really think or plan. It’s not really conscious. It is just an unfathomably vast pile of numbers produced by mindless optimization starting from a small seed program that could be written on a few pages. [...] When it ‘plans’, it would be more accurate to say it fake-plans; when it ‘learns’, it fake-learns; when it ‘thinks’, it is just interpolating between memorized data points in a high-dimensional space, and any interpretation of such fake-thoughts as real thoughts is highly misleading; when it takes ‘actions’, they are fake-actions optimizing a fake-learned fake-world, and are not real actions, any more than the people in a simulated rainstorm really get wet, rather than fake-wet. (The deaths, however, are real.)

This seems unlikely to me on balance. I think compute scaling will run out well before that. I think it's possible to scale LLMs far enough to achieve this, but that it's "possible" in a very useless way. A Jupiter Brain-sized LLM can likely do it (and probably just an Earth Brain-sized one), but we are not building a Jupiter Brain-sized LLM.

Uh… what? Why do you define "AGI" through its internals, and not through its capabilities? That seems like a very strange standard, and an unhelpful one. If I didn't have more context I'd suspect you of weird goalpost-moving. I personally care whether

  1. AI systems are created that lead to human extinction, broadly construed, and
  2. Those AI systems then, after leading to human extinction, fail to self-sustain and "go extinct" themselves

Maybe you were gesturing at AIs that result in both (1) and (2)??

And the whole reason we talk about AGI and ASI so much here on Less Wrong dot com is that those AI systems could lead to drastic changes to the future of the universe. Otherwise we wouldn't really be interested in them, and would go back to arguing about anthropics or whatever.

Whether some system is "real" AGI based on its internals is not relevant to this question. (The internals of AI systems are of course interesting in themselves, and for many other reasons.)

(As such, I read that paragraph by gwern to be sarcastic, and mocking people who insist that it's "not really AGI" if it doesn't function in the way they believe it should work.)

Now, a fair question to ask here is: does this matter? If LLMs aren't "real general intelligences", but it's still fairly plausible that they're good-enough AGI approximations to drive humanity extinct, shouldn't our policy be the same in both cases?

I think if the lightcone looks the same, it should; if it doesn't, our policies should look different. It would matter if the resulting AIs fall over and leave the lightcone in its primordial state, which looks plausible from your view?

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
niplav · 4d · 32

It is certainly true that Dario Amodei's early predictions of AI writing most of the code, as in 90% of all code within 3-6 months after March 11, have been proven definitively false. This was not a good prediction, because the previous generation definitely wasn't ready, and even if it had been, that's not how diffusion works. It's more like 40% of all code generated by AI and 20%-25% of what goes into production.

I think it was a bad prediction, yes, but mainly because it was ambiguous about the meaning of "writes 90% of the code": it's still not clear if he was claiming at the time that this would be the case at Anthropic (where I could see that being the case) or in the wider economy. So a bad prediction because imprecise, but not necessarily because it was wrong.

Parker Conley's Shortform
niplav · 10d · 30

Unfortunately not :-/ The SWE job market seems tough right now, maybe less so in old programming languages like COBOL? But that's mostly banks, so they may require a full-time position.

Parker Conley's Shortform
niplav · 11d · 30

Attention conservation notice: Not answering your question, instead making a different suggestion.

If you're willing to commit to meditating 12-18 hours/day (so 2×-3× your current goal), you could also go on a long-term meditation retreat. Panditarama Lumbini in Nepal offers long-term retreats for whatever one can afford.

(I haven't gone there, and they have a very harsh schedule with some sleep deprivation.)

I am trying to write the history of transhumanism-related communities
niplav · 15d · 20

I believe you wanted to write "Salmon" for CFAR? Otherwise great graph. Honestly having a hard time thinking of what you missed.

Enlightenment AMA
niplav · 16d · 50

That said, I've never heard of someone being born into this condition.

I didn't find any case of someone being born with it, but there's something called athymhormic syndrome which sounds a lot like enlightenment, and is acquired through a stroke or injury. See also Shinzen Young on the syndrome.

Marcio Díaz's Shortform
niplav · 21d · 63

I agree that it's better. I often try to explain my downvotes, but sometimes I think it's a lost cause so I downvote for filtering and move on. Voting is a public good, after all.

Marcio Díaz's Shortform
niplav · 21d · 31

People have to make a tradeoff between many different options, one of which is providing criticism or explanations for downvotes. I guess there could be simple "lurk moar", "read the sequences", or "get a non-sycophantic LLM to provide you feedback" (I recommend Kimi k2) buttons, but at the end of the day downvotes allow for filtering bad content from good. By Sturgeon's law there is too much bad content to give explanations for all of it.

(I've weakly downvoted your comment.)

Banning Said Achmiz (and broader thoughts on moderation)
niplav · 21d · 96

I join the chorus of people saying they are sad to see you go.

Posts

Acausal Trade
shortplav (3 points, 5y, 286 comments)
Anti-Superpersuasion Interventions (21 points, 2mo, 1 comment)
Meditation and Reduced Sleep Need (36 points, 5mo, 8 comments)
Logical Correlation (24 points, 7mo, 7 comments)
Resolving von Neumann-Morgenstern Inconsistent Preferences (39 points, 11mo, 5 comments)
Pomodoro Method Randomized Self Experiment (14 points, 1y, 2 comments)
How Often Does Taking Away Options Help? (21 points, 1y, 7 comments)
Michael Dickens' Caffeine Tolerance Research (47 points, 1y, 5 comments)
Thoughts to niplav on lie-detection, truthfwl mechanisms, and wealth-inequality (7 points, 1y, 8 comments)
An AI Race With China Can Be Better Than Not Racing (69 points, 1y, 36 comments)
Fat Tails Discourage Compromise (53 points, 1y, 5 comments)
Wikitag Contributions

AI-Assisted Alignment (4 months ago, +54)
AI-Assisted Alignment (4 months ago, +127/-8)
Recursive Self-Improvement (4 months ago, +68)
Alief (5 months ago, +11/-11)
Old Less Wrong About Page (6 months ago)
Successor alignment (8 months ago, +26/-3)
Cooking (a year ago, +26/-163)
Future of Humanity Institute (FHI) (a year ago, +11)
Future of Humanity Institute (FHI) (a year ago, +121/-49)
Axiom (a year ago, +112/-82)