Comments

Deconfusing ‘AI’ and ‘evolution’
jtuffy117@gmail.com · 3mo

To be honest, you are not actually responding to ideas in this essay. That's okay, just want to flag this.

I’m sorry this was your takeaway, but feel free to return to my OP for deeper reflection at any point. The general idea, which I tried to put sensitively, is that you are the one “misunderstanding evolution” at a rather deep level.


And yes, I saw your comment; that was partly what I was replying to.

Deconfusing ‘AI’ and ‘evolution’
jtuffy117@gmail.com · 3mo

Responding to some of the overarching evolution ideas.

A while back I went down the rabbit hole of “how life started” and how this incredible evolutionary process, in feats only describable as magic, was somehow able to turn dirt into conscious, questioning beings, which might then eventually evolve into “basically god”, tracing a path all the way from the original LUCA. What an astounding phenomenon of the universe.

Maybe what helped lift this naivety, and I think Nexus explained it well, is that “evolutionary” selection algorithms are just one type of optimization tool/pattern, and comparatively not even a very good one. Evolution’s “prime quality” is simply that it is the only one that seems to occur naturally (in the absence of design). Actually (again referencing Nexus), evolutionary algorithms are rather brutal by comparison: think of the predator/prey scenarios we see in nature.
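To make the “one type of optimization tool/pattern” point concrete, here is a toy sketch in Python (everything in it is made up for illustration): a population of candidate solutions, random mutation, and a selection rule crudely standing in for predator/prey pressure, all optimizing an arbitrary fitness function.

```python
import random

def fitness(x):
    # Toy objective: get x close to 42. The evolutionary "pattern"
    # doesn't care what it is optimizing.
    return -abs(x - 42)

def evolve(generations=200, pop_size=20, mutation_scale=1.0):
    # Start from a random population of candidate solutions.
    population = [random.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep only the fitter half (crude predator/prey pressure).
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Variation: survivors reproduce with random mutation.
        children = [s + random.gauss(0, mutation_scale) for s in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # wanders toward 42, slowly and wastefully
```

Gradient descent or even plain grid search would solve this toy problem far more efficiently, which is the sense in which evolution is not a very good optimizer; its distinction is just that it runs without a designer.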

Responding to the essay, I’m not sure there is good reason to believe evolution or evolutionary algorithms will play much of a role in future AI, at least not in the manner suggested. Software is already copied with perfect fidelity a gazillion times a day, and we have solved this control problem handily: cosmic radiation is not going to “flip bits” and cause evolution, and neither will hardware churn. The same extends to model weights.
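As a concrete illustration of what I mean by perfect-fidelity copying (a sketch, not any particular real pipeline): copies get verified against a content hash, so a flipped bit is detected and the copy is redone rather than silently inherited, which is exactly what removes the “random variation” ingredient.

```python
import hashlib

def copy_with_verification(src_bytes: bytes) -> bytes:
    # Hash the source, make the copy, hash the copy, compare.
    # A cosmic-ray bit flip in transit changes the digest, so the
    # corrupted copy is rejected and retried, never propagated.
    expected = hashlib.sha256(src_bytes).hexdigest()
    copied = bytes(src_bytes)  # stand-in for the actual transfer
    if hashlib.sha256(copied).hexdigest() != expected:
        raise IOError("corruption detected; retry the copy")
    return copied

weights = b"\x00\x01\x02"  # stand-in for a serialized model checkpoint
assert copy_with_verification(weights) == weights
```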

The other primary ingredient for evolution, aside from random variation (which we control for), is a selection function, which also loses meaning when AI instances are temporary, short-lived, etc.

So it seems more likely that development beyond this point is going to be intentional, and if we do eventually enter some new paradigm where model weights (or whatever substrate) can change dynamically and fluidly, this would still not imply evolution. 
 

A Bear Case: My Predictions Regarding AI Progress
jtuffy117@gmail.com · 7mo

I don’t agree that there is no conceivable path forward with current technology. This perspective seems too focused on the diminishing returns of base LLM models (e.g. 4.5 over 4). You brought up CoT and the limited reasoning window, but I could imagine that being solved fairly easily with some type of master/sub-task layering (rough sketch below), and I also believe some of those issues could be solved with brute scale anyway.

You also critique the newer models as “Frankenstein”, but I think OAI is right about that as an evolution. Basic models should have basic input and output functionality, just as computers do. “Models” don’t need to be pure token generators and can have templates: that is good and fine. A model should, e.g., “clean” or interpret user inputs before acting on them, and those can be two separate functions. Also, some of the diminishing returns you notice in text output are a consequence of the model perfecting: there are no grammar or text mistakes, the writing is rich and structured, and it even has markdown.
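On that master/sub-task layering idea, here is roughly what I mean, as a hypothetical sketch (call_model is a stand-in for whatever completion API you like, not a real one): a master step decomposes the problem, each sub-task runs in a fresh context, and only short results flow back up, so no single call needs a huge reasoning window.

```python
def call_model(prompt: str) -> str:
    # Stand-in for any LLM completion call; not a real API.
    raise NotImplementedError

def solve(task: str) -> str:
    # Master step: decompose the task into small sub-tasks, one per line.
    plan = call_model(f"Break this task into small independent steps:\n{task}")
    steps = [line.strip() for line in plan.splitlines() if line.strip()]

    # Each sub-task gets its own fresh context window.
    results = [call_model(f"Do this step and report the result briefly:\n{s}")
               for s in steps]

    # Master step again: combine only the short per-step results,
    # so the final call never has to hold the full working of each step.
    summary = "\n".join(f"- {s}: {r}" for s, r in zip(steps, results))
    return call_model(f"Combine these step results into a final answer:\n{summary}")
```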

When we talk about AGI (not ASI), this is a line I think we have already effectively crossed. We don’t have sentient AI, but we have human-level reasoning and intelligence that can act as directed on 99% of human-domain tasks. Whether or not the AI questions life on its own isn’t a qualifier.

For ASI, here are my thoughts: the o-series models are effectively step functions (or patterns of subprocesses, or templates). These templates currently encode human-defined reasoning, since someone at OAI probably wrote them. If an AI orchestrator can begin experimenting on and updating its own logic functions iteratively, we might see novel thought patterns form, sub-functions we don’t fully understand working in concert, and new reasoning patterns emerging on their own. There is still some theoretical boundary where this process begins self-replicating / adapting on its own, which may still be a ways away, but at that point it is a matter of feeding it energy and compute resources and letting it do its thing. Realistically, I doubt we will ever require a scale of energy so unreasonable as to stop progress, assuming we see Moore’s law improvements like we do with everything else in tech.
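Very roughly, the loop I’m imagining looks something like this (a hypothetical sketch with made-up names; the evaluation step is doing all the real work): the orchestrator keeps a library of reasoning templates, proposes variants of them, and keeps whichever version scores better on a fixed set of tasks.

```python
import random

def run_with_template(template: str, task: str) -> float:
    # Stand-in: apply a reasoning template to a task and return a score.
    raise NotImplementedError

def propose_variant(template: str) -> str:
    # Stand-in: ask the model to rewrite one of its own templates
    # ("check assumptions before step 2", "split the search step", etc.).
    raise NotImplementedError

def improve_templates(templates: list[str], tasks: list[str], rounds: int = 10):
    for _ in range(rounds):
        i = random.randrange(len(templates))
        candidate = propose_variant(templates[i])
        # Keep the variant only if it scores better across the benchmark tasks.
        old_score = sum(run_with_template(templates[i], t) for t in tasks)
        new_score = sum(run_with_template(candidate, t) for t in tasks)
        if new_score > old_score:
            templates[i] = candidate
    return templates
```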

Ultimately, I think you’re undervaluing how effective functions working in concert could be at overcoming the few finer frictions that remain, and I would counter that I can conceive of a realistic pathway to ASI without needing that many further breakthroughs.
