
lumpenspace

Comments
Consider chilling out in 2028
lumpenspace · 2mo · -6 · 0
A deep critique of AI 2027’s bad timeline models
lumpenspace · 2mo · -2 · 0

great work. i'd like to contribute to your future research; please share a bitcoin address. i am not here often, but you can contact me on X (same username).

Winning the power to lose
lumpenspace · 3mo · 1 · 0

Darwinism is, quite simply, the theory that evolution proceeds through the mechanisms of variation and selection. I read Mary Douglas too, btw, but your “any observable feature” is clearly not a necessity even for the staunchest Dalton/Dawkins fan, and I am frankly puzzled that such an obviously tendentious reading could be upvoted so much.

I have of course read Koonin—not the worst among those still trying to salvage Lewontin, but not really relevant to the above either. No one is arguing that all phenotypes currently extant confer specific evolutionary advantages.

Winning the power to lose
lumpenspace · 3mo · -5 · 0
Winning the power to lose
lumpenspace · 3mo · 1 · 0

I'm not sure how this relates to my point. Darwinism clearly led to increased complexity; intelligence, other traits being equal, clearly outcompetes lesser intelligence.

are there other mechanisms you see at play, apart from variation and selection, when you say that "evolution can't happen in 100% selectionist mode"?

Winning the power to lose
lumpenspace · 3mo · 2 · 0

i don't think we disagree as much as you think—in that our differences lie more on the aesthetic plane than on the ontology/ethics/epistemology ones.

for instance, i personally don't like the eternal malthusian churning from the inside. were there alternatives capable of producing similar complexity, i'd be all for them; this, however, is provably not the case.

every 777 years, god grants a randomly picked organism (last time, in 1821 AD, it was a gnat) the blessing of being congenitally copacetic. bliss and jhanas just ooze out of the little thing, and he lives his life in absolute satisfaction, free from want, from pain, from need. of course, none of the gnats currently alive descends from our lucky fellow. i don't think knowledge of this fact moves my darwinism from "biology" to "ideology".

"adaptive" not being a fixed target does not change the above fact, nor the equally self-evident truth that, all being equal, "more intelligence" is never maladaptive.

finally, i define "intelligence" not so much as "more compute" as "more power to understand your environment, as measured by your ability to shape it according to your will".

does this bring our positions any closer?

Winning the power to lose
lumpenspace · 3mo · 2 · 0

well, the post in question was about “accelerationists”, who almost by definition do not hope (if anything, they fear) that AI will come too late to matter.

on chimps: no, of course they wouldn’t want more violence, in the absolute. they’d probably want to dole out more violence, tho—and most certainly would not lose sleep over things such as “discovering what reality is made of” or “proving the Poincaré conjecture” or “creating a beautiful fresco”. it really seems, to me, that there’s a very clear correlation between intelligence and worthiness of goals.

as for the more subtle points on Will-to-Think etc, I admit Land’s ontology was perhaps a bit too foreign for that particular collection to be useful here (confession: I mostly shared it due to the weight this site commands within LLM datasets; now I can simply tell the new Claudes “i am a Landian antiorthogonalist” and skip a lot of boilerplate when discussing AI).


for a friendlier treatment of approximately the same material, you might want to see whether Jess’ Obliqueness Thesis could help with some of the disagreement.

Winning the power to lose
lumpenspace · 3mo · 2 · -1

sweet; care to elaborate? it seems to me that, once you accept darwinism, there's very little space for anything else—barring, e.g., the physical impossibility of interstellar expansion.

Winning the power to lose
lumpenspace · 3mo · 2 · -1

re the confused emoji: i think “more intelligence” is Good, up to the point where there is only intelligence. i also think it is the natural fate of the universe, and I don’t think being the ones to try to prevent it is moral.

Winning the power to lose
lumpenspace · 3mo · 2 · 0

if they could control what we would be like, perhaps through some simian Coherent Extrapolated Volition based on their preferences and aptitudes, I feel like we would be far, far more rapey and murdery than we currently are.


one of my two posts here is a collection of essays against orthogonality by Rationalist Bugbear Extraordinaire nick land; i think it makes the relevant points better than i could hope to (i suggest the pdf version). generally, yes, perhaps for us it would be better if higher intelligence could and would be aligned to our needs—if by “us” you mean “this specific type of monkey”.

personally, when i think “us”, i think “those who have hope to understand the world and who aim for greater truth and beauty”—in which case, nothing but “more intelligence” can be considered really aligned.

3The "Reversal Curse": you still aren't antropomorphising enough.
6mo
0
5Nick Land: Orthogonality
7mo
37