dr_s

Possibly perfectionism? I experience this form of creative paralysis a lot - as soon as I get deep enough into the weeds of one creative form, I start seeing the endless ramifications of the tiniest decision and basically can't move a step without trying to achieve endlessly deep optimisation over the whole. Meanwhile, people who can just not give a fuck and let the creative juices flow get shit done.

dr_s

I think that's a bit too extreme. Are all machines bad? No, obviously better to have mechanised agriculture than be all peasants. But he is grasping something here which we are now dealing with more directly. It's the classic Moloch trap of "if you have enough power to optimise hard enough then all slack is destroyed and eventually life itself". If you thought that was an inevitable end of all technological development (and we haven't proven it isn't yet), you may end up thinking being peasants is better too.

dr_s

I think some believe it's downright impossible, and others that we'll just never create it because we have no use for something so smart it overrides our orders and wishes - that at most we'll make a sort of magical genie, still bound by us expressing our wishes.

dr_s

I feel like this is a bit incorrect. There are imaginable things that are smarter than humans at some tasks and as smart as average humans at others - thus overall superhuman - yet controllable, and therefore possible to integrate into an economy without immediately exploding into a utopian (or dystopian) singularity. The question is whether we are liable to build such things before we build the exploding-singularity kind, or whether the latter is in some sense easier to build and thus stumbled upon first. Most AI optimists think these limited, controllable intelligences are the default natural outcome of our current trajectory and thus expect mere boosts in productivity.

dr_s

I don't know about the Bible itself, but there's a long and storied tradition of self-mortification and denial of corporeity in general in medieval Christian doctrine and mysticism. If we want to be cute we could call that fandom, but after a couple of thousand years it ends up being as important as the canon text itself.

dr_s

I think the fundamental problem is that yes, there are people with that innate tendency, but that is not in the slightest bit helped by creating huge incentives for a whole industry to put its massive resources into finding ways to make that tendency become as bad as possible. Imagine if we had entire companies that somehow profited from depressed people committing suicide and had dedicated teams of behavioural scientists and quants crunching data and designing new strategies to make anyone who already has the tendency maximally suicidal. I doubt we would consider that fine, right? Sports betting (really, most addiction-based industries) is like that. The problem isn't just providing the activity, as some kind of relief valve. The problem is putting behind the activity a board of investors that wants to maximise profits and turns it into a full blown Torment Nexus. Capitalism is a terrible way of providing a service when the service is "self-inflicted misery".

dr_s

I definitely think this is a general cultural zeitgeist thing. The progressive thing used to be the positivist "science triumphs over all, humanity rises over petty differences, leaves childish things like religions, nations and races behind and achieves its full potential". But then people have grown sceptical of all grand narratives, seeing them as inherently poisoned because if you worry about grand things you are more inclined to disregard the small ones. Politics built around reclamation of personal identity, community, tradition as forms of resistance against the rising tide of globalising capitalism have taken over the left. Suddenly being an atheist was not cool any more, it was arrogant and possibly somewhat racist. And wanting to colonise space reeked of white man's burden even if there probably aren't many indigenous people to displace up there. So everything moved inwards, and the writers followed that trend.

dr_s

This is exactly the kind of thing Egan is reacting to, though—starry-eyed sci-fi enthusiasts assuming LLMs are digital people because they talk, rather than thinking soberly about the technology qua technology.

I feel like this borders on the strawman. When discussing this argument, my general position isn't "LLMs are people!". It's "OK, let's say LLMs aren't people, which is also my gut feeling. Given that they still converse as or more intelligently than some human beings whom we totally acknowledge as people, where the fuck does that leave us as to our ability to discern people-ness objectively? Because I sure as hell don't know, and I envy your confidence, which must surely be grounded in a solid theory of self-awareness I can only dream of".

And then people respond with some mangled pseudoscientific wording for "God does not give machines souls".

I feel like my position is quite common (and is, for example, Eliezer's too). The problem isn't whether LLMs are people. It's that if we can simply handwave away LLMs as obviously and self evidently not being people then we can probably keep doing that right up to when the Blade Runner replicants are crying about it being time to die, which is obviously just a simulation of emotion, don't be daft. We have no criterion or barrier other than our own hubris, and that is famously not terribly reliable.

dr_s

Since ChatGPT came out, I feel like Egan has really lost the plot on this one, going back to his discussions on Twitter. It felt like a combination of rejection of the "bitter lesson" (understandable: I too find it inelegant and downright offensive to my aesthetic sense that brute-force deep learning seems to work better than elegantly designed GOFAI, but whatever it is, it undeniably works) and political cognitive dissonance: if people who wrongthink support AI, and evil billionaires throw their weight behind AI, then AI is bad, and therefore it must be a worthless scam, because it's important to believe it is. (This can to some extent work if you persuade the investors of it; but in the end it's mostly a hopeless effort when all you have is angry philosophical rambling and all they have is a freaking magical computer program that speaks to you. I know which one is going to impress people more.)

So basically, yeah, I understand the reasons to be annoyed, disgusted, scared and offended by reality. But it is reality, and I think Egan is in denial of it, which seems to have resulted in a novel.

dr_s

That sounds more like my intuition, though obviously there still have to be differences given that we keep using self-attention (quadratic in N) instead of MLPs (linear in N).

In the limit of infinite scaling, the fact that MLPs are universal function approximators is a guarantee that you can do anything with them. But obviously we still would rather have something that can actually work with less-than-infinite amounts of compute.
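The quadratic-vs-linear contrast above can be made concrete with a toy flop count. This is a minimal sketch under my own assumptions (only the dominant matmul terms; `d`, `hidden`, and the function names are mine, not from the comment): self-attention's QKᵀ and scores·V products each cost roughly N²·d, while an MLP applied token-wise costs roughly N·d·hidden, so the cost ratio grows linearly with sequence length.

```python
# Toy cost model: dominant multiply-add counts only, ignoring softmax,
# normalisation, projections, etc. Illustrative, not a real profiler.

def self_attention_flops(N: int, d: int) -> int:
    # Q @ K^T builds an N x N score matrix (~N^2 * d multiply-adds),
    # then scores @ V costs another ~N^2 * d. Quadratic in N.
    return 2 * N * N * d

def mlp_flops(N: int, d: int, hidden: int) -> int:
    # Each of the N tokens passes independently through the same
    # two-layer MLP (~2 * d * hidden multiply-adds per token). Linear in N.
    return N * 2 * d * hidden

if __name__ == "__main__":
    d, hidden = 512, 2048
    for N in (128, 1024, 8192):
        ratio = self_attention_flops(N, d) / mlp_flops(N, d, hidden)
        print(f"N={N:5d}  attention/MLP cost ratio ~ {ratio:.2f}")
```

With these toy numbers the ratio works out to N/hidden, so attention is cheap at short context and dominates at long context, which is exactly why the quadratic term matters despite MLPs being universal approximators in the infinite limit.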
