Rana Dexsin

You see either something special, or nothing special.

Comments

While I appreciate the attempt to bring in additional viewpoints, the “Sign-in Required” step is currently an obstacle.

I claim that very few people actually understand what they are using and what effects it has on their minds.

How would you compare your generative-AI focus to the “toddlers being given iPads” transition, which seems to have already happened?

This SMBC from a few years ago including an “entropic libertarian” probably isn't pointing at what people call “e/acc”… right? My immediate impression is that it rhymes though. I'm not sure how to feel about that.

The first sentence here is very confusing and I think inverts a comparison—I think you mean “would make the world enough worse off”.

The first somewhat contrary thing that comes to mind here is whether visible spending that looks like a status grab or is class-dissonant would also impact your social capital in terms of being able to source (loaned or gifted) money from your networks in case of a crunch or shock. If your friends will feel “well I sure would've liked to have X, but I was the ‘responsible’ one and you weren't, so now I'm not going to put money in when you're down” and that's what you rely on as a safety net, then maybe you do need to pay attention to that kind of self-policing. If you're reliant on less personal sources of credit, insurance, etc. or if your financially relevant social groups are themselves receptive to your ideas on not caring as much about class policing, then the self-policing can be mainly self-sabotage like you say.

Facepalm at self. You're right, of course. I think I confused myself about the overall context after reading the end-note link there and went off at an angle.

Now to leave the comment up for history and in case it contains some useful parts still, while simultaneously thanking the site designers for letting me un-upvote myself. 😛

(Epistemic status: mostly observation through heavy fog of war, partly speculation)

From your previous comment:

The "educated savvy left-leaning online person" consensus (as far as I can gather) is something like: "AI art is bad, the real danger is capitalism, and the extinction danger is some kind of fake regulatory-capture hype techbro thing which (if we even bother to look at the LW/EA spaces at all) is adjacent to racists and cryptobros".

So clearly you're aware of / agree with this being a substantial chunk of what's happening in the “mass social media” space, in which case…

Given this, plus anchoring bias, you should expect and be very paranoid about the "first thing people hear = sets the conversation" thing.

… why is this not just “お前はもう死んでいる” (“you are already dead”—that is, you are already cut off from this strategy due to things that happened before you could react) right out of the gate, at least for that (vocal, seemingly influential) subpopulation?

What I observe in many of my less-technical circles (which roughly match the above description) is that as soon as the first word exits your virtual mouth that implies that there's any substance to any underlying technology itself, good or bad or worth giving any thought to at all (and that's what gets you on the metatextual level, the frame-clinging plus some other stuff I want to gesture at but am not sure whether that's safe to do right now), beyond “mass stealing to create a class divide”, you instantly lose. At best everything you say gets interpreted as “so the flood of theft and soulless shit is going to get even worse” (and they do seem to be effectively running on a souls-based model of anticipation even if their overt dialogue isn't theistic, which is part of what creates a big inferential divide to start with). But you don't seem to be suggesting leaning into that spin, so I can't square what you're suggesting with what seem to be shared observations. Also, the less loud and angry people are still strongly focused on “AI being given responsibility it's not ready for”, so as soon as you hint at exceeding human intelligence, you lose (and you don't then get the chance to say “no no, I mean in the future”, you lose before any further words are processed).

Now, I do separately observe a subset of more normie-feeling/working-class people who don't loudly profess the above lines and are willing to e.g. openly use some generative-model art here and there in a way that suggests they don't have the same loud emotions about the current AI-technology explosion. I'm not as sure what main challenges we would run into with that crowd, and maybe that's whom you mean to target. I still think getting taken seriously would be tricky, but they might laugh at you more mirthfully instead of more derisively, and low-key repetition might have an effect. I do kind of worry that even if you start succeeding there, then the x-risk argument can get conflated with the easier-to-spread “art theft”, “laundering bias”, etc. models (either accidentally, or deliberately by adversaries) and then this second crowd maybe gets partly converted to that, partly starts rejecting you for looking too similar to that, and partly gets driven underground by other people protesting their benefiting from the current-day mundane-utility aspect.

I also observe a subset of business-oriented people who want the mundane utility a lot but often especially want to be on the hype train for capital-access or marketing reasons, or at least want to keep their friends and business associates who want that. I think they're kind of constrained in what they can openly say or do and might be receptive to strategic thinking about x-risk but are ultimately dead ends for acting on it—though maybe that last part can be changed with strategic shadow consensus building, which is less like mass communication and where you might have more leeway and initial trust to work with. Obviously, if someone is already doing that, we don't necessarily see it posted on LW. There are probably some useful inferences to be drawn from events like the OpenAI board shakeup here, but I don't know what they are right now.

FWIW, I have an underlying intuition here that's something like “if you're going to go Dark Arts, then go big or go home”, but I don't really know how to operationalize that in detail and am generally confused and sad. In general, I think people who have things like “logical connectives are relevant to the content of the text” threaded through enough of their mindset tend to fall into a trap analogous to the “Average Familiarity” xkcd or to Hofstadter's Law when they try truly-mass communication unless they're willing to wrench things around in what are often very painful ways to them, and (per the analogies) that this happens even when they're specifically trying to correct for it.

You're right; I'd forgotten about the indicator. That makes sense and that is interesting then, huh.

and I'm faintly surprised it knows so much about it

GPT-4 via the API, or via ChatGPT Plus? Didn't they recently introduce browsing to the latter so that it can fetch Web sources about otherwise unknown topics?

The porno latent space has been explored so thoroughly by human creators that adding AI to the mix doesn't change much.

Something about this feels off to me. One of the salient possibilities in terms of technology affecting romantic relationships, I think, is hyperspecificity in preferences, which seems like it has a substantial social component to how it evolves. In the case of porn with (broadly) human artists, translating a hyperspecific impulse into hyperspecific porn in the r34 space still incurs substantial delay and cost: either having the skills and taking on the workload mentally (if the impulse-haver is also the artist), or exposing something unusual plus mundane coordination costs plus often commission costs (if the impulse-haver is asking a different artist).

With interactively usable, low-latency generative AI, an impulse-haver could not only do a single translation step like that much more easily, but iterate on a preference and essentially drill themselves a tunnel out of compatibility range. No? That seems like the kind of thing that makes an order-of-magnitude difference. Or do natural conformity urges or starting distributions stop that from being a big deal? Or what?

Having written that, I now wonder what circumstances would cause people to drill tunnels toward each other using the same underlying technology, assuming the above model were true…