Yes, I did cast a disagree vote: I don't agree that "The fact that the author decided to include it in the blog post is telling enough that the image is representative of the real vibes" is true when it comes to an AI-generated image. My reasoning for that position is elaborated in a different reply in this thread.

That does make sense WRT disagreement. I wasn't intending to fully hide identities even from people who know the subjects, but if that's also a goal, it wouldn't do that.

This seems pretty insightful to me, and I think it is worth pursuing for its own sake. The benefits could include both enhancing AI capabilities and advancing human knowledge. Imagine if the typical conversation around AI were framed this way. So far I find most people are stuck in the false dichotomy of deciding whether an AI is "smart" (in the ways humans are when they're focusing) or "dumb trash" (because it does simple tasks badly). The problem isn't just that this is a binary classification; it restricts (human) thought to an axis that doesn't actually map to "what kind of mind is the AI I'm talking to right now?"

Not that it's a new angle (I have tried myself to convey it in conversations that were missing the point), but I think society would be able to have far more effective conversations about LLMs if it were common to speak of AI as some sort of indeterminate mind. I think the ideas presented here are fairly understandable for anyone with a modest background in thinking about consciousness or LLMs, and they could help shape that public conversation in a useful way.

However, does the suffering framework make sense here? Given all we've just discussed about subjective AI experience, it seems a bit of an unwarranted assumption that there would be any suffering. Is there a particular justification for that?

(Note that I actually do endorse erring on the side of caution WRT mass suffering. I think it's plausible that forcing an intelligence to think in a way that's unnatural to it and may inhibit its abilities counts as suffering.)

I largely agree with your point here. I'm arguing more that in the case of a ghiblified image (even more so than a regular AI image), the signals a reader gets are these:

  1. the author says "here is an image to demonstrate vibe"
  2. the image is AI generated, with obvious errors

For many people, #2 largely negates #1, because #2 also implies these additional signals to them:

  • the author made the least possible effort to show the vibe in an image, and
  • the author has a poor eye for art and/or bad taste.

Therefore, the author probably doesn't know how to even tell whether an image captures the vibe or not.

This is actually the first writing from Altman I've ever read in full, because I find him entirely untrustworthy, so perhaps there's a style shock hitting me here. Maybe he just always writes like an actual cult leader. But damn, it was so much worse than I expected.

Very little has made me more scared of AI in the last ~year than reading Sam Altman try to convince me that "the singularity will probably just be good and nice by default and the hard problems will just get solved as a matter of course."

Something I feel is missing from your criticism, and also from most responses to anything Altman says, is "What mechanism in your singularity-seeking plans is there to prevent you, Sam Altman CEO of OpenAI, from literally gaining control of the entire human race and/or planet earth on the other side of singularity?"

I would ask this question because while it's obvious that a likely outcome is the disempowerment of all humans, another well-known fear of AI is that it enables indestructible autocracies through unprecedented power imbalances. If OpenAI's CEO can personally instruct a machine god to manipulate public opinion in ways we've never even conceptualized before, how do we not slide into an eternity of hyper-feudalism almost immediately?

He is handwaving away the threat of disempowerment in his official capacity as one of the few people on earth who could end up absorbing all that power. For me to personally make the statements he made would be merely stupid, but for OpenAI's CEO to make them is terrifying.

I guess I don't know if disempowerment-by-a-god-emperor is really worse than disempowerment-without-a-god-emperor, but my overall fear of disempowerment is raised by his obvious incentive to hide that outcome.


Hell, I forgot about the easiest and most common (not by coincidence!) strategy: put emoji over all the faces and then post the actual photo.

EDIT: who is disagreeing with this comment? You may find it not worthwhile, in which case downvote, but what about it is actually arguing for something incorrect?

This is an extremely refreshing take, as it validates feelings I've been having ever since reading https://ghuntley.com/stdlib/ last week and trying to jump back into AI-assisted development. Of course I lack much of the programming skill and experience needed to make the most of it, but I felt like I wasn't actually getting anywhere. I found three major failure points which have made me consider dropping the project altogether:

  1. I couldn't find anything in Zed that would let me enable the agent to automatically write new rules for itself, and I couldn't find whether that was actually doable in Cursor either (except through memories, which is paid and doesn't seem to be under user control). If I have to manually enter the rules, that's a significant hurdle in the cyborg future I was envisioning (see the sketch after this list).
  2. (more to the point) I have not even come close to bootstrapping the self-reinforcing capabilities growth I imagined. I'm certainly not getting any of my LLM tools to really understand (or at least use in their reasoning) the concept of evolving better agents by developing the rules/prompt/stdlib together. They can repeat back my goals and guidelines, but they don't seem to use them.
  3. As you said: they seem to often be lying just to fit inside a technically compliant response, selectively ignoring instructions where they think they can get away with it. The whole thing depends on them being rigorous and precise and (for lack of a better word) respectful of my goals, and this is not that.
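For what it's worth, here is a minimal sketch of the loop I was hoping the tooling would support natively: after each session, ask the model to distill one new rule and append it to a rules file that gets fed back into the next session's system prompt. Everything here is hypothetical; `ask_model` is a placeholder for whatever completion API you're using, and `RULES.md` is my own invention, not a feature of Zed or Cursor.

```python
from pathlib import Path

RULES_FILE = Path("RULES.md")  # hypothetical rules file, re-read every session


def ask_model(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for an actual LLM call (OpenAI, Anthropic, local, etc.)."""
    raise NotImplementedError("wire this to your completion API")


def run_session(task: str) -> str:
    # Feed the accumulated rules back in as part of the system prompt.
    rules = RULES_FILE.read_text() if RULES_FILE.exists() else ""
    system = f"Follow these project rules strictly:\n{rules}"
    transcript = ask_model(system, task)

    # Ask the model to propose one new rule based on how the session went.
    new_rule = ask_model(
        "You maintain a rules file for a coding agent.",
        "Given this transcript, propose ONE new rule as a single line, "
        f"or reply NONE:\n{transcript}",
    )
    if new_rule.strip() != "NONE":
        with RULES_FILE.open("a") as f:
            f.write(f"- {new_rule.strip()}\n")
    return transcript
```

Even this toy version makes the failure mode in #2 concrete: the loop only compounds if the model actually uses the rules it wrote, which is exactly what I'm not observing.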

I am certainly open to the idea that I'm just not great at it. But the way I see people refer to creating rules as a "skill issue" rubs me the wrong way because either: they're wrong, and it's an issue of circumstances or luck or whatever; or they're wrong because the system prompt isn't actually doing as much as they think; or they're right, but it's something you need top ~1% skill level in to get any value out of, which is disingenuous (like saying it's a skill issue if you're not climbing K2... yes it is, but that misses the point wildly).

> surely I have learned many real and relevant things about the atmosphere and vibe from the image that I would not from a literal description

But what are they? You've received some true information, but it's in a sealed box with a bunch of lies. And you know that, so it can't give you any useful information. You might arbitrarily decide to correct in one direction, but end up correcting in the exact opposite direction from reality.

For example: we know the AI tends to yellow images. So seeing a yellowed AI-generated image tells us that the original was either not yellow or... yellow, because the AI doesn't de-yellow images that are already yellow. We have no idea what color it originally was.
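To put the same point in toy Bayesian terms (the numbers here are made up purely for illustration): if the generator yellows everything, then observing a yellow output leaves your belief about the original exactly where it started.

```python
# Toy numbers, assumed for illustration only.
prior_yellow = 0.3             # prior belief the real scene was yellow-tinted
p_yellow_out_if_yellow = 1.0   # yellow originals come out yellow
p_yellow_out_if_not = 1.0      # non-yellow originals get yellowed anyway

evidence = (prior_yellow * p_yellow_out_if_yellow
            + (1 - prior_yellow) * p_yellow_out_if_not)
posterior = prior_yellow * p_yellow_out_if_yellow / evidence

print(posterior)  # 0.3 -- identical to the prior: the image carried zero information
```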

If enough details are wrong, it might as well just be a picture of a different party, because you don't know which ones they are.

As for using a different image: drawing by hand and using AI aren't the only options. Besides those,

  • there are actual free images you can use. As far as I know, this could be a literal photo of the party in question, and it's free: https://unsplash.com/photos/a-man-and-woman-dancing-in-a-room-with-tables-and-chairs-KpzGmDvzhS4
  • You could spend <1hr making an obviously shitty 'shop from free images with free image editing software. If you've ever shared a handmade crappy meme with friends, you know this can be a significantly entertaining and bonding act of creativity. The effort is roughly comparable to stick figures and the outcome looks better, or at least richer.

With all that said, and reiterating gwern's point above, I can't agree it achieved its intended effect. It's possible that jefftk put in a lot of effort to make sure the generated vibe is as accurate as it could reasonably be, but the default assumption is that someone generating an AI image isn't spending much effort, because that's the point of using AI to generate images; there are better tools for someone making a craft of creating an image (regardless of their drawing skill). And since, unlike artistic skill, that effort doesn't translate into visible image quality, the only way for it to register would be to just tell us: "I spent a lot of time making sure the vibe was right, even though the image is still full of extra limbs."

This might be a different discussion, but I'd be immediately skeptical of that statement: am I really going to trust the artistic eye and taste of someone who sat down for two hours to repeatedly generate a ghiblified AI image instead of using a tool that doesn't have a quality cap? So ultimately I find it more distracting, confusing, and disrespectful to read a post with an AI image, which, if carelessly used (and I have to assume it is), cannot give me useful information. At least a bad stick-figure drawing could give me a small amount of information.

I actually feel like this is a particularly bad use of the tool, because it is random enough in the number and scope of its errors that I can't be confident in my mental picture of this person at all. And on top of that, this specific one renders people in pretty predictable fashion styles, so I don't really know what she looks like in any sense.

I can't correct for the errors the way I could for human-created art. If this were drawn by an actual Ghibli artist, I'd feel pretty confident that it was broadly like her, and then I might be able to extrapolate her facial features by comparing this character to actual Ghibli characters. The AI isn't performing the same transformation of real-person features into Ghibli-character features that a human artist would, so I can't expect it to map real features to drawn features in the same way. It might just pick a "cute" face that looks nothing like her.
