Comments

Phib · 3mo · 30

“they serendipitously chose guinea pigs, the one animal besides human beings and monkeys that requires vitamin C in its diet.”

This recent post, I think, describes this same phenomenon, though not from the same level of ‘necessity’ as, say, cures to big problems. Kinda funny too: https://www.lesswrong.com/posts/oA23zoEjPnzqfHiCt/there-is-way-too-much-serendipity.

Phib · 3mo · 30

So here was my initial quick test. I haven't spent much time on this either, but I have seen the same images of faces on subreddits etc. and been very impressed. I think asking for emotions was a harder challenge than just making a believable face/hand, oops.
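(For reference, a minimal sketch of how a quick test like this could be run programmatically, assuming the OpenAI Python SDK v1+ with an API key set in the environment; the prompts below are illustrative stand-ins, not necessarily the ones I actually used:)

```python
# Minimal sketch: generate comparison images of a face (with a specific emotion)
# and a hand via DALL-E 3. Assumes the OpenAI Python SDK (v1+) and an API key in
# the OPENAI_API_KEY environment variable. Prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

prompts = [
    "A photorealistic portrait of a person looking quietly disappointed",
    "A photorealistic close-up of a human hand pointing with the index finger",
]

for prompt in prompts:
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        n=1,               # DALL-E 3 generates one image per request
        size="1024x1024",
    )
    # Each response contains a URL for the generated image; inspect by eye.
    print(prompt, "->", response.data[0].url)
```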



I really appreciate your descriptions of the distinctive features of faces and of pareidolia, and I do agree that faces are more often better represented than hands; specifically, hands tend to have the more significant/notable issues (misshapen/missing/overlapped fingers). With faces, by contrast, there's nothing as glaring as a missing eye, but it can be hard to portray something more specific like an emotion (though the same can be said for, e.g., getting DALL-E not to flip me off when I ask for an index finger, haha).

It's also rather difficult to label or prompt for a specific hand orientation you'd like, versus, I suppose, an emotion (there are a lot more descriptive words for the orientation of a face than for that of a hand).

So yeah, faces do work, and regardless of my thoughts on the uncanny valley of some faces+emotions, I actually do think hands (the OP's subject) are mostly a geometric-complexity thing; maybe we see our own hands so much that we are more sensitive to error? But they don't carry the same meaning for me that faces do (where minute differences convey slightly different emotions, and we perhaps benefit from being able to tell them apart accurately).

 

Phib · 3mo · 3-1

I think if this were true, it would also hold that faces are done rather poorly right now, which... maybe? Doing some quick tests: yeah, both faces and hands, at least on DALL-E 3, seem to be at similar levels of off to me.

Phib · 4mo · 60

Wow, I’m impressed it caught itself; I was just trying to play with that 3 x 3 problem too. Thanks!

Phib · 4mo · 60

I don’t know [if I understand] the full rules, so I don’t know if this satisfies them, but here:

https://chat.openai.com/share/0089e226-fe86-4442-ba07-96c19ac90bd2

Phib · 7mo · 10

Kinda commenting on stuff like “Please don’t throw your mind away” or any advice not to fully defer judgment to others (not intending to just straw-man these! They’re nuanced and valuable; I just mean to take the next step from them).

In my circumstance, and I imagine for many others who are young and trying to learn and to get a job, I think you have to defer to your seniors/superiors/program to a great extent, or at least to the extent that you accept or act on things (performing research, supporting ops) that you’re quite uncertain about.

Idk, there’s a lot more nuance to this conversation, as with any, of course. Maybe nobody is certain of anything and they’re just staking a claim so that they can be proven right or wrong and experiment in this way, producing value through their overconfidence. But I do get a sense that young/new people coming into a field that is even slightly established need, to some extent, to defer to others for their own sake.

Phib · 8mo · 30

I don’t mean to present myself as offering the “best arguments that could be answered here,” or as at all representative of the alignment community; I just wanted to engage. I appreciate your thoughts!

Well, one argument for potential doom doesn’t require an adversarial AI, but rather people using increasingly powerful tools in dumb and harmful ways (in the same class of consideration for me as nuclear weapons; my dumb imagined version of this is a government using AI to continually scale up surveillance until we eventually end up in a position like in 1984).

Another point is that a sufficiently intelligent and agentic AI would not need humans; it would probably eventually be suboptimal for it to rely on humans for anything. And it kinda feels to me like this is what we are heavily incentivized to design: the next best and most capable system. In terms of efficiency, we want to get rid of the human in the loop; that person’s expensive!

Phib · 10mo · 10

Idk about the public accessibility of some of these things, like with Nonlinear's recent round, but seeing a lot of applications there, organized by category, reminded me of this post a little bit.

Edit: in terms of seeing what people are trying to do in the space. Though I imagine this does not capture the biggest players that do have funding.

Phib · 10mo · 10

Btw, small note: I think accumulations of grant applications are probably pretty good sources of info.

Phib · 11mo · 30

BTW, this video is quite fun. Seems relevant re: the Paperclip Maximizer and nanobots.
