Razied

But if you are right that you only respond to a limited set of story types, do you therefore aspire to opening yourself to different ones in future, or is your conclusion that you just want to stick to films with 'man becomes strong' character arcs?

Not especially, for the same reason that I don't plan on starting to eat 90% dark chocolate to learn to like it, even if other people like it (and I can even appreciate that it has a few health benefits). I'm certainly not saying that only movies that appeal to me should be made; I'm happy that Barbie exists and that other people like it, but I'll keep reading my male-protagonist progression fantasies on RoyalRoad.

Greta Gerwig obviously thinks so, as when she says: "I think equally men have held themselves to just outrageous standards that no one can meet. And they have their own set of contradictions where they’re walking a tightrope. I think that’s something that’s universal."

I have a profound sense of disgust and recoil when someone tells me to lower my standards for myself. Whenever I hear something like "it's ok, you don't need to improve, just be yourself, you're enough", I react strongly, because That Way Lies Weakness. I don't have problems valuing myself, and I'm very good at appreciating my achievements, so that self-acceptance message is generally not aimed at me; taking it even more to heart than I already do would be an overcorrection.

I watched Barbie and absolutely hated it. It did provide some value to me, though, after I spent some time thinking about why precisely I hated it. Barbie really showed me the difference between the archetypal story that appeals to males and the female equivalent, and how merely hitting that archetype can be enough to make a movie enjoyable for either men or women.

The plot of the basic male-appealing story is "Man is weak. Man works hard with a clear goal. Man becomes strong". I think men feel this basic archetypal story much more strongly than women, so that even an otherwise horrible story can be entertaining if it hits that particular chord well enough (evidence: most isekai stories): if the man is weak enough at the beginning, or works especially hard. I'm not exactly sure what the equivalent story is for women, but it's something like "Woman thinks she's not good enough, but she needs to realise that she is already perfect". And the Barbie movie really hits that note, which is why I think the women in my life seemed to enjoy it. But that archetype just doesn't resonate with me at all.

The apparent end-point for the Kens in the movie is that they "find themselves". This was (to me) a clear misunderstanding by the female authors of what the masculine instinct is like. Men don't "find themselves"; they decide who they want to be and work towards climbing out of their pitiful initial states. (There was also the weird Ken obsession with horses, which is mostly a female thing.)

I'm fairly sure there are architectures where each layer is a linear function of the concatenated activations of all previous layers, though I can't seem to find the reference right now. If you add possible sparsity to that, then I think you get a fully general DAG.
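DenseNet (Huang et al., 2017) does something very close to this with convolutional blocks, feeding each block the concatenation of all previous feature maps. Here's a minimal fully-connected sketch of that wiring; the class and names are my own illustration, not from any particular paper:

```python
import torch
import torch.nn as nn

class DenseLinearNet(nn.Module):
    """Each layer is a linear function of the concatenation of the
    input and the outputs of all previous layers (DenseNet-style
    wiring, minus the convolutions and nonlinearities)."""

    def __init__(self, in_dim: int, hidden_dim: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList()
        width = in_dim
        for _ in range(num_layers):
            self.layers.append(nn.Linear(width, hidden_dim))
            width += hidden_dim  # later layers also see this layer's output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        activations = [x]
        for layer in self.layers:
            activations.append(layer(torch.cat(activations, dim=-1)))
        return activations[-1]

net = DenseLinearNet(in_dim=8, hidden_dim=16, num_layers=4)
out = net(torch.randn(2, 8))  # shape: (2, 16)
```

Zeroing out entries of each weight matrix (the sparsity part) then lets you realise an arbitrary DAG over units, since every unit can read from any earlier unit and ignore the rest.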

Their paper on the sample preparation (here) has a trademark sign next to the name "LK-99", which suggests they've trademarked it, and that in turn strongly suggests the authors actually believe in their own results.

There are a whole bunch of ways that trying to optimise for unpredictability is not a good idea:

  1. Most often, technical discussions are not just exposition dumps; they're part of the creative process itself. Me telling you an idea is an essential part of my coming up with the idea. I essentially don't know where I'm going before I get there, so it's impossible for me to optimise for unpredictability on your end.
  2. This ignores a whoooole bunch of status effects and other goals of human conversation. The point of conversation is not solely to transmit information; in real life, information transfer is a minuscule part of most conversations. Try telling your girlfriend to "speak unpredictably" when she gets home and wants to vent about her boss.
  3. People often don't say what they mean. Translating a mental idea into words on the fly often produces sequences of words that are very bad at communicating the idea. The only solution is redundancy: repeat the idea multiple times in different ways until you hit one that your interlocutor understands.

Humans are not Vulcans, and we shouldn't try to optimise human communication the way we'd optimise a network protocol. 

I think you might want to look at the literature on "sparse neural networks", which is the right search term for what you mean here.

I'm really confused about how anybody thinks they can "license" these models. They're obviously not works of authorship.

I'm confused why you're confused: if I write a computer program that generates an artifact that is useful to other people, the artifact should obviously be considered part of the program itself, and therefore subject to licensing just like the generating program. If I write a program to procedurally generate interesting Minecraft maps, should I not be able to license the maps, just because there's one extra step of authorship between me and them?

The word "curiosity" has a fairly well-defined meaning in the Reinforcement Learning literature (see for instance this paper). There are vast numbers of papers that try to come up with ways to give an agent intrinsic rewards that map onto the human understanding of "curiosity", and almost all of them are some form of "go towards states you haven't seen before". The predictable consequence of prioritising states you haven't seen before is that you will want to change the state of the universe very very quickly.

Not too sure about the downvotes either, but I'm curious how the last sentence misses the point? Are you aware of a formal definition of "interesting" or "curiosity" that isn't based on novelty-seeking? 

According to reports, xAI will seek to create a "maximally curious" AI, and this also seems to be its main new idea for how to solve safety, with Musk explaining: "If it tried to understand the true nature of the universe, that's actually the best thing that I can come up with from an AI safety standpoint," ... "I think it is going to be pro-humanity from the standpoint that humanity is just much more interesting than not-humanity."

Is Musk just way less intelligent than I thought? He still seems to have no clue at all about the actual safety problem. Anyone thinking clearly should figure out that this is a horrible idea within at most five minutes of thought.

Obviously pure curiosity is a horrible objective to give to a superAI. "Curiosity" as currently defined in the RL literature is really something more like "novelty-seeking", and in the limit this will cause the AI to keep rearranging the universe into configurations it hasn't seen before, as fast as it possibly can... 
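The other standard formalisation rewards the prediction error of a learned forward model (in the spirit of Pathak et al.'s Intrinsic Curiosity Module, heavily simplified; the class and dimensions here are my own illustration). It has the same limiting behaviour: once a transition becomes predictable it stops paying, so the agent must keep manufacturing transitions it hasn't learned yet:

```python
import torch
import torch.nn as nn

class PredictionErrorCuriosity(nn.Module):
    """Intrinsic reward = error of a learned forward model f(s, a) ~ s'.
    Transitions the model already predicts well earn nothing, so the
    agent is driven towards parts of the world it hasn't modelled."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.forward_model = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )
        self.opt = torch.optim.Adam(self.forward_model.parameters(), lr=1e-3)

    def intrinsic_reward(self, s, a, s_next) -> float:
        pred = self.forward_model(torch.cat([s, a], dim=-1))
        loss = ((pred - s_next) ** 2).mean()
        # Train the model online; the residual error is the reward.
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return loss.item()
```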
