Sterrs · 11

Human relationships should be challenging. Refusing to be challenged by those around you is what creates the echo chambers we see online, where your own opinions are fed back to you, reassuring you of what you already believe. Those echo chambers were built by AI recommendation algorithms whose only goal was to maximise engagement.

Why would an AI boyfriend or girlfriend be any different? They would not help you develop as a person; they would exist only to serve your desires, not to push you to improve who you are, not to teach you new perspectives, not to give you opportunities to bring others joy.

Sterrs · 85

Women will find AI partners just as addictive, and just as preferable to real partners, as men do.

Sterrs · 45

I personally think you massively underestimate the dangers posed by such relationships. We are not talking about people living healthy, well-adjusted lives who simply choose not to have intimate relationships with other humans. We're talking about a severely addictive drug, perhaps on the level of the most physiologically addictive substances we know of today. Think social media addiction, but with the obsession and emotions we associate with a romantic crush, then multiply it by one hundred.

Sterrs · 10

Very interesting. I'll play around with the code next time I get the chance.

2.1)

Being able to solve crosswords requires knowing how long words are, though I have no idea how common crosswords were in the training data. Aligning text in files is sometimes desirable: PEP 8 asks Python files to keep lines to at most 79 characters, and some Linux system files (/etc/fstab, for example) store data in whitespace-aligned columns. ASCII art also depends on specific line lengths.

My guess for why the feature is linear: so that the sum of token vectors encodes the length of their concatenation, e.g. letting the model work out the lengths of sentences.
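
A toy sketch of that guess (everything here, including the direction `w` and the toy `embed` function, is made up for illustration, not taken from the post): if a unit direction reads token length linearly out of each embedding, then the same probe applied to a sum of embeddings reads out the length of the concatenation.

```python
import numpy as np

# Toy illustration of the linearity guess; all names and numbers are invented.
rng = np.random.default_rng(0)
d = 64                          # arbitrary embedding dimension
w = rng.normal(size=d)
w /= np.linalg.norm(w)          # hypothetical unit "length direction"

def embed(token: str) -> np.ndarray:
    """Toy embedding: noise orthogonal to w, plus len(token) along w."""
    noise = rng.normal(size=d)
    noise -= (noise @ w) * w    # strip any accidental length component
    return noise + len(token) * w

tokens = ["The", " cat", " sat"]
vectors = [embed(t) for t in tokens]

for t, v in zip(tokens, vectors):
    print(f"{t!r}: probe reads {v @ w:.2f}, true length {len(t)}")

# Linearity means the probe on the summed vectors gives the total length.
total = sum(vectors)
print(f"sum: probe reads {total @ w:.2f}, "
      f"concatenation length {len(''.join(tokens))}")
```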

I wonder if "number of syllables" is a feature, and whether this is consistent between languages?

2.2)

If the language model has finished outputting a word, it needs to be able to guarantee that a space comes next, to avoid writingtextlikethis. I would guess a token's embedding sits close to that of its copy with a leading space, so to control where spaces appear in the text, the model would want a separate direction encoding that information.
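
A quick way to poke at that intuition (a sketch assuming the Hugging Face transformers library and GPT-2's vocabulary, in which both "the" and " the" happen to be single tokens; this is my illustration, not code from the post):

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
emb = model.wte.weight          # token embedding matrix, (vocab_size, d_model)

def single_token_id(s: str) -> int:
    ids = tokenizer.encode(s)
    assert len(ids) == 1, f"{s!r} is not a single GPT-2 token"
    return ids[0]

no_space = emb[single_token_id("the")]
with_space = emb[single_token_id(" the")]

# High cosine similarity would support "copies with leading spaces are close";
# if the difference vector looked similar across many word pairs, that would
# be a candidate "leading space" direction.
print(torch.cosine_similarity(no_space, with_space, dim=0).item())
print((with_space - no_space).norm().item())
```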

Sterrs · 40

I'm not sure you can call someone else's work a "huge achievement" when you yourself are uncertain about whether their conclusions are correct.