grist

Comments
Accelerando as a "Slow, Reasonably Nice Takeoff" Story
grist · 13d · 10

It was amazing reading it back then; I still go back to the world to soak in the prose from time to time. I know your take on GenAI (you made it abundantly clear when you answered a question of mine on Reddit), but what matters to me is being able to thank someone who gave me a world to explore. I'm sure this comment does nothing for the discourse on LessWrong, but here it is...

The dragon I chase is finding the high of reading “Hardfought” for the first time. Accelerando was the same feeling on a much larger scale. Thank you for sharing your worlds. 

Shortform
grist · 1y · 20

This falls perfectly into a thought/feeling "shape" in my mind. I know simple thanks are useless, but thank you.

I will now absorb your words and forget you wrote them.

UFO Betting: Put Up or Shut Up
grist · 2y · 20

Your post led me down an interesting path. Thank you. I would love to know your thoughts on the congressional hearing.

SolidGoldMagikarp (plus, prompt generation)
grist · 2y · 10

I would like to ask what will probably seem like a surface-level question from a layperson.

It is, because I am one, but I appreciate reading as much as I can on LW.

The end-of-text prompt causes the model to "hallucinate"? If the prompt is the first thing in the context window, how does the model select the first token, or the "subject" of the response?

The reason I ask is that the responses have ranged from a synopsis of the Dark series, to an "answer" about fish tongues, to "here's a simple code that calculates the average of a list of numbers" (along with the code).

I've searched online and have not found an answer. Is this because endoftext is well known, not a "glitch", and just how GPT works? I apologize for asking here, but if someone can point me to a post with the answer ("endoftext causes the model to…"), it would be greatly appreciated.

Note: I found the passage below, but how does the model select the "uncorrelated text"? How does it "choose" the first token that begins the uncorrelated text?

"You will see that it starts to answer like "The <|endoftext|> " and after that it simply answers with an uncorrelated text. That is because it learned to not attend to tokens that are before the [EOS] token."
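To make the question concrete, here is a minimal sketch of what I mean by "prompting with only the end-of-text token". This uses the public GPT-2 model through the Hugging Face transformers library, which is an assumption on my part, not the exact model the post studies:

```python
# Hypothetical sketch (public GPT-2 via Hugging Face transformers, not the
# model from the post). The prompt is only the <|endoftext|> token, so the
# model conditions on nothing else: the first sampled token is drawn from its
# learned distribution over "how documents start", which is why the topic of
# the output differs from run to run.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# "<|endoftext|>" encodes to a single special token (id 50256).
input_ids = tokenizer("<|endoftext|>", return_tensors="pt").input_ids

torch.manual_seed(0)  # change the seed and the "subject" changes too
with torch.no_grad():
    output = model.generate(
        input_ids,
        max_new_tokens=40,
        do_sample=True,              # sample instead of greedy decoding
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Each run with a different seed gives an unrelated "document opening", which matches the range of responses I described above.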
