Anecdote: The Seattle meetup has a game night one week and a rationality meetup the next, and enjoys this pattern.
Is this a thing that still happens? I'm in Seattle but I'm only aware of the reading group.
Multimodal LLMs convert patches of image inputs into embeddings, similar to how they handle text input. The pieces that do that conversion can be pre-trained image classifiers or can be trained as part of the system. https://sebastianraschka.com/blog/2024/understanding-multimodal-llms.html
So, no, LLMs don't have an internal OCR system that converts images to text, but yes, they do have an internal system that processes images into embeddings (similar to how they have a system that processes tokens into embeddings). The difference is that an image patch might be processed into an embedding that doesn't match the embedding for any text token.
Whether that piece is trained as part of the system or dropped in is a design choice.
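The patch-to-embedding step above can be sketched in a few lines. This is a minimal illustration, not any particular model's implementation: it splits an image into non-overlapping patches and applies a linear projection, where a real system would use a learned (or pre-trained) projection rather than the random matrix used here.

```python
import numpy as np

def patchify(image, patch_size):
    """Split an image of shape (H, W, C) into flattened non-overlapping patches."""
    h, w, c = image.shape
    patches = []
    for i in range(0, h, patch_size):
        for j in range(0, w, patch_size):
            patches.append(image[i:i + patch_size, j:j + patch_size].reshape(-1))
    return np.stack(patches)  # shape: (num_patches, patch_size * patch_size * c)

rng = np.random.default_rng(0)
embed_dim = 768
patch_size = 16

image = rng.random((224, 224, 3))      # dummy 224x224 RGB image
patches = patchify(image, patch_size)  # (196, 768): 14x14 patches of 16*16*3 values

# In a real ViT-style encoder this projection is learned (or comes from a
# pre-trained image encoder); a random matrix stands in for it here.
proj = rng.standard_normal((patches.shape[1], embed_dim)) * 0.02
patch_embeddings = patches @ proj      # (196, 768): same shape as a token embedding sequence
```

The resulting rows sit in the same embedding space the language model consumes, which is why the model can attend over image patches and text tokens interchangeably.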
Isn't Sam Altman basically trying to do this with Stargate?
That's what I did, but sometimes it didn't work well because all of the obvious places were full, and if I went somewhere else people would think I was intentionally avoiding the crowds. It also meant I only talked to people who sat in certain places.
It was particularly hard to find people to talk to during lunch.
I just tried this and got a pretty decent song about learning from market signals, although I did need to mess around with inpainting to fix messed up lyrics (and you can see a few places where it gets confused near the end).
I'll have to play around with this more. Thanks for the idea!
I read something a while back (wish I remembered the source) about how the rotten meat thing is sort-of less gross than you're thinking, since fermented meat can taste good if you do it right (think: sausage and aged steak), and presumably ancient people weren't constantly sick.
Edit: I think the source is this: https://earthwormexpress.com/the-prehistory-of-food/in-prehistory-we-ate-fermented-foods/
Although the descriptions might make you appreciate modern food even more.
If you think your house's location is too dangerous to live in, it seems weird to convince a friend to move there.
After reading https://bessstillman.substack.com/p/please-be-dying-but-not-too-quickly, I've been wondering why no one has made a semantic search tool for clinical trials, or at least something that generates consistent tags. I've been wanting to make one but haven't had time yet.
This "learning from a teacher" failures also seem to point at the same problem where LLM's can't learn well from their own output. Sometimes you get output where a model correctly explains why its current approach doesn't work, and then it does the same thing over and over again anyway.
I read the whole thing and disagree. I think the disconnect might be what you want this article to be (a guide to baldness) vs what it is (the author's journey and thoughts around baldness mixed with gallows humor regarding his potential fate).
The Norwood/forest comparison gets used consistently throughout (including the title) and is the only metaphor used this way. Whether you like this comparison or not, it's not a case of AI injecting constant flowery language.
That said, setting audience expectations is also an important part of writing, and I think "A Rationalist's Guide to Male Pattern Baldness" is probably setting the wrong expectation.
I upvoted since I thought it was interesting and I learned a little bit.