Seems possible, but the post is saying "being politically involved in a largely symbolic way (donating small amounts) could jeopardize your opportunity to be politically involved in a big way (working in government)"
Yeah, I feel like in order to provide meaningful information here, you would likely have to be interviewed by the journalist in question, which can't be very common.
At first I upvoted Kevin Roose because I like the Hard Fork podcast and get generally good/honest vibes from him, but then I realized I have no personal experiences demonstrating that he's trustworthy in the ways you listed, so I removed my vote.
I remember being very impressed by GPT-2. I think I was also quite impressed by GPT-3 even though it was basically just "GPT-2 but better." To be fair, when I was feeling unimpressed by ChatGPT, I don't think I had actually used it yet. It did turn out to be much more useful to me than the GPT-3 API, which I tried out but didn't find that many uses for.
It's hard to remember exactly how impressed I was with ChatGPT after using it for a while. I think I hadn't fully realized how great it could be when the friction of using the API was removed, even if I didn't update that much on the technical advancement.
I remember seeing the ChatGPT announcement and not being particularly impressed or excited, like "okay, it's a refined version of InstructGPT from almost a year ago. It's cool that there's a web UI now; maybe I'll try it out soon." November 2022 was a technological advancement, but not a huge shift compared to January 2022 IMO.
Which part do people disagree with? That the norm exists? That the norm should be more explicit? That we should encourage more cross-posting?
It seems there's an unofficial norm: post about AI safety on LessWrong, post about all other EA stuff on the EA Forum. You can cross-post your AI stuff to the EA Forum if you want, but most people don't.
I feel like this is pretty confusing. There was a time when I didn't read LessWrong because I considered myself an AI-safety-focused EA but not a rationalist, until I heard somebody mention this norm. If we encouraged more cross-posting of AI stuff (or at least made the current norm more explicit), maybe the communities on LessWrong and the EA Forum would be more aware of each other, and we wouldn't get near-duplicate posts like these two.
(Adapted from this comment.)
Side note - it seems there's an unofficial norm: post about AI safety on LessWrong, post about all other EA stuff on the EA Forum. You can cross-post your AI stuff to the EA Forum if you want, but most people don't.
I feel like this is pretty confusing. There was a time when I didn't read LessWrong because I considered myself an AI-safety-focused EA but not a rationalist, until I heard somebody mention this norm. If we encouraged more cross-posting of AI stuff (or at least made the current norm more explicit), maybe we wouldn't get near-duplicate posts like these two.
I believe image processing used to be done by a separate model (an image captioner) that would generate a text description and pass it to the LLM. Nowadays, most frontier models are "natively multimodal," meaning the same model is pretrained to understand both text and images. Models like GPT-4o can even do image generation natively now: https://openai.com/index/introducing-4o-image-generation. Even though making 4o "watch in real time" is not currently an option as far as I'm aware, uploading a single image to ChatGPT should do basically the same thing.
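For reference, here's a minimal sketch of what a single image upload looks like through the API (using the official openai Python client; the model name, prompt, and image URL are just placeholders):

```python
# Rough sketch: pass an image directly to a natively multimodal model.
# Model name and image URL are placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what's happening in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/frame.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```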
It's true that frontier models are still much worse at understanding images than text, though.
This was fun to read! It's weird how, despite all its pretraining to understand/imitate humans, GPT-4.1 seems to be so terrible at understanding humor. I feel like there must be some way to elicit better judgments.
You could try telling GPT-4.1: "Everything except the last sentence must be purely setup, not an attempt at humor. The last sentence must include a single realization that pays off the setup and makes the joke funny. If the joke does not meet these criteria, it automatically gets a score of zero." You also might get a more reliable signal if you ask it to rank two or more jokes and give reward based on each joke's position in the ranking.
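Here's roughly what I mean by the ranking version (just a sketch; the judge model, prompt wording, and rank-to-reward mapping are placeholder assumptions, not anything from the post):

```python
# Sketch of "rank the jokes, then reward by rank." The judge model, prompt,
# and rank-to-reward mapping below are arbitrary choices for illustration.
from openai import OpenAI

client = OpenAI()

def rank_based_rewards(jokes: list[str], judge_model: str = "gpt-4.1") -> list[float]:
    numbered = "\n".join(f"{i + 1}. {joke}" for i, joke in enumerate(jokes))
    prompt = (
        "Rank the following jokes from funniest to least funny. Everything except "
        "the last sentence of a joke must be pure setup; the last sentence must "
        "contain a single realization that pays off the setup. Reply with the joke "
        "numbers only, funniest first, separated by commas.\n\n" + numbered
    )
    reply = client.chat.completions.create(
        model=judge_model,
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    order = [int(tok) - 1 for tok in reply.replace(",", " ").split() if tok.isdigit()]
    # Funniest-ranked joke gets reward 1.0, the last-ranked joke gets 0.0;
    # anything the judge fails to rank defaults to 0.
    rewards = [0.0] * len(jokes)
    for rank, idx in enumerate(order):
        if 0 <= idx < len(jokes):
            rewards[idx] = 1.0 - rank / max(len(jokes) - 1, 1)
    return rewards
```

Pairwise comparisons (two jokes at a time, reward whichever one the judge picks as funnier) would be even simpler to parse and probably less noisy.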
Actually, I tried this myself and was surprised by just how difficult it was to prompt a non-terrible reward model. I gave o4-mini the "no humor until the end" requirement, and it generated the following joke:
What does this even mean? It makes no sense to me. Is it supposed to be a pun on "pedigree" and "lineage"? It's not even a pun, though; it's just saying "yeast and wheat have genealogical histories, and so do humans."
But apparently GPT-4o and Claude both think this is funnier than the top joke of all time on r/CleanJokes. (Gemini thought the LLM-written joke was only slightly worse.) The joke from Reddit isn't the most original, but at least it makes sense.
Surely this is something that could be fixed with a little bit of RLHF... there's no way grading jokes is this difficult.