We shipped "draft comments" earlier today. Next to the "Submit" button, you should see a drop-down menu (with only one item), which lets you save a comment as a draft. Draft comments appear underneath the comment input on the posts they're responding to, and all of your comment drafts are also listed on your profile page, underneath your list of post drafts. Big thanks to the EA Forum for building the feature!
Please let us know if you encounter any bugs/mishaps with them.
Concerning! Intercom shows up for me on Firefox (macOS); I'll see if there's anything in the logs. How does the broken LLM integration present itself?
You have a typo: the second instance of `let belief = null;` should presumably be `let belief = undefined;`.
(Also, I think "It'd print an error saying that `foobar` is not defined" is false? Confirmed by going to the browser console and running that two-liner; it just prints `undefined` to the console.)
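For reference, here's a minimal sketch of the distinction I mean; the exact two-liner from the post may differ, so the `belief` binding below is just my guess at its shape:

```js
// A binding that is declared but holds undefined simply evaluates to
// undefined in the console; no error is raised.
let belief = undefined;
console.log(belief); // prints: undefined

// A "foobar is not defined" ReferenceError only appears when you reference
// an identifier that was never declared at all:
// console.log(foobar); // Uncaught ReferenceError: foobar is not defined
```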
Interesting mapping, otherwise!
- They very briefly discuss automated AI alignment research as a proposal for mitigating AI risk, but their arguments against that plan do not respond to the most thoughtful versions of these plans. (In their defense, the most thoughtful versions of these plans basically haven't been published, though Ryan Greenblatt is going to publish a detailed version of this plan soon. And I think that there are several people who have pretty thoughtful versions of these plans and haven't written them up (at least publicly), but do discuss them in person.)
Am a bit confused by this section - did you think that part 3 was awful because it didn't respond to (as yet unpublished) plans, or for some other reason?
There is very much demand for this book, in the sense that there are a lot of people who are worried about AI for agent-foundations-shaped reasons and want an introduction they can give to their friends and family who don't care that much.
This is true, but many of the surprising prepublication reviews are from people who I don't think were already up-to-date on these AI x-risk arguments (or at least hadn't given any prior public indication of their awareness, unlike Matt Y).
This is a valid line of critique, but it seems moderately undercut by the book's prepublication endorsements, which suggest that the arguments landed pretty well. Maybe they will land less well on the rest of the book's target audience?
(re: Said & MIRI housecleaning: Lightcone and MIRI are separate organizations and MIRI does not moderate LessWrong. You might try to theorize that Habryka, the person who made the call to ban Said back in July, was attempting to do some 4d-chess PR optimization on MIRI's behalf months ahead of time, but no, he was really nearly banned multiple times over the years and he was finally banned this time because Habryka changed his mind after the most recent dust-up. Said practically never commented on AI-related subjects, so it's not even clear what the "upside" would've been. From my perspective this type of thinking resembles the constant noise on e.g. HackerNews about how [tech company x] is obviously doing [horrible thing y] behind-the-scenes, which often aren't even in the company's interests, and generally rely on assumptions that turn out to be false.)
I don't believe that you believe this accusation. Maybe there is something deeper you are trying to say, but given that I also don't believe you've finished reading the book in the 3(?) hours since it was released, I'm not sure what it could be. (To say it explicitly: Said's banning had nothing to do with the book.)
Yeah, sadly this is an existing bug.
Thanks, fixed!
(Also, to clarify, we were already on React - it's mostly other bits of framework glue that got tossed out/replaced/etc.)