Also known as Max Harms. (I post AI alignment content under my other account.)
Not the same person as MaxH!
With Crystal, I just slammed them out there with pretty minimal effort. I gave Society away for free, and didn't make paperback copies until just recently. For Red Heart I thought the story might have broader appeal, and wanted to get over my allergy to marketing, so I reached out to a bunch of literary agents early this year. Very few were interested, and most gave no reason. One was kind enough to explain that as a white guy writing a book about China, it would be an uphill battle to find a publisher, and that I'd probably need a Chinese co-author to make it work. She estimated that optimistically I might be able to get it in stores in 2027. From my perspective that was way too slow, and since I already had experience self-publishing, I went down that route. Self-publishing is extremely easy these days, and can produce a product of comparable quality if you are competent and/or have a team. The main issue is marketing and building awareness; traditional publishing still acts as a gatekeeper in many ways. So I'm still extremely dependent on word-of-mouth recommendations.
Lovely to find yet another person who benefited from my stories. I hope you enjoy Red Heart! ❤️
Yeah, these are good questions. I mostly don't suggest people try to support themselves by writing unless they already know they're very good at storytelling, and even then it's hard/rare. Instead, I think it's good for people to experiment with it as a side thing, ideally in addition to some useful technical work. (I'm very blessed that I get to work as a researcher at MIRI, for example, and then go home and write stories that are inspired by my research.) Don't wait to be discovered by a literary agent; if you write something good, post it online! Only try to seriously monetize after you already have some success.
Regarding how to tell if your stories are good, I think the main thing is to get them in front of people who will be blunt, and find out what they say. LLMs are a good stepping-stone to this, if you're hesitant to get a real human to read your work, though you'll have to shape their prompt so that they're critical and not sycophantic. Writing groups can also be a good resource for testing yourself.
Just wanted to remind folks that this is coming up on Saturday! I'm looking forward to seeing y'all at the park. It should be sunny and warm. Feel free to send me requests for snacks or whatever.
Is there a minimal thing that Claude could do which would change your mind about whether it’s conscious?
Edit: My question was originally aimed at Richard, but I like Mikhail’s answer.
Value of information
Thanks for such a glowing review!! I'm so glad you heart the book!
I'd be curious about the specific ways in which you feel that Yunna is unrealistically strong or competent for a model around the size of GPT-6.5 (which is roughly what I was aiming for in the story). LessWrong has spoiler tags in case you want to get into the ending. (Use >! at the start of a line to black it out.)
The story actually starts in an alternate-timeline October 2023. I knew the book would be a period piece and wanted to lampshade that it's unrealistically early without making it distracting. Glad to hear you didn't pick up on the exact date.
Just to defend myself about AI 2027 and timelines: I think a loss-of-control event in 2028 is very plausible, but as I explain in the piece you link in the footnote, I actually expect one in the early 2030s, due to various bottlenecks and random slowdowns. But also, the error bars are wide. I think the onus should be on people to explain why they don't think a loss of control in 2028 is possible, given the natural uncertainty of the future and the difficulty of prediction.
Regardless, thanks again. :)