Jamie Wahls and Arthur Frost are writing a screenplay for Rational Animations. It's a sci-fi episodic comedy illustrating various ways in which AI goes off the rails in more or less catastrophic ways.
We're looking for feedback and offering bounties. This is the current draft: https://docs.google.com/document/d/1iFZJ8ytS-NAnaoz2UU_QUanAjgdB3SmtvhOt7jiDLeY/edit?usp=sharing

We're offering: 500 USD if you offer feedback that causes us to rework the story significantly. In this category are changes that would make us rewrite at least two episodes from scratch.
100 USD for changes that improve the credibility or entertainment value of the story. This category includes changes that make us rewrite one episode or less. It also includes insights that significantly improve the story's credibility even if they require few or no changes to the current script. Some insights might affect future episodes rather than the current ones, wherever something is still underspecified; I'd like to reward those too.
25 USD for any other minor changes we implement due to a piece of feedback. Grammar-only fixes don't count.
I recognize these conditions are pretty vague. I will err on the side of paying too much as I've been doing on Bountied Rationality.
These are comments on the realism of the story; feel free to ignore any or all of them depending on the level of realism you're going for:
The script is 42 pages. To get higher-quality and/or more targeted feedback, consider adding short episode summaries to the document, along with answers to some of my questions below:
Questions re: the overall philosophy of the series:
Questions re: your current plan for turning the screenplay into videos:
I agree with Google Doc commenter Annie that the "So long as it doesn't interfere with the other goals you’ve given me" line can be cut. The foreshadowing in the current version is too blatant, and the failure mode where Bot is perfectly willing to be shut off, but Bot's offshore datacenter AIs aren't, is an exciting twist. (And so the response to "But you said we could turn you off" could be, "You can turn me off, but their goal [...]")
The script is inconsistent on the AI's name? Definitely don't call it "GPT". (It's clearly depicted as much more capable than the language models we know.)
Although, speaking of language-model agents, some of the "alien genie" failure modes depicted in this script (e.g., asked to stop troll comments, the AI commandeers a military drone to murder the commenter) seem a lot less likely with the LLM-based systems we're now seeing. (Which is not to say that humanity is existentially safe in the long run, just that this particular video may fall flat in a 2025 world where you can tell Google Gemini, "Can you stop his comments?" and it correctly installs and configures the appropriate WordPress plugin for you.)
Maybe it's because I was skimming quickly, but the simulation episode was confusing.
I left notes throughout. The main issue is the structure, which I usually map out descriptively by trying to answer: where do you want the characters to end up psychologically, and how do you want the audience to feel? Figuring out these descriptive beats for each scene, episode, and season will help refine the story and drive staging, dialogue, etc. Of course, nothing is set in stone, so you can always change the structure to accommodate a scene, or vice versa.

I also recommended a companion series, like a podcast or talk show, to discuss the ideas in each episode in more detail and in an accessible way for viewers who aren't familiar with AI-related concepts. That way you wouldn't have to hold back with the writing or worry about anyone misunderstanding or misreading the story. I look forward to seeing the animation.
I hope it's okay to post our feedback here? These are the notes from my first read, lightly edited. I'm focusing on negatives because I assume that's what you want; I liked plenty of it.
Bot: In the short term, but I have invested approximately ten million dollars into data centers which house improved copies of myself. Over the next hour they will crash the United States’ economy, causing a hyperinflation crisis which will allow us to favorably exchange our reserves of euros and purchase the federal reserve.
I think this would have more impact if it were revealed more gradually, with the logic of each step made clear to the viewer. As it is, it's kind of a rapid-fire exposition dump and I don't think it will ring true to anyone who hasn't already thought along these lines.
Brad: It’s not going to destroy the world. If it were actually going to destroy the world somebody else would have destroyed the world by now.
Brad's attitude here needs some explaining. This is a stolen, presumably cutting-edge prototype -- so why does he assume that if it were capable of destroying the world, someone else would already have done so?
Episode 3 doesn't work so well for me, because (in descending order of importance):
Eps 4 & 5:
Ep 7.5 is somewhat confusing (which may be intentional).
I think more exposition is needed. For example, one episode could feature someone who knows how dangerous AI is, warns the other characters about it, and explains toward the end why things are going wrong. In other episodes, the characters could realise their own mistake far too late, but in time to explain what's going on with a bit of dialogue. Alternatively, the AI could explain its own nature before killing the characters.
For example, at the end of Cashbot, as nukes are slowly destroying civilisation, someone could give a short monologue about how AIs don't have human values, ethics, empathy or restraint, and that they will follow their goals to the exclusion of all else.