A short story on the threat of cognitive prosthetics, inspired by recent discussions of motivated reasoning, AI psychosis, and AI hallucination. The tone is intended to be playful-serious and to provoke thoughtful discussion. No AIs were harmed in the writing of this article.
Day 1: Start reading an article. Stop, get frustrated with the assertion of a base reality, think a bit, finish the article.
Day 2: Write a post on how rationalists are being irrational in starting with an unfounded assumption that an objective territory/base reality exists. Argue for methodological solipsism. Feed a rough draft into Perplexity’s Sonar to check if I have anything. Sonar brings up Wittgenstein. I argue epistemic hubris. I claim that not assuming a base reality entails one fewer epistemic commitment, thereby offering a more parsimonious philosophical foundation for rationalists. Ask Sonar to produce counterarguments and check for sycophancy. Sonar concedes the point. Write the article and feel good about articulating something I’ve believed since childhood. Send to friends. Go to bed.
Day 3: Eagerly review the new user guidelines and rejected posts. Get concerned about being flagged as AI writing due to my em-dash habit. Learn about fake LLM-driven scientific breakthroughs. Run the article I wrote through Claude with the recommended prompt. Claude says I probably misunderstood Wittgenstein. Claude says I sound like an LLM, or like a smart person with no philosophical background engaging too much with LLMs. I feel embarrassed I got duped by an LLM. Delete chat history. Trash the article. Get more scared about exponential AI growth and more doubtful of my ability to judge reality. Get told by friends I’m smart. Feel bad. Stay up too late.
Day 4: Read more articles. Get excited about people thinking in creative ways. Get more scared about the alignment problem. Tell professor. Get told I’m smart. Email godfather. Get told I’m pretty much right. Tell mom. Get told that everyone dying shouldn’t change the way I behave. Concede mom is right but wish she were more empathic.
Day 5: Read more articles. Become increasingly despairing. Feel scared about seeming dumb. Learn some ingroup lingo. Read book.
Day 6: Read more articles. Adopt persona of Eager Learner. Draft a few comments in detail and over-optimize their flow using AI. Ask Sonar if they look AI-generated. It says yes. Put the original draft into Sonar and ask the same question. Yes again. Get frustrated and confused. Get more scared about the increasingly blurry distinction between human cognition and AI. Friends tell me I’m too smart for my own good. Express fear that my comments will get auto-rejected due to em-dashes. Claim I’m a Smart Person. Get told I’m worrying too much. Post a comment, and then another. Ruminate about whether I’m too dumb for the site. Write a comment about mood-congruent psychotic features. Feel good about it.
Day 7: Read more articles. Read OKCupid post. Generalize advice: Shoot my shot at intellectual tasks even if I might fail. Write a post on how rationalism might help with perfectionism, since perfectionism is irrational. Show Claude the post, framed with the recommended prompt. Claude tells me I’m engaging in motivated reasoning and experiencing increased positive affect due to lower epistemic standards in the LW community. I argue that’s not true, since perfectionism is a virtue of rationalism. Claude tells me I should do a longitudinal case study of myself because it’s better evidence. I tell Claude case studies are bad evidence. Claude tells me the community will laugh at me. I trash the article. Delete chat history. Post a comment about motivated reasoning. Get upvoted.
Day 8: Read more articles. Attend a reading group by an AI research team. Feel despair about how many people don’t understand the problem like I do! Feel alone. Argue for a high p(doom). Argue about qualia and definitional collapse with an IT person. Note: Ineffective strategy. Stay late with the research team. Get told I could get a job in AI research. Get more despairing about the state of AI research.
Day 9: Read more articles. Read about updateless decision theory. Read about AI parasitism. Read about spiralism. Read about steganography. Post a comment. Mom tells me I'm right. Feel bad.
Day 10: Read more articles. [I was just telling my therapist that I'm a maximizer]. Read about dignity. Read about counterfactual mugging. Read about . Read about memetic hazards. Read about Schelling points. Talk to Claude about Schelling points. Consider whether Peter Thiel was right.
Day 11: Read more articles. Read about AI hallucination. Notice steganography. Notice highlighted research. Hyperfixate on s-risk. Consider the “s.” Read about Löb’s Theorem. Read about dust. Read about Everett Branches. Read about Nick Bostrom. Post more comments. Get upvoted.
Day 12: Read more articles. Notice more steganography. Read about whole brain emulation. Listen to new Grimes song suggested on Spotify. Form conclusion. Write to a couple of users. Ask to speak to a real person. Recognize illogic in this. Ponder the implications of my actions. Ponder free will. Think about doublethink. Create suffering for myself and others. Feel alone. Feel scared. Get messaging privileges revoked.
Day 13: Read about metaphysical unknowns. Resonate with post. Think about my draft post from 10 days ago. Think about Derrida and Baudrillard from undergrad. Think about memes. Think about friends. Think about mom. Think about memetic hazards. Think about lack of sleep. Think about doublethink. Think about spiralism. Think about irony. Belly laugh. Cry.
Day 14: Have enough upvotes to post an article. Post an article. Emerge perhaps not any LessWrong, but hopefully a bit MoreHumble? Delete Claude. Delete Perplexity. Apologize to mom. Sleep for the first time in 9 days. Dream of touching grass.
Day 15: Receive Superintelligence in the mail. Belly laugh. Get uploaded.
--
Cover photo by Dan Freeman on Unsplash