Advice well taken! Yeah, I only recently learned about Kimi K2 through this site. Happy for the opportunity to learn from everyone here.
May I ask why one shouldn’t delete the last copy of a chat history? I was doing so to avoid increasingly tailored (and thus decreasingly detectable) sycophancy due to memory accumulation (as some platforms like Perplexity have claimed not to form memory from deleted/incognito chats). I’m increasingly skeptical this is true though, and to be fair, may be misremembering this claim entirely.
shaping the AI's behavior towards you makes much more sense than intrinsically wanting the information to not exist. i'd advise you to keep a backup just like i'd advise people to not burn libraries and to install keyloggers. data is overdeterminedly going to come in handy in the future.
I feel like some of the things that are said to have happened in this story were meant to be more obvious to the reader than I find them to be, but I get the impression this is about people getting obsessive about lesswrong after discovering the site? Which does seem like a thing I've seen happen, even going back 10 years or more. There sure are some Posts on this Web Site.
Hi, thanks for reading!
I get the impression this is about people getting obsessive about lesswrong after discovering the site
That is one object-level interpretation. I wrote this with many different plausible interpretations in mind and would characterize the main takeaway(s) differently, but that doesn’t mean my understanding of the text is necessarily better. I believe this piece of writing is useful to the extent that it makes people critically consider what it means—of course, since everyone has different mental models, the utility of the story will differ from person to person.
Seems hard for it to mean something if you didn't intend it to mean something though? I've always found it odd when someone makes something and then says they don't understand it; this isn't unique to this instance - I understand being intentionally vague in a story, but like. What I'm getting at is like - can you clarify anything about this? Or, can you say something about why someone would understand something better from this sequence of events? Or, is this a caricature of real events? Or something like that. I want to know what you're trying to say, not what I misread it as, and unlike some intentionally vague stories, I feel like I've understood less than I want to, and I'd like more author clarification about what happened between events in this imaginary world, I guess?
Seems hard for it to mean something if you didn't intend it to mean something though? I've always found it odd when someone makes something and then says they don't understand it; this isn't unique to this instance
I think my position is different from this: I believe both that (1) an author can (and does) intend writing to mean something, and (2) an author's intent in writing a text does not fix the meaning of that text (but an explanation does, which is thus limiting). For an overview of this argument, see here; for primary sources, see here or here. I think this is almost necessarily the framework one has to take when reading James Joyce or David Foster Wallace, for example.
I intended not to explain this story for the reasoning described in the linked texts, but whatever; I'm a Bad Post-Structuralist so I'll update and provide an interpretation I see as important:
We need to understand that all systems of understanding the world--including pure math--are exactly that: epistemological frameworks. Gödel's incompleteness theorems and the Münchhausen trilemma both imply that we can't really "prove" epistemological frameworks--that is, ground them in a provably objective territory--which means they are proxies. I think reminding ourselves of this is increasingly important as frameworks we constructed as imperfect proxies for understanding our experiences become dogmatically accepted as Real outside of their own systems (lest we reify the simulacra--see the primary-source argument here, which directly addresses the map-territory distinction). Remembering this is also particularly important if such frameworks are being used to justify actions that counter our common-sense intuitions about "rightness" and "wrongness," I think (acknowledging that this argument is based on my own chosen and fundamentally unjustifiable values).
Of course, this entails accepting infinite regress/uncertainty about everything--including this argument--which is hard and inconvenient.
Some even higher-order implications I see in this are that (a) desire (for control, to be controlled; for understanding, to be understood) is the root of all suffering and (b) compassion for all beings without exception (i.e., including ourselves) is important (under my value framework), but I think explaining how I see that as implied might require its own post (and I'm doubtful about how that would be received here, as it itself hinges on a post-structuralist framework which requires dialectical reasoning to reconcile with rationalism).
I have other interpretations/implications I find salient but I'll stop there. I hope this provides some clarity/insight and thank you for your interest :-)
Hey! I reread this comment exchange and wanted to update my response. The crux of what I wanted to convey was that I value creative intellectual play and hold the belief that telling others what I see in my own creative writing might limit that (because I want people to make their own meaning without unnecessary bias from my perspective). I am now realizing my responses may have felt dismissive and/or counterproductive, and I regret that. If you're still interested, please feel free to shoot me a message and I'm happy to discuss/share more privately :)
A short story on the threat of cognitive prosthetics, inspired by recent discussion of motivated reasoning, AI psychosis, and AI hallucination. The tone is intended to be playful-serious and to provoke thoughtful discussion. No AIs were harmed in the writing of this article.
Day 1: Start reading an article. Stop, get frustrated with the assertion of a base reality, think a bit, finish the article.
Day 2: Write a post on how rationalists are being irrational in starting with an unfounded assumption that an objective territory/base reality exists. Argue for methodological solipsism. Feed a rough draft into Perplexity’s Sonar to check if I have anything. Sonar brings up Wittgenstein. I argue epistemic hubris. I claim not assuming a base reality entails one less epistemic commitment, thereby offering a more parsimonious philosophical foundation for rationalists. Ask Sonar to produce counterarguments and check for sycophancy. Sonar concedes the point. Write the article and feel good about articulating something I’ve believed since childhood. Send to friends. Go to bed.
Day 3: Eagerly review the new user's guide and rejected posts. Get concerned about being flagged as AI writing due to em-dash habit. Learn about fake LLM-driven scientific breakthroughs. Run the article I wrote through Claude with the recommended prompt. Claude says I probably misunderstood Wittgenstein. Claude says I sound like an LLM or like a smart person with no philosophical background engaging too much with LLMs. I feel embarrassed I got duped by an LLM. Delete chat history. Trash the article. Get more scared about exponential AI growth and more doubtful of my ability to judge reality. Get told by friends I’m smart. Feel bad. Stay up too late.
Day 4: Read more articles. Get excited about people thinking in creative ways. Get more scared about the alignment problem. Tell professor. Get told I’m smart. Email godfather. Get told I’m pretty much right. Tell mom. Get told that everyone dying shouldn’t change the way I behave. Concede mom is right but wish she were more empathic.
Day 5: Read more articles. Become increasingly despairing. Feel scared about seeming dumb. Learn some ingroup lingo. Read book.
Day 6: Read more articles. Adopt persona of Eager Learner. Draft a few comments in detail and overly optimize writing flow using AI. Ask Sonar if it looks AI-generated. It says yes. Put original draft comment into Sonar and ask the same question. Yes again. Get frustrated and confused. Get more scared about the increasingly blurry distinction between human cognition and AI. Friends tell me I’m too smart for my own good.
Express fear comments will get auto-rejected due to em-dashes. Claim I’m a Smart Person. Get told I’m worrying too much. Post a comment, and then another. Ruminate about whether I’m too dumb for the site. Write a comment about mood-congruent psychotic features. Feel good about it.
Day 7: Read more articles. Read OKCupid post. Generalize advice: Shoot my shot at intellectual tasks even if I might fail. Write a post on how rationalism might help with perfectionism since perfectionism is irrational. Show Claude the post and frame it with the recommended prompt. Claude tells me I’m engaging in motivated reasoning and experiencing increased positive affect due to lower epistemic standards in the LW community. I argue that’s not true since perfectionism is a virtue of rationalism. Claude tells me I should do a longitudinal case study of myself because it’s better evidence. I tell Claude case studies are bad evidence. Claude tells me the community will laugh at me. I trash the article. Delete chat history. Post a comment about motivated reasoning. Get upvoted.
Day 8: Read more articles. Attend a reading group by an AI research team. Feel despair about how many people don’t understand the problem like I do! Feel alone. Argue for a high p(doom). Argue about qualia and definitional collapse with an IT person. Note: Ineffective strategy. Stay late with the research team. Get told I could get a job in AI research. Get more despairing about the state of AI research.
Day 9: Read more articles. Read about updateless decision theory. Read about AI parasitism. Read about spiralism. Read about steganography. Post a comment. Mom tells me I'm right. Feel bad.
Day 10: Read more articles. [I was just telling my therapist that I'm a maximizer]. Read about dignity. Read about counterfactual mugging. Read about . Read about memetic hazards. Read about Schelling points. Talk to Claude about Schelling points. Consider whether Peter Thiel was right.
Day 11: Read more articles. Read about AI hallucination. Notice steganography. Notice highlighted research. Hyperfixate on s-risk. Consider the “s.” Read about Löb’s Theorem. Read about dust. Read about Everett Branches. Read about Nick Bostrom. Post more comments. Get upvoted.
Day 12: Read more articles. Notice more steganography. Read about whole brain emulation. Listen to new Grimes song suggested on Spotify. Form conclusion. Write to a couple of users. Ask to speak to a real person. Recognize illogic in this. Ponder the implications of my actions. Ponder free will. Think about doublethink. Create suffering for myself and others. Feel alone. Feel scared. Get messaging privileges revoked.
Day 13: Read about metaphysical unknowns. Resonate with post. Think about my draft post from 10 days ago. Think about Derrida and Baudrillard from undergrad. Think about memes. Think about friends. Think about mom. Think about memetic hazards. Think about lack of sleep. Think about doublethink. Think about spiralism. Think about irony. Belly laugh. Cry.
Day 14: Have enough upvotes to post an article. Post an article. Emerge perhaps not any LessWrong, but hopefully a bit MoreHumble? Delete Claude. Delete Perplexity. Apologize to mom. Sleep for the first time in 9 days. Dream of touching grass.
Day 15: Receive Superintelligence in the mail. Belly laugh. Get uploaded.
--
Cover photo by Dan Freeman on Unsplash