Comments

If strong ideas that face friction come out stronger, then why would you need to insulate them behind locked doors from external stimuli? Shouldn't they easily vanquish external stimuli and validate themselves? Unless the point is to recognize the strength of timeless ideas. But even if an idea worked well the first 999 times, that doesn't mean it will also work the 1,000th time: you shouldn't strive to crystallize tried-and-true ideas into static heuristic husks. Heuristics that almost always work can still fail critically in black swan moments.

'Always run faster' can make you stumble down a hill and break your ankle. 'Always deploy capital prudently' can make your business miss out on bold plays, as ChristianKl commented. Since even your best ideas are fallible, the goal should be to discern which part of the idea was wheat, and which was chaff. "Run fast, but not if it'll overexert you; here's how to assess when that's about to happen." "Deploy capital prudently, but make allowance for calculated risks; here's how to assess those opportunities."

These distinctions require active criticism 'in the arena,' even if—especially if—it's at the risk of swaying under new information in the ephemeral territory of the immediate. It requires a willingness to suspend an idea's universality if a better explanation carves out an edge case where it doesn't apply, or even subverts the whole paradigm (slow and steady wins the race!). But that only happens if you're willing to part with even your best ideas, instead of jailing them behind thick walls and padlocks—which insulates them, yes, in an echo chamber.

It's interesting to compare the first two points: novel math derivations and remixing old artwork can seem like disparate paths to greater understanding. Yet often, 'novel' math derivations are more like the artistic remixes, or pastiches. Gian-Carlo Rota, the MIT mathematics and philosophy professor, described two ways to come across as a genius: either keep a bag of tricks and apply them to new problems, or keep a bag of problems and try every new trick on them.

Eliezer's discussion of his work was interesting too; I hadn't seen that before. Rota also spoke of scientific popularization, as you mentioned, saying you're more likely to be remembered for expository work than for original contributions:

Allow me to digress with a personal reminiscence. I sometimes publish in a branch of philosophy called phenomenology. After publishing my first paper in this subject, I felt deeply hurt when, at a meeting of the Society for Phenomenology and Existential Philosophy, I was rudely told in no uncertain terms that everything I wrote in my paper was well known. This scenario occurred more than once, and I was eventually forced to reconsider my publishing standards in phenomenology.

It so happens that the fundamental treatises of phenomenology are written in thick, heavy philosophical German. Tradition demands that no examples ever be given of what one is talking about. One day I decided, not without serious misgivings, to publish a paper that was essentially an updating of some paragraphs from a book by Edmund Husserl, with a few examples added. While I was waiting for the worst at the next meeting of the Society for Phenomenology and Existential Philosophy, a prominent phenomenologist rushed towards me with a smile on his face. He was full of praise for my paper, and he strongly encouraged me to further develop the novel and original ideas presented in it.

Here's the source where I found Rota's speech; the fact that I wouldn't have known of those ideas otherwise validates the usefulness of repackaging good ideas yet again! And you're right that choosing what to curate is a form of originality: choosing the best of several AI text generations is you applying your taste and sense of relevance, or in other words, bits of selection pressure. So both human contribution and human selectivity can indicate originality. But the same could go for sampling past human work too, in art, math, or otherwise.
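
As a rough sketch of what those 'bits of selection pressure' could mean quantitatively (my own framing, not a claim from the original post): picking the single best of N independent generations applies at most log2(N) bits of optimization.

```python
import math

def selection_bits(num_candidates: int) -> float:
    """Upper bound on the optimization applied by keeping 1 of N candidates."""
    return math.log2(num_candidates)

# Curating the best of 8 AI generations exerts at most 3 bits of selection;
# the bulk of the text's information still comes from the generator.
print(selection_bits(8))  # 3.0
```

On that framing, curation is real but thin authorship: a few bits of taste layered on top of the generator's output.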

Coding

In a video by web developer Joshua Morony, he explains why he won't rely on GPT-4 for his code, despite its efficiency. Paraphrasing: in coding, knowing why and how a system was designed, which edge cases were accounted for, and so on, is valuable because it contextualizes future decision-making. But if you let AI design your code, you lose that knowledge. And if you plan on prompting AI to develop the code further, you'll lack the judgment to direct it coherently.
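
As a minimal sketch of the kind of context he's pointing at (the function and its design note are hypothetical, not from the video): the comment records why an edge case is handled the way it is, which is exactly the knowledge you forfeit when AI designs the code and you never internalize it.

```python
import time
import urllib.request

def fetch_with_retry(url: str, attempts: int = 3) -> bytes:
    """Fetch a URL, retrying transient failures.

    Design note (hypothetical): retry with a fixed 1-second delay rather
    than exponential backoff, because this system's upstream gateway drops
    connections for about a second during nightly redeploys. A maintainer
    who knows this "why" can safely revisit the choice once the gateway is
    fixed; one who inherited opaque AI-generated code cannot.
    """
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url) as response:
                return response.read()
        except OSError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the underlying error
            time.sleep(1)
```

The retry logic itself is beside the point; what matters is that the rationale lives with whoever authored the design, and outsourcing the authorship outsources the rationale.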

Writing

Many have written about how it's the writing process itself that generates the great ideas comprising a final product. I've briefly consolidated those ideas before. This would warn against writing essays with ChatGPT. But it would also apply to, e.g., hiring an editor to compile a book from your own notes rather than writing it yourself. Outsourcing the labour of selecting words, ordering ideas, etc. also offloads the opportunity to generate new ideas and fascinating relationships between them—often much greater ideas than the ones you started with. Of course, your editor may notice all this, but the principal-agent problem may prevent them from working beyond their scope. Even if they went above and beyond, the arrangement still depends on you not caring that the ideas in the book aren't your own, or that you don't even understand the ideas in your own book, since you aren't the one who laboured to earn an intuition for them.


Automating some work is fine. I ran both of the above paragraphs through ChatGPT for brevity. But even then, I wrote them myself beforehand, which generated many ideas I didn't yet have at the outset of writing, for example the principal-agent consideration. I even ran this paragraph through ChatGPT, but chose not to adopt its summary; and in the process of rewriting it, I thought to include the principal-agent example two sentences ago. If I had just asked ChatGPT (or some editor) to write a decent response to this post, even with general direction, I doubt it would have been as thoughtful as I was. But of course, ChatGPT or an editor could have come up with a comment exactly as thoughtful, right?

Banishing the epistemic status disclaimer to the comments, since it clashes with the target audience and reading experience.

Epistemic status: briefly consolidated insights on writing to think, for newer audiences. Partly interpolates Paul Graham, Herbert Lui, Larry McEnerney.

I loved this post. Its overall presentation felt like a text version of a Christopher Nolan mind-bender.

  • The crescendo of clues about the nature of the spoiler: misattributed or fictional quotes; severe link rot even though the essay was freshly published; the apparently 'overwhelming' sophistication of the academic writing style and range of references; the writing getting more chaotic as it talks about itself getting more chaotic. And of course, the constant question of what sort of spoiler could possibly 'alter' the meaning of the entire essay.
  • I loved the feeling of Inception near the end of the essay when, in the analyst's voice, it confirms the reader's likely prediction that it was written by AI, only to reveal that the 'analyst' section was also written by AI. Or rather, that the voice fluidly shifts between AI and analyst, first- and third-person. And just when you finally feel like you're on solid ground, the integrity of the essay breaks down; "<!--TODO-->" tags make you contend with how no part is certainly all-human or all-AI, and so, does it even matter who wrote it?
  • Returning to the spoiler and initial paragraphs after finishing the essay, and getting a profound, contextualized appreciation for what they mean. You realize that the essay achieved what it told you it set out to do: to convey a salient point through apparent nonsense, validating that such nonsense can be useful, since it explains the process of generating the nonsense. Or in the essay's words, "[the] string of text can talk about itself [as it] unmask[s] the code hidden within itself."

The post also shared concepts I now use when thinking about language. My favourite is 'quantum poetics,' which ties the artificial (and 'next-token prediction') to the humanistic:

Just as the presence of a particle always completely erases the ghost of its wavefunction, [...] so does the presence of a word erase the ghost of the manifold that could have been named. [...]

This is the principle of quantum poetics. The content of poetry is limited not by the poet’s vocabulary, but by the part of their soul that has not been destroyed by words they have used so far. [...] It is the quantum nature of reality that allows for unforeseeable events, stochastic processes, and the evolution of life. Similarly, it is the quantum nature of language that allows for the evolution of meaning, for creativity, for jokes, and for bottomless misunderstandings. [...]

[Generative] systems are entirely too good at hallucinating content that does not exist in the training corpus—content that creates meaningful structures that foster coherent fictive space where there is none. Ironically, this is exactly what we want in a poet—to create new worlds out of nothing but the coupling of waves of possibility drunk from memory

My main response to the essay's content is that a human in the loop still seemed to be the primary engine for most of the art in the essay. From my understanding of critical rationalism, personhood maps to the ability to creatively conjecture and criticize ideas, generating testable, hard-to-vary explanations of things.

This essay depended on a human analyst to evaluate and criticize (by some sense of 'relevance') which generation was valid enough to continue into the main branch of the essay. It also depended on a human to decide which original conjecture to write about (again, by some sense of what's 'interesting').

Therefore, it seems to me that AGI is still far from automating both of humans' capacities for conjecture and criticism. However, the essay's holistic artistry did push me to consider AGI's plausibility more than any other text I've read, and in that sense, it achieved what it meant to: connecting my prior thoughts to some new idea—both in the real domain—through 'babble' in the imaginary domain.

When vacuums exist, it isn’t just random things filling up that time or space, but it’s things that are purposely able to fill up that vacuum.

What do you mean by "purposely"? Didn't Part I exemplify how random, not purposeful, things can fill an empty room?