Disclaimer: English is not my first language. I used language models (Claude Sonnet 4, GPT-4o) to help edit grammar and clarity after writing my ideas, and AI tools (OpenAI Deep Search, Google NotebookLM) for research assistance. All core concepts, experiments, and technical work are my own. Unedited AI content is marked in collapsible sections.
TL;DR This article challenges the dominant "instruction-following assistant" paradigm in prompt engineering. Instead, I explore the possibility of interacting with language models as navigators of semantic space—focusing on placing strategic coordinates rather than issuing detailed instructions.
Key insights
- Human insight follows a predictable neural pattern: expand possibilities, filter out noise, then bind distant concepts together
- Language models appear to use a remarkably similar