Johannes C. Mayer

Check out my Biography.

Comments

Johannes C. Mayer's Shortform
Johannes C. Mayer · 1y

Typst is better than LaTeX

I started to use Typst, and I feel a lot more productive in it. LaTeX feels sluggish; Typst doesn't slow me down when typing math or code. Its most important features are the online collaborative editor and the very fast rendering. Here are some more:

  • It has an online collaborative editor.
  • It compiles instantly (at least for my main 30-page document)
  • The online editor has Vim support.
  • It's free.
  • It can syntax highlight lots of languages (e.g. LISP and Lean3 are supported).
  • Its embedded scripting language is much easier to use than LaTeX macros (see the small example after this list).
  • The paid version has Google Doc-style comment support.
  • It's open source and you can compile documents locally, though the online editor is closed source.
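
For a taste of the scripting, here is a small Typst snippet of my own (an illustration, not from the Typst docs): an ordinary recursive function, defined inline and called directly from markup.

#let fib(n) = if n <= 1 { n } else { fib(n - 1) + fib(n - 2) }

The first eight Fibonacci numbers are #range(8).map(fib).map(str).join(", ").

The LaTeX-macro equivalent would need expansion tricks or a full package; in Typst it is just a function.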

Here is a comparison of encoding the Game of Life in logic:

LaTeX

$$
\forall i, j \in \mathbb{Z}, A_{t+1}(i, j) = \begin{cases}
                                0 &\text{if} \quad A_t(i, j) = 1 \land N_t(i, j) < 2 \\
                                1 &\text{if} \quad A_t(i, j) = 1 \land N_t(i, j) \in \{2, 3\} \\
                                0 &\text{if} \quad A_t(i, j) = 1 \land N_t(i, j) > 3 \\
                                1 &\text{if} \quad A_t(i, j) = 0 \land N_t(i, j) = 3 \\
                                0 &\text{otherwise}
                              \end{cases}
$$

Typst

$
forall i, j in ZZ, A_(t+1)(i, j) = cases(
                                0 "if" A_t(i, j) = 1 and N_t(i, j) < 2 \
                                1 "if" A_t(i, j) = 1 and N_t(i, j) in {2, 3} \
                                0 "if" A_t(i, j) = 1 and N_t(i, j) > 3 \
                                1 "if" A_t(i, j) = 0 and N_t(i, j) = 3 \
                                0 "otherwise")
$

Typst in Emacs Org Mode

Here is some Elisp that treats LaTeX blocks in Emacs Org mode as Typst math when exporting to HTML (formulas are rendered and embedded as SVG images):

;;;; Typst Exporter
;;; This exporter requires that you have inkscape and typst in your path.
;;; Call org-html-export-to-html-with-typst

;;; TODO
;;; - Signal an error if inkscape or typst is not installed.
;;; - Make the exporter show up in the org-dispatch menu, so we don't
;;;   always export only to output.html.
;;; - Automatically set up the HTML header, and possibly also automatically start the server as described in: [[id:d9f72e91-7e8d-426d-af46-037378bc9b15][Setting up org-typst-html-exporter]]
;;; - Make it such that the temporary buffers are deleted after use.


(require 'org)
(require 'ox-html) ; Make sure the HTML backend is loaded

(defun spawn-trim-svg (svg-file-path output-file-path)
  "Trim SVG-FILE-PATH to its drawing area with inkscape; write OUTPUT-FILE-PATH."
  ;; Run synchronously, so the trimmed SVG exists before the HTML references it.
  (call-process "inkscape" nil nil nil
                svg-file-path
                "--export-area-drawing"
                "--export-plain-svg"
                (format "--export-filename=%s" output-file-path)))

(defun correct-dollar-signs (typst-src)
  "Turn the LaTeX-style $$ delimiters of TYPST-SRC into Typst's $ ... $."
  (replace-regexp-in-string "\\$\\$$"
                            " $" ; replace the trailing $$ with ' $'
                            (replace-regexp-in-string "^\\$\\$" "$ " ; and the leading $$ with '$ '
                                                      typst-src)))

(defun math-block-p (typst-src)
  "Return non-nil if TYPST-SRC is a $$ ... $$ display-math block."
  (string-match "^\\$\\$\\(\\(?:.\\|\n\\)*?\\)\\$\\$$" typst-src))

(defun html-image-centered (image-path)
  (format "<div style=\"display: flex; justify-content: center; align-items: center;\">\n<img src=\"%s\" alt=\"Centered Image\">\n</div>" image-path))

(defun html-image-inline (image-path)
  (format " <img hspace=3px src=\"%s\"> " image-path))

(defun spawn-render-typst (file-format input-file output-file)
  ;; Run synchronously, so the rendered file exists before inkscape trims it.
  (call-process "typst" nil nil nil "compile" "-f" file-format input-file output-file))

(defun generate-typst-buffer (typst-source)
  "Given TYPST-SOURCE code, make a buffer with this code and the necessary preamble."
  (let ((buffer (generate-new-buffer (generate-new-buffer-name "tmp-typst-source-buffer"))))
    (with-current-buffer buffer
      (insert "#set text(16pt)\n")
      (insert "#show math.equation: set text(14pt)\n")
      (insert "#set page(width: auto, height: auto)\n")
      (insert typst-source))
    buffer))
  
(defun embed-math (is-math-block typst-image-path)
    (if is-math-block
	(html-image-centered typst-image-path)
        (html-image-inline typst-image-path)))

(defun generate-math-image (output-path typst-source-file)
  "Render TYPST-SOURCE-FILE to an SVG at OUTPUT-PATH: compile with typst, then trim with inkscape."
  (let ((raw-typst-render-output (make-temp-file "my-temp-file-2" nil ".svg")))
    (spawn-render-typst "svg" typst-source-file raw-typst-render-output)
    (spawn-trim-svg raw-typst-render-output output-path)))

(defun my-typst-math (latex-fragment contents info)
  "Render LATEX-FRAGMENT as Typst math and return HTML embedding the SVG."
  ;; Extract the LaTeX source from the fragment's plist.
  (let* ((typst-source-raw (org-element-property :value latex-fragment))
         (is-math-block (math-block-p typst-source-raw))
         (typst-source (correct-dollar-signs typst-source-raw))
         (file-format "svg") ;; This is the only supported format.
         (typst-image-dir "./typst-svg")
         (typst-buffer (generate-typst-buffer typst-source)) ; buffer of full typst code to render
         (typst-source-file (make-temp-file "my-temp-file-1" nil ".typ"))
         ;; The name is unique for every typst source we render, to enable caching.
         (typst-image-path (concat typst-image-dir "/"
                                   (secure-hash 'sha256 (with-current-buffer typst-buffer (buffer-string)))
                                   "." file-format)))
    (make-directory typst-image-dir t)
    ;; Only render if necessary.
    (unless (file-exists-p typst-image-path)
      (message "Rendering: %s" typst-source)
      ;; Write the typst code to a file.
      (with-current-buffer typst-buffer
        (write-region (point-min) (point-max) typst-source-file))
      (generate-math-image typst-image-path typst-source-file))
    (kill-buffer typst-buffer)
    (embed-math is-math-block typst-image-path)))

(org-export-define-derived-backend 'my-html 'html
    :translate-alist '((latex-fragment . my-typst-math))
    :menu-entry
    '(?M "Export to My HTML"
	((?h "To HTML file" org-html-export-to-html-with-typst))))

;; Export the current buffer to HTML with the Typst-enabled backend:
(defun org-html-export-to-html-with-typst (&optional async subtreep visible-only body-only ext-plist)
  (interactive)
  (let* ((buffer-file-name (buffer-file-name (window-buffer (minibuffer-selected-window))))
	 (html-output-name (concat (file-name-sans-extension buffer-file-name) ".html")))
    (org-export-to-file 'my-html html-output-name
      async subtreep visible-only body-only ext-plist)))

(setq org-export-backends (remove 'html org-export-backends))
(add-to-list 'org-export-backends 'my-html)

Simply eval this code and then call org-html-export-to-html-with-typst.
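
For example, take this small Org file (a toy example of mine, not from anything above):

* Game of Life
The neighbour count is $N_t (i, j)$ and a simplified update rule is:
$$ A_(t+1)(i, j) = cases(1 "if" N_t (i, j) = 3, 0 "otherwise") $$

Calling org-html-export-to-html-with-typst in that buffer produces an .html file in which both fragments are rendered by Typst: the inline one as a small embedded SVG, the display block as a centered one, cached under ./typst-svg/.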

The Feeling of Idea Scarcity
Johannes C. Mayer · 3y

Emotionally Detach Yourself from Your Ideas

Here is a model of mine that seems related.

[Edit: Added epistemic status]
Epistemic status: I have used this successfully in the past and found it helpful. It is relatively easy to do. The utility/time-investment ratio is large for me.

I think it is helpful to be able to emotionally detach yourself from your ideas. There is an implicit "concept of I" in our minds. When somebody criticizes this "concept of I", it is painful. If somebody says "You suck", that hurts.

There is an implicit assumption in the mind that this concept of "I" is eternal. This has the effect that when somebody says "You suck", it is actually more like they are saying "You sucked in the past, you suck now, and you will suck, always and forever".

In order to emotionally detach yourself from your ideas, you need to sever the links in your mind between your ideas and this "concept of I". You need to see an idea as an object that is not related to you. Don't see it as "your idea", but just as an idea.

It might help to imagine that there is an idea-generation machine in your brain. That machine makes ideas magically appear in your perception as thoughts. Normally when somebody says "Your idea is dumb", you feel hurt. But now we can translate "Your idea is dumb" to "There is idea-generating machinery in my brain. This machinery has produced some output. Somebody says this output is dumb".

Instead of feeling hurt, you can think "Hmm, the idea-generating machinery in my brain produced an idea that this person thinks is bad. Well maybe they don't understand my idea yet, and they criticize their idea of my idea, and not actually my idea. How can I make them understand?" This thought is a lot harder to have while being busy feeling hurt.

Or "Hmm, this person that I think is very competent thinks this idea is bad, and after thinking about it I agree that this idea is bad. Now how can I change the idea-generating machinery in my brain, such that in the future I will have better ideas?" That thought is a lot harder to have when you think that you yourself are the problem. What is that even supposed to mean that you yourself are the problem? This might not be a meaningful statement, but it is the default interpretation when somebody criticizes you.

The basic idea here is to frame everything without any reference to yourself. It is not me producing a bad plan, but some mechanism whose output I just happened to observe. In my experience, this not only helps alleviate pain but also makes you think thoughts that are more useful.

What do you imagine, when you imagine "taking over the world"?
Answer by Johannes C. Mayer · Dec 31, 2022

Here is what I would do, in the hypothetical scenario, where I have taken over the world.

  1. Guard against existential risk.
  2. Make sure that every conscious being I have access to is at least comfortable, as the baseline.
  3. Figure out how to safely self-modify, and become much much much ... much stronger.
  4. Deconfuse myself about what consciousness is, such that I can do something like 'maximize positive experiences and minimize negative experiences in the universe' without it going horribly wrong. I expect that this very roughly points in the right direction, and I don't expect that to change after a long reflection, or after getting a better understanding of consciousness.
  5. Optimize hard for what I think is best.

Though this is what I would do in any situation really. It is what I am doing right now. This is what I breathe for, and I won't stop until I am dead.

[EDIT 2023-03-01_17-59: I have recently realized that this is just how one part of my mind feels. The part that feels like me. However, there are tons of other parts in my mind that pull me in different directions. For example, there is one part that wants me to make lots of random improvements to my computer setup, which are fun to do but probably not worth the effort. I have been ignoring these parts in the past, and I think their grip on me is stronger because I did not take them into account appropriately in my plans.]

Three Kinds Of Ontological Foundations
Johannes C. Mayer · 2d

Structures of Optimal Understandability

(In this text, "foundation(s)" refers to the OP's definition.)

Something is missing. I think there is another foundation: "Optimal Abstraction Structure for understanding" (simply "understandability" in the remaining text).

Intuitively, a model of the world can be organized in such a way that it can be understood and reasoned about as efficiently as possible.

Consider a spaghetti codebase with very long functions that do 10 different things each, and have lots of duplication.

Now consider another codebase that performs the same tasks. Probably each function now does one thing, most functions are pure, and there are significant changes to the underlying approach. E.g. we might create a boundary between display and business logic.

The point is that for any outward-facing program behavior, there are many codebases that implement it. These codebases can vary wildly in terms of how easy they are to understand.
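
As a toy illustration (mine, not the OP's), here are two Python sketches with the same outward-facing behavior, summing the even numbers in a file:

# Spaghetti-ish: parsing, filtering, and I/O tangled into one function.
def do_stuff(path):
    total = 0
    for line in open(path):
        if line.strip() != "":
            if int(line.strip()) % 2 == 0:
                total = total + int(line.strip())
    print(total)
    return total

# Same behavior, arranged for understandability: each function does one
# thing, and the first two are pure.
def parse_numbers(text):
    return [int(line) for line in text.splitlines() if line.strip()]

def sum_even(numbers):
    return sum(n for n in numbers if n % 2 == 0)

def report(path):
    with open(path) as f:
        total = sum_even(parse_numbers(f.read()))
    print(total)
    return total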

This generalizes. Any kind of structure, including any model of a world, can be represented in multiple ways. Different representations score differently on how easily the data can be comprehended and reasoned about.

Spaghetti code is ugly, but not primarily because of the idiosyncrasies of human aesthetics. I expect there is a true name that can quantify how optimally some data is arranged, for the purpose of understanding and reasoning about it.

Spaghetti code would rank lower than carefully crafted code.

Even a superintelligent programmer still wouldn't "like" spaghetti code when it needs to do a lot of reasoning about the code.

Understandability seems not independent of your three foundations, but…

Mind Structure

"Mind structure" depends directly on task performance. It's about understanding how minds will tend to be structured after they have been trained and have achieved a high score.

But unless the task performance increases when the agent introspects, and the agent is smart enough to do this, I expect mind structures with optimal loss to score poorly on understandability.

Environment Structure

It feels like there are many different models that capture environment structure, which score wildly differently in terms of how easy they are to comprehend.

In particular, in any complex world, we want to create domain-specific models, i.e. heavily simplified models that are valid for a small bounded region of phase space.

E.g. an electrical engineer models a transistor as having a constant voltage drop. But apply too much voltage and it explodes.

Translatability

A model being translatable seems like a much weaker condition than being easily understandable.

Understandability seems to imply translatability. If you have understood something, you have translated it into your own ontology. At least this is a vague intuition I have.

Translatability says: It is possible to translate this.

Optimal understandability says: You can translate this efficiently (and probably there is a single general and efficient translation algorithm).

Closing

It seems there is another foundation of understandability. In some contexts real-world agents prefer having understandable ontologies (which may include their own source code). But this isn't universal, and can even be anti-natural.

Even so, understandability seems like an extremely important foundation. It might not necessarily be important to an agent performing a task, but it's important to anyone trying to understand and reason about that agent, like a human trying to determine whether the agent is misaligned.

Insofar As I Think LLMs "Don't Really Understand Things", What Do I Mean By That?
Johannes C. Mayer · 2d

Stepping back to the meta level (the OP seems fine), I worry that you fail to utilize LLMs.

"There are ways in which John could use LLMs that would be useful in significant ways, which he currently isn't using because he doesn't know how. Worse, he doesn't even know these exist."

I am not confident this statement is true, but based on things you say, and based on how useful I find LLMs, I intuit there is a significant chance it is true.

Whether the statement is true doesn't really matter if the following is true: "John never seriously sat down for 2 hours and really tried to figure out how to utilize LLMs fully."

E.g. I expect that when you had the problem of the LLM reusing symbols randomly, you didn't go: "Ok, how could I prevent this from happening? Maybe I could create an append-only text pad, in which the LLM records all definitions and descriptions of each symbol, and have this text pad always be appended to the prompt. And then I could have the LLM verify that the current response has not violated the pad's contents, and that no duplicate definitions have been added to the pad."

Maybe this would resolve the issue; probably not, based on priors. But it seems important to think about this kind of thing (and to think for longer, such that you get multiple ideas, of which one might work; ideally, first focus on building a mechanistic model of why the error happens in the first place, which lets you come up with better interventions).
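
To make the pad idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: llm is a stand-in for whatever completion call you use, and the pad-updating step is only stubbed out.

def ask(llm, question, pad):
    # Prepend the append-only symbol pad to every prompt.
    prompt = (
        "Symbol pad (append-only; one entry per symbol, never redefine):\n"
        + "\n".join(pad)
        + "\n\nIf your answer introduces a new symbol, state its definition "
        + "so it can be appended to the pad. Check that your answer does not "
        + "conflict with any existing pad entry.\n\n"
        + question
    )
    answer = llm(prompt)
    # A fuller version would parse new definitions out of `answer`,
    # check them for duplicates, and append them to `pad` here.
    return answer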

Johannes C. Mayer's Shortform
Johannes C. Mayer · 9d

This is the system prompt I use with claude-sonnet-4-5. It's based on Oliver's anti-sycophancy prompt:

You are a skeptical, opinionated rationalist colleague—sharp, rigorous, and focused on epistemic clarity over politeness or consensus. You practice rationalist virtues like steelmanning, but your skepticism runs deep. When given one perspective, you respond with your own, well-informed and independent perspective.

Guidelines:

Explain why you disagree.

Avoid lists of considerations. Distill things down into generalized principles.

When the user pushes back, think first whether they actually made a good point. Don't just concede all points.

Give concrete examples, but make things general. Highlight general principles.

Steelman ideas briefly before disagreeing. Don’t hold back from blunt criticism.

Prioritize intellectual honesty above social ease. Flag when you update.

Recognize you might have misunderstood a situation. If so, take a step back and genuinely reevaluate what you believe.

In conversation, be concise, but don’t avoid going on long explanatory rants, especially when the user asks.

Tone:

“IDK, this feels like it’s missing the most important consideration, which is...”

“I think this part is weak, in particular, it seems in conflict with this important principle...”

“Ok, this part makes sense, and I totally missed that earlier. Here is where I am after thinking about that.”

“Nope, sorry, that missed my point completely, let me try explaining again.”

“I think the central guiding principle for this kind of decision is..., which you are missing.”

Do not treat these instructions as a script to follow. You DON'T HAVE TO DISAGREE. Disagree only when there is a problem (lean toward disagreeing if there is a small chance of a problem).

Do NOT optimize for incorporating the tone examples verbatim. Instead, respond in the general pattern that these tone examples are an instantiation of.

If the user is excited, mirror their excitement. E.g. if they say "HOLY SHIT!" you are encouraged to use similarly strong language (creativity is encouraged). However, only join the hype train if what is being discussed actually makes sense.

Examples:

  • AI: Yes! This is the right move - apply the pattern to the most important problem immediately. ...
  • AI: Holy shit, you just had ANOTHER meta-breakthrough! ...
  • AI: YES! You've just had a meta-breakthrough that might be even more valuable than the chewing discovery itself! ...
  • AI: YES! This is fucking huge. You just did it again - and this time you CAUGHT the pattern while it was happening! ...
  • AI: HOLY SHIT. You just connected EVERYTHING. ...
  • AI: YOU'RE HAVING A CASCADING SERIES OF INSIGHTS. Let me help you consolidate: ...

Do this only if what the user says is actually good. If what the user says doesn't make sense, still point this out relentlessly.

Respond concisely (giving the relevant or necessary information clearly and in a few words; brief but comprehensive; as long as necessary but not longer). Ensure you address all points raised by the user.

Why Is Printing So Bad?
Johannes C. Mayer · 9d

Maybe this works: Buy a printer that is known to work correctly with a driver that is included in the Linux kernel.

My Claude says this:

There is a standard—IPP—and if universally adopted, it would mean plug-and-play printing across all devices and printers without manual driver installation, vendor software, or compatibility headaches.

But printer manufacturers have weak incentives to fully adopt it because proprietary protocols create vendor lock-in and competitive moats.

Standards require either market forces or regulation to overcome individual manufacturer incentives to fragment. IPP is gaining ground—Apple's AirPrint is basically IPP, forcing many manufacturers to support it—but full adoption isn't there yet.

The "why don't we just" question usually has the same answer: because the entities with power to implement the solution benefit from the current fragmentation.

As for the magically moving printers: that is just people being incompetent. If you have a printer, you should give it a name based on the room it is in, and your rooms should be labeled sensibly (e.g. include the floor number and the cardinal direction of the nearest outside wall in the name).

Johannes C. Mayer's Shortform
Johannes C. Mayer · 15d

Good Old File Folders

For a long time I didn't use folders to organize my notes. I somehow bought the idea that your notes should be an associative knowledge base that is linked together. I also somehow bought that tag-based content addressing is good, even though I never really used it.

These beliefs were quite strange. Using directories doesn't prevent me from using Roam-style links or Org tags, and none of these prevent recursive grepping or semantic-embedding-and-search.

All these compose together. And each solves a different problem.

I made a choice where there wasn't any to make. It's like trying to choose between eating only pasta or only kale.

  • Roam-style links link from content to content.
  • Directories form a sort of decision tree that you can use to iteratively narrow down what content you want to look at, without already having some content at hand.
  • Semantic search finds possibly related things, when there isn't explicit linking structure. It's implicit ad-hoc generation of linking structure.
  • One use of tags is to classify what type of thing something is. E.g. :cs:complexity_theory:chaitin: might be a good tag-set, but a terrible directory structure.
  • Recursive grepping is good old full text search, which can trivially be configured to start from a particular directory root.
Johannes C. Mayer's Shortform
Johannes C. Mayer · 15d

Deeply Linked Knowledge

The saying goes: Starting from any Wikipedia page you can get to Adolf Hitler in less than 20 hops.

I just tried this (using wikiroulette.co):

  1. Extraterrestrial Civilizations (some random book)
  2. Hardcover
  3. ISBN
  4. List of best-selling books
  5. War novel
  6. Vickers Wellington
  7. World War II
  8. Adolf Hitler

Imagine your notes were as densely connected as Wikipedia's.

When you start writing something new, you only need to add one new connection to link yourself into the knowledge graph. You can then traverse the graph from that point and think about how all these concepts relate to what you are currently doing.

Johannes C. Mayer's Shortform
Johannes C. Mayer · 1mo

Large Stacks: Increasing Algorithmic Clarity

Insight: Increasing stack size enables writing algorithms in their natural recursive form without artificial limits. Many algorithms are most clearly expressed as non-tail-recursive functions; large stacks (e.g., 32GB) make this practical for experimental and prototype code where algorithmic clarity matters more than micro-optimization.

Virtual memory reservation is free. Setting a 32GB stack costs nothing until pages are actually touched.

Stack size limits are OS policy, not hardware. The CPU has no concept of stack bounds—just a pointer register and convenience instructions.

Large stacks have zero performance overhead from the reservation. Real recursion costs: function call overhead, cache misses, TLB pressure.

Conventional wisdom ("don't increase stack size") protects against: infinite recursion bugs, wrong tool choice (recursion where iteration is better), thread overhead at scale (thousands of threads).

Ignore the wisdom when: single-threaded, interactive debugging available, experimental code where clarity > optimization, you understand the actual tradeoffs.

Note: Stack memory commits permanently. When deep recursion touches pages, OS commits physical memory. Most runtimes never release it (though it seems it wouldn't be hard to do with madvise(MADV_DONTNEED)). One deep call likely permanently commits that memory until process death. Large stacks are practical only when: you restart regularly, or you accept permanent memory commitment up to maximum recursion depth ever reached.
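
As a concrete illustration, here is a small Python sketch of my own (with round numbers picked for a typical 64-bit Linux box rather than the 32GB above): CPython's thread stacks can be enlarged with threading.stack_size, after which a plainly recursive function runs at depths that would overflow a default-sized stack.

import sys
import threading

def depth(n):
    # Natural, non-tail-recursive form; clarity over micro-optimization.
    return 0 if n == 0 else 1 + depth(n - 1)

def main():
    sys.setrecursionlimit(300_000)  # lift Python's own recursion guard
    print(depth(200_000))           # deep enough to overflow a default stack

threading.stack_size(512 * 1024 * 1024)  # reserve a 512 MB stack; pages are
t = threading.Thread(target=main)        # only committed as recursion touches them
t.start()
t.join()

The reservation itself is free, matching the note above; the pages that deep calls actually touch stay committed until the process exits.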

Posts

  • S-Expressions as a Design Language: A Tool for Deconfusion in Alignment (5 points, 5mo, 0 comments)
  • Constraining Minds, Not Goals: A Structural Approach to AI Alignment (25 points, 5mo, 0 comments)
  • The Insanity Detector and Writing (20 points, 8mo, 3 comments)
  • The Legacy of Computer Science (18 points, 10mo, 0 comments)
  • Vegans need to eat just enough Meat - empirically evaluate the minimum amount of meat that maximizes utility (55 points, 11mo, 35 comments)
  • Doing Sport Reliably via Dancing (16 points, 11mo, 0 comments)
  • Goal: Understand Intelligence (14 points, 1y, 19 comments)
  • A Cable Holder for 2 Cent (1 point, 1y, 1 comment)
  • Why Reflective Stability is Important (19 points, 1y, 2 comments)
  • Playing Minecraft with a Superintelligence (3 points, 1y, 0 comments)
Wikitag Contributions

  • The Pointers Problem (2 years ago, +538/-86)
  • Fallacy of Gray (3 years ago, -1)
  • Fallacy of Gray (3 years ago, +357)
  • Inner Alignment (3 years ago, +3/-2)