
Czynski

Jisk, formerly Jacob. (And when Jacobs are locally scarce, still Jacob.)

LW has gone downhill a lot from its early days and I disapprove of most of the moderation choices but I'm still, sometimes, here.

It should be possible to easily find me from the username I use here, though not vice versa, for interview reasons.

Comments

Czynski's Shortform
Czynski · 7mo

Editing Essays into Solstice Speeches: Standing offer: if you have a speech to give at Solstice or another rationalist event, message me and I'll look at your script and/or video call you to critique your performance and help.

Generalized Hangriness: A Standard Rationalist Stance Toward Emotions
Czynski · 2d

The claim is that the stance in question is fairly canonical or standard for rationalists-as-a-group, modulo disclaimers about rationalists never agreeing on anything.

I don't think this claim is correct. I have not noticed this stance being particularly common among rationalists relative to other similar populations, nor being treated as normative.

I think it's probably unusually common among postrationalists, but that is a very different culture, grounded primarily in not sharing any of the main assumptions common to rationalists.

Czynski's Shortform
Czynski · 1mo

Said had a habit of responding to posts that were reasonably well-reasoned but sparsely justified with one-word comments like "Sources?", or very short comments along the lines of "X point is insufficiently justified.", without any throat-clearing or praise, which is a good example of it done well.

In general, "Your premises are treated as obvious when they are actually bizarre, and your argument is therefore irrelevant." is maybe the central example of when this is both highly confrontational and highly necessary.

Czynski's Shortform
Czynski · 1mo

I can't make out anything but word salad from this comment.

Czynski's Shortform
Czynski · 1mo

[Originally regarding Said Achmiz and myself ca. 2023]

I feel like you both favor a more aggressive flavor of discourse than I tend to like.

The aggressiveness is, I think, a symptom of the underlying trait, which is being disagreeable about taking people's frames as valid.

Most people, when given a weird framing of a situation which feels vaguely off but comes from someone who seems well-intentioned and cooperative, will go along with it and argue within that frame rather than contest it.

But this is very exploitable, and you don't have to actually be consciously trying to exploit it to do so. And so people who do this a lot (e.g. Duncan, but also numerous other people I respect more, including Eliezer except when he's explicitly being careful about it, which he usually is) can warp the whole field of discourse around them.

Obviously most people who are disagreeable about this are disagreeable in general, and therefore usually aggressive about arguments and discourse. This isn't necessary in principle, but if anyone knows how to teach it, I've never met them.

You Are Not Measuring What You Think You Are Measuring
Czynski · 3mo

Doesn't this imply that having a theory of the domain you're experimenting in is of low to no value? I find that hard to believe, and therefore doubt your assumptions are correct and applicable.

Czynski's Shortform
Czynski · 3mo

https://philpapers.org/rec/ARVIAA

This paper uses famous problems from philosophy of science and philosophical psychology—underdetermination of theory by evidence, Nelson Goodman’s new riddle of induction, theory-ladenness of observation, and “Kripkenstein’s” rule-following paradox—to show that it is empirically impossible to reliably interpret which functions a large language model (LLM) AI has learned, and thus, that reliably aligning LLM behavior with human values is provably impossible.

So, this seems provisionally to be bullshit, because it doesn't admit of probabilistic thinking or simplicity priors. But I'm not totally sure it's worthless. Anyone read it in detail?

A concise version of “Twelve Virtues of Rationality”, with Anki deck
Czynski · 3mo

The older deck sucks. It contains the entirety of the essay without regard to what's important. This deck is still messy - including too much focus on the ordering and numbering of the virtues - but it's significantly superior and captures the concise heart of the matter. If you're trying to create a memory aid for the Twelve Virtues, this deck is absolutely an improvement.

Meetups Notes (Q1 2025)
Czynski · 3mo

If a lot of people turn out for the very-low-context NY meetup, it might be worth running at least one very-low-context meetup per quarter, to see if that does more to bring people in (or back)?

Tormenting Gemini 2.5 with the [[[]]][][[]] Puzzle
Czynski · 3mo

As others have found, apparently "think like a mathematician" is enough to get it to work.

Posts
Tormenting Gemini 2.5 with the [[[]]][][[]] Puzzle · 48 karma · 3mo · 36 comments
How to Edit an Essay into a Solstice Speech? · 5 karma · 7mo · 1 comment
Index of rationalist groups in the Bay Area June 2025 · 39 karma · 1y · 14 comments
Meetup In a Box: Year In Review · 26 karma · 1y · 1 comment
Czynski's Shortform · 2 karma · 2y · 26 comments
The North Oakland LessWrong Meetup · 4 karma · 3y · 2 comments
Weighted Voting Delenda Est · 3 karma · 4y · 19 comments
Dremeling · 65 karma · 5y · 8 comments
"God Rewards Fools" · 17 karma · 5y · 6 comments