Randomini

Comments

If there were an interactive software teaching Yudkowskian rationality, what concepts would you want to see it teach?

Training for effective discourse. If arguments for a claim are presented in philosophical standard form (as is done in academic spheres), it becomes much easier to diagnose and name incoherence in an argument - and, more importantly, to do so in a way we could programmatically evaluate. Using this structure gives you rationality testing in practical situations: there could be exercises where you're presented with a standard-form argument with flaws in one or two places, and you need to both identify them and label/name the issues; then perhaps ones where you're given a similar argument, but in natural language. Belief-forming is at a higher abstraction layer than argument comprehension and argument-making, so there's some implicit foundational content that such software should establish first.
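(Purely as a sketch of what "programmatically evaluable" might look like - the data model, names, and grading rule below are placeholders I've made up, not an existing tool:)

```python
from dataclasses import dataclass, field

# Hypothetical data model for a standard-form argument exercise: numbered premises,
# a conclusion, and the planted flaws the learner is expected to find and name.

@dataclass
class Premise:
    number: int
    text: str

@dataclass
class Flaw:
    premise_number: int   # which line the flaw lives on (0 = the inference itself)
    label: str            # e.g. "equivocation", "affirming the consequent"

@dataclass
class Exercise:
    premises: list[Premise]
    conclusion: str
    flaws: list[Flaw] = field(default_factory=list)

    def grade(self, answers: list[Flaw]) -> float:
        """Fraction of planted flaws the learner both located and named correctly."""
        found = sum(
            any(a.premise_number == f.premise_number and a.label == f.label for a in answers)
            for f in self.flaws
        )
        return found / len(self.flaws) if self.flaws else 1.0

exercise = Exercise(
    premises=[
        Premise(1, "All rationalists update on evidence."),
        Premise(2, "Alice updates on evidence."),
    ],
    conclusion="Therefore, Alice is a rationalist.",
    flaws=[Flaw(0, "affirming the consequent")],
)
print(exercise.grade([Flaw(0, "affirming the consequent")]))  # 1.0
```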

And then, of course, you'd have Competitive Ranked Debate, where you form argumentative trees to tear each other's arguments apart, labelling particular nodes in standard form as points of conflict and branching out into fractal disagreements... though probably not as a minimum viable product.

Zibbaldone With It All

This would be really interesting mid-composition; if a parallel is found between what you're writing now and what you wrote or captured in the distant past and may have forgotten, Memex Clippy pops up and reminds you. So it's the same kind of pseudo-novelty as you describe in the link, except autonomous and continuous rather than a separate dedicated review time. You could still dismiss it, keeping a human in the loop with Anki-style delays - "oh god, I thought that in the past? please don't remind me of that for another decade, when it becomes nostalgic instead of cringeworthy."
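A minimal sketch of that surfacing-plus-snooze loop, assuming you already have some text-embedding function (everything here - names, thresholds, the ten-year snooze - is made up for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical Memex Clippy loop: compare the paragraph being written against old
# notes and surface anything similar, unless it has been snoozed Anki-style.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

snoozed_until: dict[str, datetime] = {}   # note_id -> don't resurface before this time

def parallels(current_vec, archive, threshold=0.8):
    """archive: {note_id: (embedding, text)} - the embedding function is assumed, not shown."""
    now = datetime.now()
    for note_id, (vec, text) in archive.items():
        if snoozed_until.get(note_id, now) > now:
            continue
        if cosine(current_vec, vec) >= threshold:
            yield note_id, text

def snooze(note_id, years=10):
    """'Please don't remind me of that for another decade.'"""
    snoozed_until[note_id] = datetime.now() + timedelta(days=365 * years)
```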

Zibbaldone With It All

That'd be an interesting structure for sure. Some kind of spaced repetition, presenting you with two (or three? or more?) prior thoughts to read simultaneously, to reinforce not just the information, but the relationship between the different ideas; not just in isolation, but reinforcing the network itself... maybe with some kind of highlight markup to indicate where the parallel is strongest.

With regards to the Zettelkasten containing too many cards to keep up with, I think card agglomeration above a certain threshold might be useful. Rather than hyperlinking between 500 cards with one sentence each on them, it may be preferable to link to particular sentences or sections within 50 different cards. The hyper-deconstruction of the original index card system was, I think, more a limitation of index card size and of how you might best utilize a physical system. Hypertext and NLP could identify links, and keeping a human in the loop ensures they're the kind of parallels you actually want to be forming... to some degree. Being presented with particular links might still have some subtle undesirable biases. But the prospect of "combine these cards?" might make things more manageable.
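One hedged sketch of what the "combine these cards?" prompt could key off - here just card length and shared neighbours in the link graph, though any NLP similarity measure could slot in; all names and thresholds below are invented:

```python
# Hypothetical merge suggester: if two short cards link to each other and share
# enough neighbours, propose consolidating them into sections of one larger card.

def suggest_merges(cards, links, max_length=280, min_shared_neighbours=3):
    """cards: {card_id: text}; links: iterable of (card_id, card_id) pairs."""
    neighbours = {cid: set() for cid in cards}
    for a, b in links:
        neighbours[a].add(b)
        neighbours[b].add(a)

    for a, b in links:
        both_short = len(cards[a]) <= max_length and len(cards[b]) <= max_length
        shared = len(neighbours[a] & neighbours[b])
        if both_short and shared >= min_shared_neighbours:
            yield (a, b, shared)   # present to the human: "combine these cards?"
```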

I think a manual consolidation step would at least prompt for better structure for future consumption. If you know the order you'll always want to view the notes in, turn them into the one note with headings. That could keep the knowledge bank a bit more manageable and navigable. I don't think the current graph view in systems like Roam or Obsidian is enough - you need a text preview of those notes. Maybe, for each new note, check that the graph structure remains planar, to avoid messy crossovers in visualisations of it. Or maybe that's a terrible idea, I have no idea! There's lots of low-hanging fruit in the space, and whoever makes the biggest fruit basket's going to win here.
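For the planarity idea specifically, something like networkx can at least answer the yes/no question cheaply (whether planarity is even the right constraint is, as I said, anyone's guess) - a rough sketch:

```python
import networkx as nx

# Check whether adding one more link keeps the note graph planar,
# i.e. still drawable without edge crossings.

def link_keeps_graph_planar(existing_links, new_link):
    graph = nx.Graph(existing_links)
    graph.add_edge(*new_link)
    is_planar, _ = nx.check_planarity(graph)
    return is_planar

links = [("zettel-1", "zettel-2"), ("zettel-2", "zettel-3")]
print(link_keeps_graph_planar(links, ("zettel-1", "zettel-3")))  # True (a triangle is planar)
```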

Zibbaldone With It All

(and yes, I know it's only the one b in Zibaldone, but doubling pushes the parallel narrative and you probably didn't notice until this comment - if I could get away with Zibbalgasdon, I would have)

Maybe Lying Can't Exist?!

This just seems like a Wittgensteinian Language Game crossed with the Symbol Grounding Problem. It's not so much that "lying can't exist" as "it is impossible to distinguish intentional deception from using different symbols". A person can confidently and truthfully state "two plus two equals duck" - all we need is for "duck" to be their symbol for "four". They're not "lying", or even "wrong" or "incoherent", their symbols are just different. Those symbols are incompatible with our own, but we don't "really" disagree. A different person could, alternatively, say "two plus two equals duck" and be intentionally deceiving - but there's nothing that can be observed about the situation to prove it, just by looking at a transcription of the text. There's also no way, exclusively through textual conversation, to "prove" that another person is using their symbols in the same way as you! Even kiki-bouba effects aren't universal - symbols can be arbitrary, once pulled out of their original context. If everyone's playing their own Language Game, shared maps are illusory - How Can Maps Be Real If Our Words Aren't Real?
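To make the "nothing observable distinguishes them" point concrete, here's a toy of my own (not from the post): the same transcript under two different private symbol mappings.

```python
# The same observable transcript is consistent with multiple private symbol mappings,
# so honesty vs. deception can't be read off the transcript alone.

transcript = [("speaker", "two plus two equals duck")]

# Mapping A: "duck" privately means 4 -> the utterance is sincere and true.
mapping_a = {"duck": 4}
# Mapping B: "duck" privately means 5, asserted to mislead -> same transcript, now a lie.
mapping_b = {"duck": 5}

def utterance_is_true(mapping):
    return 2 + 2 == mapping["duck"]

print(utterance_is_true(mapping_a), utterance_is_true(mapping_b))  # True False
# Nothing in `transcript` distinguishes the two cases.
```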

P. rey consistently and unambiguously uses the symbol to mean "mating time". P. redator consistently and unambiguously uses the symbol to mean "I would like to eat you, please". Neither, in this language game, is lying to the other, or violating their own norms - but the same behaviour as above happens. Lying is just dependent on reference frame; a hole in one map doesn't open a hole in another. In any given example of "deception", we can (however artificially) construct a language game where everyone acted honestly. Lying isn't a part of what we can check on the maps here - it's an element of the territory, insofar as we could only tell if someone was "really lying" if we could make direct neurological observations. Maybe not even then, if that understanding is some privileged qualia. The only time you can observe a lie with certainty is if you're the liar, as the only beliefs you can directly observe with confidence are your own.

The territory only contains signals, consequences, and benefits. "Lying" postulates an intention, which is unverifiable from the outside. That doesn't make "lying" meaningless, though - we can absolutely lie, and be certain that we lied - so it has meaning, but it's dependent on reference frame. When two observers in different reference frames watch a relativistic object, they can both provide truthful yet contradictory claims about its speed. When each claims the other "lied", insofar as they each have their own evidence and certainty, that's a consequence of reference frame. Lying is centrifugal force, signals are centripetal. Both can be treated as real when useful for analysis. Hooray for compatibilism!

Jam is obsolete

Electricity use isn't the only ongoing factor, though: freezers are somewhat bulky appliances, so in an environment where rent is high there's an additional ongoing cost of the floorspace the appliance physically occupies. If your freezer has a floor footprint of about a square metre, that can come to $60 or more just to have it in your space - an order of magnitude more than the electricity cost. So there's a much larger ongoing cost that will dominate that effect.
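Back-of-the-envelope, with all figures assumed rather than sourced:

```python
# All numbers here are assumptions for illustration, not sourced figures.
freezer_footprint_m2 = 1.0       # assumed floor footprint of the appliance
rent_per_m2_per_month = 60.0     # assumed rent in an expensive city, $/m^2/month
electricity_per_month = 6.0      # assumed running cost, $/month

floorspace_cost = freezer_footprint_m2 * rent_per_m2_per_month
print(f"floorspace ~${floorspace_cost:.0f}/month vs electricity ~${electricity_per_month:.0f}/month")
# Under these assumptions, the floorspace cost is roughly 10x the electricity cost.
```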

$1000 bounty for OpenAI to show whether GPT3 was "deliberately" pretending to be stupider than it is

Hell, even the choice of "John" rather than, say, "Einstein" (or, if programming or logic skill is being modelled as separate from "intelligence" in common parlance, "Turing" or "Knuth" or something) could increase the probability here. To be even more explicit, calling him "The Genius" or "The Fool" would probably change it too.

The silly part of this is that, if true, it means GPT-3 is modelling average people named John as being bad at simple logic tasks.

Prediction = Compression [Transcript]

"Yeah, so my dumb argumentive comment is, prediction does not equal compression. Sequential prediction equals compression. But non-sequential prediction is also important and does not equal compression... And by non-sequential prediction, I mean you have a sequence of bits in the information theory model, but if instead you have this broad array of things that you could think about, and you're not sure when any one of them will become observed, [then] you want all of your beliefs to be good, but you don't have any next bit... You can't express the goodness just in terms of this sequential goodness."

I think this argument is conflating "data which is temporally linear" with "algorithm which is applied sequentially". "Prediction" isn't a matter of past data vs. future data; it's a matter of information the algorithm has seen so far vs. information the algorithm hasn't yet processed.
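A toy of my own (not from the transcript) that gestures at this: with a simple exchangeable predictor like Laplace's rule of succession, the total code length depends only on what has been processed, not on the temporal order it's processed in.

```python
import math

# Count-based predictor (Laplace's rule of succession). The total ideal code length
# -sum(log2 p) depends only on which bits the model has processed so far, not on
# the order - "prediction" here is seen-so-far vs. not-yet-processed, not past vs. future.

def total_code_length(bits):
    ones = zeros = 0
    total = 0.0
    for b in bits:
        p_one = (ones + 1) / (ones + zeros + 2)   # Laplace estimator from counts seen so far
        p = p_one if b == 1 else 1 - p_one
        total += -math.log2(p)                     # ideal arithmetic-coding cost of this bit
        ones += b
        zeros += 1 - b
    return total

data = [1, 1, 0, 1, 0, 1, 1, 1]
print(total_code_length(data))        # processed "chronologically"
print(total_code_length(data[::-1]))  # same total in any processing order
```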

Philosophy in the Darkest Timeline: Basics of the Evolution of Meaning

After spending hours at your computer console struggling with the symbol grounding problem, you realise the piece of paper had icicles on it and that the codec contained information more important than the signal.

What explanatory power does Kahneman's System 2 possess?

For me there are two key components: the transition of a task from an S2 task to an S1 task through repetition and hypothesizing/internalising heuristics, and the use of S1 subtasks to solve more difficult S2 tasks. As an example, consider how mathematical operations move from being S2 to S1 in human learning processes.

Consider a child that can count up and down on the integers - i.e. given an integer, they can apply the increment function and get the next integer, or the decrement function to get the previous one. This is an S1 task, where the result of the operation is taken as "just-so". At that moment addition is still an S2 task for them, and one they solve through repeated application of S1 subtasks: one approach to solve A+B is to repeatedly increment A and decrement B until B=0, at which point the incremented A is the answer.

With enough practice, the child learns the basic rules of addition, and it becomes so deeply ingrained that addition is now an S1 task. Multiplication, however, is still S2 to them, but might be solved like this: to multiply A and B, start with C=0, then add B to C and decrement A, repeating until A=0, at which point C holds the product. Through enough repetition they internalise this algorithm (or learn many examples of it by rote), and multiplication might be an S1 task now.

By now you can hopefully see where I'm going - exponentiation is the analogous S2 task on the next level up, and there's an algorithm a learner might perform to decompose it into a sequence of S1 tasks. (Of course, outside the realm of mathematics S2 tasks may be much fuzzier, e.g. puzzling over ethical dilemmas.)
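Here's that ladder rendered literally - obviously a sketch of the learning story, not how practised adults actually compute:

```python
# Each "S2" operation is built only out of the "S1" operation one rung below it:
# addition from increment/decrement, multiplication from addition,
# exponentiation from multiplication.

def increment(n): return n + 1
def decrement(n): return n - 1

def add(a, b):
    # Repeatedly increment a and decrement b until b hits zero.
    while b > 0:
        a, b = increment(a), decrement(b)
    return a

def multiply(a, b):
    # Start with c = 0; add b to c once for every unit of a.
    c = 0
    while a > 0:
        c, a = add(c, b), decrement(a)
    return c

def power(a, b):
    # The next rung up: multiply by a once for every unit of b.
    c = 1
    while b > 0:
        c, b = multiply(c, a), decrement(b)
    return c

print(add(3, 4), multiply(3, 4), power(3, 4))  # 7 12 81
```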

The interesting thing about this (to me) is that the transition from S2 task to S1 task is the critical time where systemic errors and biases may be introduced. I see this as analogous to how a neural net can underfit/overfit training data, depending on the heuristics that are learned. In this analogy, training a neural network transitions a difficult S2 task into an S1, black-box-esque input/output mapper. This can provide rapid "intuitive" results for us in the same way S1 human thinking does - but is similarly error-prone.
