Czynski

Jacob, or "Jisk" when there are too many Jacobs about and I need a nickname. Host of the North Oakland LW Meetups, every Tuesday.

Honestly pretty disappointed with the state of the modern LW site, but it's marginally better than other non-blogs so I'm still here.

It should be possible to easily find me from the username I use here, though not vice versa, for interview reasons.


Comments

Anonymous feedback form for those who can't attend or have comments they don't want to give in person: https://forms.gle/7ncZ5GTKRbP15VVe7

Better to make a PDF that you can share by email/Discord.

It was the former. Was, unfortunately.

(I don't check LW very often, sorry. Email is more reliable.)

If space weren’t a priori then we couldn’t become fully confident of geometrical laws such as “a square turned 90 degrees about its center is the same shape”, we’d have to learn these laws from experience, running into Hume’s problem of induction.

This is false. Hume's problem of induction can be avoided by the very simple expedient of not requiring "fully confident" to be perfect, probability-1 confidence. Learning laws from experience is entirely sufficient for 99.99% confidence, and probably still good up to ten or even twenty nines.
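A minimal sketch of that arithmetic, assuming the simplest possible model of induction (a uniform Laplace prior over an unknown binary regularity, which is my choice of illustration, not anything from the original argument): confidence climbs toward 1 with each confirming observation but never reaches it.

```python
# Minimal sketch, assuming a uniform (Laplace) prior over the unknown chance
# that a regularity holds on any given trial. After n consecutive
# confirmations the posterior predictive probability is (n + 1) / (n + 2)
# (Laplace's rule of succession): it approaches 1 but never reaches it.
from fractions import Fraction

def confidence_after(n_confirmations: int) -> Fraction:
    """Probability the regularity holds next time, given n confirmations."""
    return Fraction(n_confirmations + 1, n_confirmations + 2)

for n in (10, 10_000, 10**8):
    print(f"{n:>11} confirmations -> {float(confidence_after(n)):.10f}")
# ~10^4 confirmations already gets you to 99.99%; more nines just take more data.
```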

This is a logical fallacy, which has been demonstrated as such in a very precise mirror - the Chomskyan view of language syntax, which has been experimentally disproven. To summarize the linguistic debate: Noam Chomsky created the field of syntax and maintained, on the same grounds of the impossibility of induction, that we must have an a priori internal syntax model we are born with, within which children’s language learning is assigning values to a finite set of free parameters, such as “Subject-Verb-Object” sentence structure (SVO) vs. SOV/VSO/VOS/OSV/OVS. He declared that the program of syntax was to discover and understand the set of free parameters, and the underlying framework they were modifying. This was a compelling argument which produced beautiful theories, but it was built on faulty assumptions: perfectly precise language learning is impossible, but it is also unnecessary. (Additionally, some languages, notably Pirahã, violate the core assumptions the accumulated theory had concluded were universals embedded in the language submodule/framework.)

The theory which superseded it (and is now ‘advancing one funeral at a time’) is an approximate theory: it is impossible to learn any syntax precisely from finite evidence, but arbitrarily good approximation is possible. Every English speaker has an ‘idiolect’, the hyper-specific dialect that is how they, and they only, speak and understand English, and this differs slightly, both in vocabulary and syntax, from everyone else’s. No two humans speak the same language, nor have they ever, but this is fine because the languages we do speak are close enough to be mutually intelligible. (And of course now GPT-3 has its own idiolect, though its concept of vocabulary is severely lacking in terms of the meanings of words.)

The analogy is hopefully clear: we have no need for an innate assumption of space. My concept of space and yours are not going to match, but they will be close enough that we can communicate intelligibly and reason from the same premises to the same conclusions. It is of course possible that we have some built-in assumptions, but it is not necessary, and we should treat it as an Ockham violation unless we find that there are notions of space we cannot learn even when they manifestly are better at describing our reality. Experimentally, I would say we have very strong evidence that space is not innate: watching babies learn how to interpret their sensorium, you can see that they need to learn that distance, angle, and shape exist, and that these are properties shared between sight and touch.

I expect that the same can be done for time, the self, and probably other aspects mentioned here. We can learn things approximately, without any a priori assumptions beyond the basic assumption that induction is valid, i.e. that things that appear true in our memories are more likely to appear true in our ongoing experience than things that appear false in our memories. (I have attempted, and I think succeeded, in making that definition time-free.) For establishing that this applies to time, I would first go about it by examining how babies learn object permanence, which seems like an example of minds which do not yet have an assumption of time. Similarly for the self, via the mirror test and the video/memory test.

Good job, well predicted. Even CFAR has degenerated into woo now.

Small correction: the level which goes insane or dies is the second level down, out of four total; 100 years rather than 1000. Though that is still roughly 3x what the 10,000-day mark would equal. The weaponized use of Penrose Quantum Mind is devised by the top level, who are seen only once per 1000 years. (As it is written: SciFiWritersHaveNoSenseOfScale. Even the particularly clever ones who play around with big ideas and write extremely ingroup-y doorstoppers.)

There have been several memes by the ingroup that “escaped containment” and reached a huge audience. I think that none of them are accidental, they are all expressions of important truths that could not be spread except by this format.

None of those are actually good or important. They're nonsense.

It trains you to play with ideas, to improv, read between the lines, make the shadow visible.

This is, again, actively bad. Learning to bullshit effectively is in no way a good thing.

Post-rationalists are just non-rationalists with funny hats.

My approach has been to try to have each holiday target one value. "Humans can achieve phenomenal things", "Civilization is fragile", "Cached thoughts sneak up on you". Time of year and trappings I pick downstream from the core idea. It's tricky because, if it works, it will stick even if the value isn't good, or stops being good in the future, so I think it's necessary to be pretty careful with what values you pick. (Also I haven't gotten anyone else to collaborate with, and it's definitely not a one-person job; relatedly I have no successes yet.)

Generally I think good holidays should have a very small core of essential trait/ritual, preferably explicitly marked to organizers, and the rest can accrete, be discarded, accrete again differently, vary from one instance to the next, etc.

For Solstice (or, as I internally label it, "Brighter Night") I consider the core to be the three-part arc (dim > dark > light), with the candle-lighting ritual if at all possible, and the Speech of Darkness. The rest is nonessential. (Even my favorite parts, primarily the choir.)

Petrov Day doesn't have a core beyond "take a minute to not destroy the world", which is not a useful core. I think that is related to why it keeps going badly - people trying to add a core without carefully thinking about what the effects of the addition will be.

The 'application process' used by Overcoming Bias back in the day, namely 'you have to send an email with your post and name', would probably be entirely sufficient. It screens out almost everyone, after all.

But in actuality, what I'd most favor would be everyone maintaining their own blog and the central repository being nothing but a blogroll. Maybe allow voting on the blogroll's ordering.
