Is there a bug around resizing images? Previously I've found that my image size choice is ignored unless the image has a caption. But for GIFs, the size choice seems to be ignored even when there is a caption; the image instead renders at the full width of the article.
The image must be hosted!
This is no longer true, right?
(Also, I came here looking for a list of supported image types; I'm trying to insert an SVG, but it's just getting ignored.)
Gotcha, that makes sense! Agreed that an announcement tag is a good solution.
Meta-comment: it might be a good idea to create an official Lightcone-or-whatever LW account that you can publish these kinds of posts from. Then someone could e.g. subscribe to that user and get notified of all the official announcement-type posts, without having to subscribe to the personal account of Ruby-or-Ray-etc.
this post [link]
This link is missing!
theoretical progress has been considerably faster than expected, while crossing the theory-practice gap has been mildly slower than expected. (Note that “theory progressing faster than expected, practice slower” is a potential red flag for theory coming decoupled from reality
I appreciate you flagging this. I read the first sentence, and my immediate next thought was the heuristic in the parenthetical.
Chapter 3 of Parr (2022)
My browser thinks this is an invalid link and won't let me open it.
Totally baseless conjecture that I have not thought about for very long: chaos is identical to Turing completeness. All dynamical systems that demonstrate chaotic behavior are Turing complete (or at least implement an undecidable procedure).
Has anyone heard of an established connection here?
FWIW I cannot find your podcast by searching in the app "Pocket Casts" (though I can on Spotify).
If anyone's interested in doing an even less formal version of this, I think it would be really useful for me to have semi-regular chats with other people in the alignment space. This could be anything from "you mentor me for an hour a week at the Lightcone office" to "we chat for 15 minutes on zoom every few weeks". I feel reasonably connected to the community, but I think I would strongly benefit from more two-way real-time interaction.
(More info about me: I'm currently doing full-time independent alignment research, but just on my own, with no structure...
Heh, well, see the aforementioned
it's almost what "doing math" is for me
It also feels like you're asking something like, "what's the most important problem you are trying to solve by having visual perception?" It's kind of just how I navigate the world at all (atoms or math).
But let me take your question at face value and try to answer it.
I think the main answer is something like "semantics". So much of my experiential knowledge is encoded in this physical, 3D physics manner, and when I can match up a symbolic expression with a physical scenario, I get a w...
Right, so my question to you is, how do you do math?? (This is probably a silly question, but I'd love to hear your humor-me answer.)
It sure would be awesome if Lightcone Infrastructure spun up a Mastodon instance for the extended rationalist/EA/AI safety communities.
Hm, it seems pretty dependent on ontology to me – that's pretty much what the set of all states is, an ontology for how the world could be.
In case you missed it, LW 2.0 has feature support for creating sequences. If you hover over your username, the menu has a link to https://www.lesswrong.com/sequencesnew
Is this written against some hypothetical "static world" assumption
Basically exactly that, yeah. But that assumption exists both on a conscious level (in that many people don't consciously realize how much the universe has changed) and on a subconscious level, in that many aspects of how the world currently is feel stable, even if you know they're not.
I'm psyched to have a podcast version! The narrator did a great job. I was wondering how they were going to handle several aspects of the post, and I liked how they did all of them.
Totally agree. Oliver & co. won tons of Bayes points off me.
Heh, I'm still skimming enough to catch this, but definitely not evaluating arguments.
I'm definitely still open to both changing my mind about the best use of terms and also updating the terminology in the sequence (although I suspect that will be quite a non-trivial amount of modified prose). And I think it's best if I don't actually think about it until after I publish another post.
I'd also be much more inclined to think harder about this discussion if there were more than two people involved.
My main goal here has always been "clearly explain the existin...
quantum mechanics famously provides the measure on phase-space that classical statistical mechanics took as axiomatic
I'd be interested in a citation for what you're referring to here!
Did you want your "abstract entropy" to encompass both of these?
Indeed I definitely do.
I would add a big fat disclaimer
There are a bunch of places where I think I flagged relevant things, and I'm curious whether these seem like enough to you:
Just mulling over other names, I think "description length" is the one I like best so far. Then "entropy" would be defined as minimum average description length.
That makes sense. In my post I'm saying that entropy is whatever binary string assignment you want, which does not depend on the probability distribution you're using to weight things. And then if you want the minimum average string length, that minimum does depend on the probability distribution.
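To make that concrete, here's a minimal Python sketch of the distinction (the states, the string assignment, and the distributions are all made up for illustration):

```python
import math

# An arbitrary prefix-free binary string assignment to four states.
# This assignment is fixed up front, with no reference to any
# probability distribution.
code = {"A": "0", "B": "10", "C": "110", "D": "111"}

# A probability distribution over the same states, used only to
# weight the string lengths.
dist = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}

# Average description length under this particular assignment.
avg_len = sum(p * len(code[s]) for s, p in dist.items())

# Shannon entropy: the minimum achievable average description length
# over all prefix-free assignments. This *does* depend on dist.
shannon = -sum(p * math.log2(p) for p in dist.values())

print(avg_len)  # 1.75 -- this assignment happens to be optimal for dist
print(shannon)  # 1.75

# Under a different distribution, the same fixed assignment is no
# longer optimal: the average length exceeds the entropy lower bound.
dist2 = {s: 0.25 for s in code}
avg_len2 = sum(p * len(code[s]) for s, p in dist2.items())  # 2.25
shannon2 = -sum(p * math.log2(p) for p in dist2.values())   # 2.0
```

The string assignment never changes; only the average (and hence which assignment achieves the minimum) depends on the distribution.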
one of my personal spicy takes...
Omfg, I love hearing your spicy takes. (I think I remember you advocating hard tabs, and trinary logic.)
ə, pronounced "schwa", for 1/e
lug, pronounced /ləg/, for log base ə
nl for "negative logarithm"
XD XD guys I literally can't
Extremely pleased with this reception! I indeed feel pretty seen by it.
I think he suggested that this naming fits with something he wants to do with K complexity
I didn't mean something I'm doing, I meant that the field of K-complexity just straightforwardly uses the word "entropy" to refer to it. Let me see if I can dig up some references.
Part of what confuses me about your objection is that it seems like averages of things can usually be treated the same as the individual things. E.g. an average number of apples is a number of apples, and average height is a height ("Bob is taller than Alice" is treated the same as "men are taller than women"). The sky is blue, by which we mean that the average photon frequency is in the range defined as blue; we also just say "a blue photon".
A possible counter-example I can think of is temperature. Temperature is the average [something like] kinetic energ...
(Let's not call it "probability" because that has too much baggage.)
This aside raises concerns for me, like it makes me worry that maybe we're more deeply not on the same page. It seems to me like the weighting is just straightforward probability, and that it's important to call it that.
One thing I'm not very confident about is how working scientists use the concept of "macrostate". If I had good resources for that I might change some of how the sequence is written, because I don't want to create any confusion for people who use this sequence to learn and then go on to work in a related field. (...That said, it's not like people aren't already confused. I kind of expect most working scientists to be confused about entropy outside their exact domain's use.)
Here's another thing that might be adding to our confusion. It just so happens that in the particular system that is this universe, all states with the same total energy are equally likely. That's not true for most systems (which don't even have a concept of energy), and so it doesn't seem like a part of abstract entropy to me. So e.g. macrostates don't necessarily contain microstates of equal probability (which I think you've implied a couple times).
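Here's a toy sketch of that last point, with made-up numbers: a non-uniform distribution over four microstates, partitioned into two macrostates, where the microstates inside a macrostate are not equally likely.

```python
# Made-up microstate probabilities (not uniform, no energy concept).
microstate_probs = {"s1": 0.4, "s2": 0.1, "s3": 0.3, "s4": 0.2}

# An arbitrary partition of the microstates into two macrostates.
macrostates = {"M1": ["s1", "s2"], "M2": ["s3", "s4"]}

for name, members in macrostates.items():
    total = sum(microstate_probs[s] for s in members)
    print(name, total, [microstate_probs[s] for s in members])

# M1 0.5 [0.4, 0.1]  <- same macrostate, unequal member probabilities
# M2 0.5 [0.3, 0.2]
```

So the probability of a macrostate is still a sum over its members, but you can't recover it as (number of microstates) times (some common probability) the way you can in the equal-energy case.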
I'm not quite sure what the cruxes of our disagreement are yet. So I'm going to write up some more of how I'm thinking about things, which I think might be relevant.
When we decide to model a system and assign its states entropy, there's a question of what set of states we're including. Often, we're modelling part of the real universe. The real universe is in only one state at any given time. But we're ignorant of a bunch of parts of it (and we're also ignorant about exactly what states it will evolve into over time). So to do some analysis, we decide on so...
The historical baggage is something that tripped me up, too. In an upcoming post I have a section about classical thermodynamic entropy, including an explanation of the weird units!
Nice catches. I love that somebody double-checked all the binary strings. :)
I think it's also important for my definition of optimization (coming later), because individual microstates do deserve to be assigned a specific level of optimization.
That's a reasonable stance, but one of the main messages of the sequence is that we can start with the concept of individual states having entropy assigned to them, and derive everything else from there! This is especially relevant to the idea of using Kolmogorov complexity as entropy. Calling it "surprisal" or "information" has an information-theoretic connotation to it that I think doesn't apply in all contexts.
Maybe a slightly better title for the post would be "Plans are predictions, not optimization targets"? I found the "plans are predictions" part of the post to be the most insightful, and the rewording also removes a "should".
Loved this post. Both because I think this is a valuable set of reasoning heuristics, and because I read it in your voice, which made it feel something like a rationalist standup routine.
Should there be an "advice for new orgs" tag?
The Role of Deliberate Practice in the Acquisition of Expert Performance (PDF)
This link seems broken (though a Google search finds many copies of the PDF).
To anyone landing on this page, the CFAR handbook is now available on LessWrong as a native sequence.
I'd prefer the S'wentworth Law of Measurement
It might be useful to add a quick summary of how arXiv works. I vaguely had the impression that anyone could upload PDFs to it, but some of the comments seem to pretty solidly disagree with that.
I would especially especially love it if it popped out a .tex file that I could edit, since I'm very likely to be using different language on LW than I would in a fancy academic paper.
FYI the screenshots here say "Request feedback" but the actual button currently says "Get feedback". Might trip someone up if they're trying to search for the text.
I feel generally agreeable towards this concept, and also towards the idea of being careful to use phrases as they are defined.
But I feel something else after starting to read the Arbital page. Since you quadruply insisted on it, I went ahead and actually opened the page and started reading it. And several things felt off in quick succession. I'm going to think out loud through those things here.
The first part is the concept of "guarded term". Here's part of the definition of that.
stretching it ... is an unusually strong discourtesy.
...You can't just say t...
Okay, but how do we get technical terms with precise meanings, terms that are analyzable using propositions that can be investigated and decided using logic and observation? If we're in a context where the meaning of words is automatically eroded, projected into low-dimensional, low-context concepts shaped by whatever the surrounding political forces want, then we're not going to get anywhere without being able to fix the meaning of the words we need for non-obvious, technically important uses.
I have found throughout my life that there is virtually no correlation between what media other people like (friends, critics, etc.) and what I like. Not even a negative correlation; just none. I have given up trying to understand this particular phenomenon.
I share some of your frustrations with what Yudkowsky says, but I really wish you wouldn't reinforce the implicit equating of [Yudkowsky's views] with [what LW as a whole believes]. There's tons of content on here arguing opposing views.
I'm trying out independent AI alignment research.
Nice post! My main takeaway is "incentives are optimization pressures". I may have had that thought before but this tied it nicely in a bow.
Some editing suggestions/nitpicks:
The bullet point that starts with "As evidence for #3" ends with a hanging "How".
Quite recently, a lot of ideas have sort of snapped together into a coherent mindset.
I would put "for me" at the end of this. It does kind of read to me like you're about to describe for us how a scientific field has recently had a breakthrough.
I don't think I'm following what "Skin in the game" refers to....
I'm a person who has lived in the Bay area almost the whole time CFAR has existed, and am also moderately (though not intensely) intertwined with that part of the rationalist social network. I was going to write up my own answer but I think you pretty much nailed it with your conclusion here, especially with the part about distinguishing individual people from the institution.