The commenting guidelines allow users to set their own norms of communication for their own private posts. This lets us experiment with different norms to see which work better, and it also allows the LessWrong community to diversify into different subcommunities should there be interest. It says habryka's guidelines because that's who posted this post; if you go back through the other open threads, you will see that other people posted many of them, with different commenting guidelines here and there. I think the posts that speak to this the most are:
[Meta] New moderation tools and moderation guidelines (by habryka)
Meta-tations on Moderation: Towards Public Archipelago (by Raemon)
There's a post somewhere in the rationalsphere that I can't relocate for the life of me. Can anybody help?
The point was communication. The example given was the difference between a lecture and a sermon. The distinction the author made was something like a professor talking to students in class, each of whom then goes home and does homework by themselves, versus a preacher who gives his sermon to the congregation, with the expectation that they will break off into groups and discuss the sermon among themselves.
I have a vague memory that there were graphics involved.
I have tried local search on LessWrong, site search of LessWrong, and browsing the post histories of a few people who seemed like they might be the author, based on a vague sense of aesthetic similarity. I was sure it was here, but now I fear it was elsewhere, or that it is hidden in some other kind of post.
I really liked this essay.
> And as hacking bad tests shrinks in importance, education will evolve to stop training us to do it.
This, however, is entirely excessive optimism.
I get all the normal pain/temperature/pressure/friction feedback just fine. The only problem is knowing where they are in space without looking at them.
I don't know what the procedure for this would be, but it occurs to me that if we can specify information about an environment via differential equations inside the neural network, then we can also compare that network's output to the output of one that doesn't have the same information.
In the name of learning more about how to interpret the models, we could try something like:
1) Construct an artificial environment which we can completely specify via a set of differential equations.
2) Train a neural network on that environment for every combination of those differential equations.
3) Compare all of these to several control cases where no differential equations are provided.
It seems like the way the control cases differ from each of the cases with structural information should give us some information about how the network learns the environmental structure.
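In case it helps make the comparison concrete, here is a minimal sketch of one way steps 1-3 could look on a toy environment, assuming that "providing" a differential equation means adding a residual penalty for that equation to the loss. The exponential-decay environment, that encoding choice, and all the names below are my own illustrative assumptions, not anything from the post:

```python
# Minimal sketch: compare a network trained with data only (control) against one
# that is also penalized for violating the known equation dx/dt = -k * x.
import torch
import torch.nn as nn

torch.manual_seed(0)
k = 0.5                                             # known decay constant of the toy environment
t = torch.linspace(0.0, 5.0, 200).unsqueeze(1)      # time points, shape (200, 1)
x_true = torch.exp(-k * t)                          # analytic solution x(t) = exp(-k t)

def make_net():
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def train(use_ode_information: bool):
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(1000):
        opt.zero_grad()
        t_req = t.clone().requires_grad_(True)
        x_pred = net(t_req)
        loss = ((x_pred - x_true) ** 2).mean()       # plain supervised data loss
        if use_ode_information:
            # Penalize violation of the known structure dx/dt + k*x = 0.
            dx_dt = torch.autograd.grad(x_pred.sum(), t_req, create_graph=True)[0]
            loss = loss + ((dx_dt + k * x_pred) ** 2).mean()
        loss.backward()
        opt.step()
    return net

control = train(use_ode_information=False)           # control case: data only
informed = train(use_ode_information=True)            # case with structural information

# Compare how the two nets extrapolate past the training interval.
t_extra = torch.linspace(5.0, 8.0, 50).unsqueeze(1)
with torch.no_grad():
    print("control :", control(t_extra).squeeze()[:5])
    print("informed:", informed(t_extra).squeeze()[:5])
    print("true    :", torch.exp(-k * t_extra).squeeze()[:5])
```

Looking at where the two trained networks diverge (for example in extrapolation, as above, or in their internal representations) is one possible version of the comparison described in step 3.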
I can vouch for sudden and significant gains in comfort and functionality from focusing on improving your posture. The method I used was less thorough than the one here; I just used an exercise band and a few stretching exercises to improve my shoulder position. This improved comfort immediately, and significantly reduced the fragility of my back within a matter of days.
I just discovered this sequence, and I am pleased and impressed. The subject of this post is something I have recently been wanting to learn more about, because I have a problem in this area.
Specifically, I never know where my feet are positioned. I can infer it, and I can confirm it, but I simply don't feel the position of my feet in relation to the rest of my body. Even when I am trying to focus on it.
By contrast, I do feel where my calves are in space. Most of the time when I need to place my feet precisely, I am actually just aiming my calves at that point and relying on the fact that my feet are on the end of my calves.
This doesn't strike directly at the sampling question, but it is related to several of your ideas about incorporating the differentiable function: Neural Ordinary Differential Equations.
This is being exploited most heavily in the Julia community. The broader pitch is that they have formalized the relationship between differential equations and neural networks. This allows things like:
The last one is the most intriguing to me, mostly because it solves the problem of machine learning models having to start from scratch even in environments where information about the environment's structure is known. For example, you can provide it with Maxwell's Equations and then it "knows" electromagnetism.
There is a blog post about the paper and using it with the DifferentialEquations.jl and Flux.jl libraries. There is also a good talk by Christopher Rackauckas about the approach.
It is mostly about using ML in the physical sciences, which seems to be going by the name Scientific ML now.
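To make the "don't start from scratch" idea a bit more concrete: the cited work uses DifferentialEquations.jl and Flux.jl, but the core trick can be sketched in PyTorch with a hand-rolled Euler integrator, where the right-hand side of the ODE is a known physics term plus a small learned correction. Everything below is an illustrative toy of my own, not the actual API or method of those libraries:

```python
# Sketch: known physics term + learned neural correction, trained by
# differentiating through a simple Euler ODE solver.
import torch
import torch.nn as nn

torch.manual_seed(0)

# True dynamics we pretend not to fully know: dx/dt = -0.5*x + 0.3*sin(t).
def true_rhs(t, x):
    return -0.5 * x + 0.3 * torch.sin(t)

# Known part of the physics that we hand to the model: the -0.5*x term.
def known_rhs(t, x):
    return -0.5 * x

# Small network that learns whatever the known physics misses.
correction = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

def integrate(rhs, x0, ts):
    """Euler integration of dx/dt = rhs(t, x) over the time grid ts."""
    xs = [x0]
    for i in range(len(ts) - 1):
        dt = ts[i + 1] - ts[i]
        xs.append(xs[-1] + dt * rhs(ts[i], xs[-1]))
    return torch.stack(xs)

def model_rhs(t, x):
    # Known physics plus the learned correction term.
    inp = torch.stack([t.expand_as(x), x], dim=-1)
    return known_rhs(t, x) + correction(inp).squeeze(-1)

ts = torch.linspace(0.0, 6.0, 120)
x0 = torch.tensor([1.0])
with torch.no_grad():
    target = integrate(true_rhs, x0, ts)     # "observed" trajectory

opt = torch.optim.Adam(correction.parameters(), lr=1e-2)
for step in range(500):
    opt.zero_grad()
    pred = integrate(model_rhs, x0, ts)      # backprop through the solver
    loss = ((pred - target) ** 2).mean()
    loss.backward()
    opt.step()

print("final trajectory error:", loss.item())
```

The appeal, as I understand it, is exactly what the Maxwell's Equations example suggests: the network only has to learn the part of the dynamics that isn't already written down.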
Strong upvote, this is amazing to me. On the post:
Some thoughts on the results:
I feel like this was rendered into its own explicit meme in the form of The Game.