Steveot
Steveot has not written any posts yet.

Thanks, I really like these concepts; Grice's maxims in particular were new to me and seem very useful. Your list also got me thinking, and I feel like I have some (obvious) concepts in mind which I often usefully apply but which may not be so well known:
The data-processing inequality is often useful, especially when thinking about automated tools like LLMs. It states that if Y is the output of passing X through a fixed channel K (whose noise is independent of everything else, so that Z, X, Y form a Markov chain), then for any Z the mutual information between X and Z is always at least as large as that between Y and Z, i.e., \(I(X;Z) \ge I(Y;Z)\). E.g.,... (read more)
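As a quick numerical sanity check (my own sketch, not from the original comment; the joint distribution and channel below are randomly made up), one can verify \(I(Z;X) \ge I(Z;Y)\) on finite alphabets:

```python
import numpy as np

def mutual_info(pxy):
    """Mutual information (in nats) of a joint distribution given as a 2D array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)

# Arbitrary joint distribution of (Z, X) over small finite alphabets.
pzx = rng.random((3, 4))
pzx /= pzx.sum()

# A fixed channel K: row x holds the conditional distribution p(y | x).
K = rng.random((4, 5))
K /= K.sum(axis=1, keepdims=True)

# Y = K(X): since Z -- X -- Y is a Markov chain, p(z, y) = sum_x p(z, x) K(x, y).
pzy = pzx @ K

print(mutual_info(pzx), ">=", mutual_info(pzy))  # data-processing inequality
```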
Tangential comment, but one where I'd be interested in how people in this community feel: When you wrote about the meetup and the sign saying "I might be lying", I immediately thought how little fun that must have been, and even how bad it might have felt for others attending. In my mind, people attending a meetup don't want to be lied to, even if it was semi-communicated (I say "semi" because the statement on the sign was trivially true for every person; you did not clearly state that you were definitely going to lie about certain things) and happened in the context of a "social experiment". To me it seems quite similar to someone wearing a sign saying "I might be rude" and then actually being rude.
Another intuition I often found useful: KL-divergence behaves more like the square of a metric than a metric.
The clearest indicator of this is that KL-divergence satisfies a kind of Pythagorean theorem, established in a paper by Csiszár (1975), see https://www.jstor.org/stable/2959270#metadata_info_tab_contents . The intuition is exactly the same as in the Euclidean case: If we project a point A onto a convex set S (say the projection is B), and if C is another point in the set S, then the standard Pythagorean theorem tells us that the angle of the triangle ABC at B is at least 90 degrees, or in other words \(\|A-C\|^2 \ge \|A-B\|^2 + \|B-C\|^2\). And the same holds if... (read more)
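For reference, the KL version of this inequality (in one common formulation, with \(B = \arg\min_{Q \in S} D_{\mathrm{KL}}(Q \,\|\, A)\) as the projection) reads

\[
D_{\mathrm{KL}}(C \,\|\, A) \;\ge\; D_{\mathrm{KL}}(C \,\|\, B) + D_{\mathrm{KL}}(B \,\|\, A) \quad \text{for all } C \in S,
\]

which is exactly the Euclidean inequality above with \(D_{\mathrm{KL}}\) playing the role of squared distance.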
It's not a mathematical argument, but here is where I first came across such an analogy drawn between the training of neural networks and evolution, along with a potential interpretation of what it means in terms of sample-(in)efficiency.
I thought about Agency Q4 (counterargument to Pearl) recently, but couldn't come up with anything convincing. Does anyone have a strong view/argument here?
I like the idea a lot.
However, I really need simple systems in my work routine. Things like "hitting a stopwatch, dividing by three, and carrying over previous rest time" already feel like a lot. Even though it's just a few seconds, I prefer these systems to take as little energy as possible to maintain.
What I thought of was using a simple shell script: I just start it at the beginning of work and hit a random key whenever I switch from work to rest or vice versa. It automatically keeps track of my break times.
I don't have Linux at home, but what I tried online ( https://www.onlinegdb.com/online_bash_shell ) is the following: (I am... (read more)
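The script itself is cut off above, so purely to illustrate the idea (this is a hypothetical sketch, not the original code, and it assumes the divide-by-three rest rule mentioned earlier), a version in Python could look like:

```python
import time

def run_timer(ratio=3):
    """Toggle between work and rest on each Enter press; Ctrl+C to stop."""
    totals = {"work": 0.0, "rest": 0.0}
    mode = "work"
    last = time.monotonic()
    print("Started in work mode. Press Enter to switch, Ctrl+C to stop.")
    try:
        while True:
            input()  # blocks until a key (Enter) is pressed
            now = time.monotonic()
            totals[mode] += now - last
            last = now
            mode = "rest" if mode == "work" else "work"
            budget = totals["work"] / ratio - totals["rest"]
            print(f"Now in {mode} mode. Work {totals['work']:.0f}s, "
                  f"rest {totals['rest']:.0f}s, rest budget left {budget:.0f}s.")
    except KeyboardInterrupt:
        print("\nFinal totals:", {k: round(v) for k, v in totals.items()})

if __name__ == "__main__":
    run_timer()
```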
Thanks, I finally got it. What I just now fully understood is that the final inequality holds with high probability over the draw of the data (i.e., as you say, the randomness is over the sample), while the learning bound or loss reduction is given for the posterior.
Thanks, I was wondering what people referred to when mentioning PAC-Bayes bounds. I am still a bit confused. Could you explain how the prior and the posterior depend on the data (if they do), and how to interpret the final inequality in this light? In particular, I am wondering because the bound seems to be tightest when the posterior equals the prior. Minor comment: I think there is a small typo in the formula?
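For context, one standard form of a PAC-Bayes bound (this statement is my addition in McAllester/Maurer form, with prior \(\pi\), posterior \(\rho\), and an i.i.d. sample \(S\) of size \(n\); it need not match the exact bound discussed here): with probability at least \(1-\delta\) over the draw of \(S\), simultaneously for all \(\rho\),

\[
\mathbb{E}_{h \sim \rho}\big[L(h)\big] \;\le\; \mathbb{E}_{h \sim \rho}\big[\hat{L}_S(h)\big] + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln(2\sqrt{n}/\delta)}{2n}}.
\]

The prior \(\pi\) must be chosen before seeing the data, while \(\rho\) may depend on it; the complexity term vanishes exactly when \(\rho = \pi\).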
The main thing that caught my attention was that the random variables are often assumed to be independent. I am not sure whether this is already included, but if one wants to allow adding, multiplying, taking mixtures, etc. of random variables that are not independent, one way to do it is via copulas. For sampling-based methods, working with copulas is a way of incorporating a moderate variety of possible dependence structures at little additional computational cost.
The basic idea is to take the dependence structure of some tractable multivariate random variable (e.g., one from which we can sample quickly, like a multivariate Gaussian) and transfer it to the individual one-dimensional distributions one would like to add, multiply, etc.
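As a rough sketch of how this looks for a Gaussian copula (my own illustration; the correlation and marginals below are chosen arbitrarily), assuming numpy and scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Dependence structure: a correlated bivariate Gaussian, cheap to sample from.
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=10_000)

# Push each coordinate through the standard normal CDF to get dependent
# Uniform(0, 1) variables -- this is the copula itself...
u = stats.norm.cdf(z)

# ...then through the inverse CDFs of the desired marginals (here an
# exponential and a lognormal, picked just for illustration).
x = stats.expon.ppf(u[:, 0])
y = stats.lognorm.ppf(u[:, 1], s=0.5)

# x and y keep their marginals but are now dependent, so sums, products,
# mixtures, etc. can be estimated by Monte Carlo on the paired samples.
print("correlation:", np.corrcoef(x, y)[0, 1], "E[x + y] ~", (x + y).mean())
```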
In your first argument, it seems to me that you are slightly arguing against virtue-based ethics under the assumption that consequentialism is true. In your argument, the only real value arises from good consequences (however those are defined), while for virtue-based ethics (if I understand correctly) the value would arise from truly acting virtuously (whatever that means). In my mind, neither framework can really be shown to be true (it seems like a choice). However, framing it like this would allow for something like a reverse of your argument, within the framework of virtue ethics and against consequentialism:
"If you actually have values then thinking about how to act is just taking these... (read more)