Year 3 Computer Science student
find me anywhere via linktr.ee/papetoast
It is hard to see; changed to n.
In my life I have never seen a good one-paragraph explanation of backpropagation, so I wrote one.
The most natural algorithms for calculating derivatives work by traversing the expression syntax tree[1]. The tree has two ends, and starting the traversal from each end gives one of two good derivative algorithms: forward propagation (starting from the input variables) and backward propagation (starting from the output variable). In both algorithms, calculating the derivative of one output variable with respect to one input variable creates a lot of intermediate artifacts. In forward propagation, these artifacts mean you get the derivatives of all intermediate and output variables with respect to that one input for ~free; in backward propagation, you get the derivatives of that one output with respect to all intermediate and input variables for ~free. Backpropagation is used in machine learning because there is usually only one output variable (the loss, a number representing the difference between the model's prediction and reality) but a lot of input variables (the parameters, on the scale of millions to billions).
This blogpost has the clearest explanation. Credit for the image goes there too.
or maybe a directed acyclic graph for multivariable vector-valued functions like f(x,y)=(2x+y, y-x)
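Here is a minimal sketch of the two sweeps on the toy expression z = x*y + x (my own illustrative code, not from the blogpost; the names and numbers are assumptions):

```python
# Forward mode: pick ONE input (here x) and push (value, d/dx) pairs
# up the expression tree. Every node's d/dx comes out for ~free.
def forward_mode(x, y):
    vx = (x, 1.0)   # dx/dx = 1
    vy = (y, 0.0)   # dy/dx = 0
    mul = (vx[0] * vy[0], vx[1] * vy[0] + vx[0] * vy[1])  # product rule
    out = (mul[0] + vx[0], mul[1] + vx[1])                # sum rule
    return out      # (z, dz/dx)

# Reverse mode: pick ONE output (z) and pull dz/d(node) back down the
# tree. dz/d(every input) comes out for ~free.
def reverse_mode(x, y):
    mul = x * y
    out = mul + x
    d_out = 1.0            # seed: dz/dz = 1
    d_mul = d_out          # out = mul + x
    d_x = d_out            # ...the "+ x" term
    d_x += d_mul * y       # mul = x * y
    d_y = d_mul * x
    return out, d_x, d_y   # (z, dz/dx, dz/dy)

print(forward_mode(2.0, 3.0))  # (8.0, 4.0)
print(reverse_mode(2.0, 3.0))  # (8.0, 4.0, 2.0)
```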
Donated $25 for all the things I have learned here.
Strongly agreed. Content creators seem to get around this by creating multiple accounts for different purposes, but this is difficult to maintain for most people.
I rarely see them show awareness of the possibility that selection bias has created the effect they're describing.
In my experience with the people I encounter, this is not true ;)
Joe Rogero: Buying something more valuable with something less valuable should never feel like a terrible deal. If it does, something is wrong.
clone of saturn: It's completely normal to feel terrible about being forced to choose only one of two things you value very highly.
https://www.lesswrong.com/posts/dRTj2q4n8nmv46Xok/cost-not-sacrifice?commentId=zQPw7tnLzDysRcdQv
Yes!
Bob can choose whether to hide this waste (at the cost of the utility loss from having $300 and a worse listening experience, but with the "benefit" of misleading Tim about his misplaced altruism)
True in my example. I acknowledge that my example is flawed; it should have been more explicit about there being an alternative. Quoting myself from my comment to Vladimir_Nesov:
Anyways, the unwritten thing is that Bob cares about having a quality headphone and a good pair of shoes equally. So given that he already has an alright headphone, he would get more utility by buying a good pair of shoes instead. It is essentially a choice between (a) getting a $300 headphone and (b) getting a $100 headphone and a $300 pair of shoes.
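To make that comparison concrete, here is a tiny sketch with assumed utility numbers (the quality values 1.0 and 0.7 are my illustrative assumptions, not from the thread):

```python
# Bob values headphone quality and good shoes equally (per the comment);
# the specific numbers are assumptions for illustration only.
def bob_utility(headphone_quality, has_good_shoes):
    return headphone_quality + (1.0 if has_good_shoes else 0.0)

# (a) $300 headphone, no shoes: assume quality 1.0
# (b) $100 (alright) headphone + $300 shoes: assume quality 0.7
print(bob_utility(1.0, False))  # 1.0
print(bob_utility(0.7, True))   # 1.7 -> option (b) gives more utility
```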
If the bad translation is good enough that the incremental value of a good translation doesn't justify doing it, then that is your answer.
I do accept this as the rational answer, but that doesn't mean it is not irritating. Suppose A (a skillful translator) cares about having a good translation of X slightly more than of Y, and B (a poor translator) cares about Y much more than X. If B can act first, he can work on X and thereby "force" A (via expected utility) to work on Y. It was a failure of mine not to talk about this difference in preferences in my examples, expecting people to extrapolate and infer it.
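A sketch of that forcing dynamic with made-up numbers (all qualities and values below are assumptions, not from the thread):

```python
QUALITY = {"A": 1.0, "B": 0.4}   # A is skillful, B is poor (assumed)
VALUE = {                        # how much each agent values each text
    "A": {"X": 1.1, "Y": 1.0},   # A cares about X slightly more
    "B": {"X": 0.2, "Y": 1.0},   # B cares about Y much more
}

def utility(agent, assignment):
    # Utility = sum over texts of (value to agent) * (best quality produced).
    total = 0.0
    for text in ("X", "Y"):
        best = max((QUALITY[w] for w, t in assignment.items() if t == text),
                   default=0.0)
        total += VALUE[agent][text] * best
    return total

# B commits to X first; A then compares doubling up on X vs covering Y.
for a_choice in ("X", "Y"):
    print(a_choice, round(utility("A", {"B": "X", "A": a_choice}), 2))
# X 1.1   (X done well by A, Y left undone)
# Y 1.44  (X done poorly by B, Y done well by A) -> A is pushed onto Y
# Without B's move, A alone would have picked X (1.1 > 1.0).
```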
There is also the issue of things only being partially orderable: two options can each be better along a different dimension, so neither is simply "more valuable" than the other.