(This is part five in a sequence on Machine Learning based on this book. Click here for part 1.)

The first three posts of this sequence have defined PAC learning and established some theoretical results, problems (like overfitting), and limitations. While that is helpful, it doesn't actually answer the question of how to solve real problems (unless the brute force approach is viable). The first way in which we approach that question is to study particular classes of problems and prove that they are learnable. For example, in the previous post, we've looked (among other things) at linear regression, where the loss can be viewed as a function ℓ:Rd→R of the parameter vector. In this post, we focus on convex learning problems, where the loss function also has the above form and is additionally convex.

Convexity

We begin with sets rather than functions.

Convex sets

A set M (that is part of a vector space) is called convex iff for any two points in that set, the line segment which connects both points is a subset of M.

The condition that M is part of a vector space is missing in the book, but it is key to guarantee that a line between two points exists. Convexity cannot be defined for mere topological spaces, or even metric spaces. In our case, all of our sets will live in Rd for some d∈N+.

As an example, we can look at letters as a subset of the plane. None of the letters in this font are quite convex – l and I are closest but not quite there. The only convex symbols in this post that I've noticed are . and ' and | and –.

Conversely, every regular filled polygon with n corners is convex. The circle is not convex (no two points have a line segment which is contained in the circle), but the disc (filled circle) is convex. The disc with an arbitrary set of points on its boundary (i.e. the circle) taken out remains convex. The disc with any other point taken out is not convex, neither is the disc with any additional point added. You get the idea. (To be precise on the last one, the mathematical description of the disc is D={x∈R2|||x||≤1}, so there is no way to add a single point that somehow touches the boundary.)
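If you want to poke at this numerically, here is a tiny sketch (the helper functions and their names are my own, not from the book): membership tests for the unit disc and the unit circle, plus a check that a segment between two boundary points stays inside the disc but leaves the circle.

```python
import math

def on_segment(p, q, alpha):
    """The point (1 - alpha)*p + alpha*q on the segment from p to q."""
    return tuple((1 - alpha) * pi + alpha * qi for pi, qi in zip(p, q))

def in_disc(p):
    """Membership in the closed unit disc D = {x in R^2 : ||x|| <= 1}."""
    return math.hypot(*p) <= 1.0

def in_circle(p, tol=1e-9):
    """Membership in the unit circle, i.e. the boundary only."""
    return abs(math.hypot(*p) - 1.0) <= tol

p, q = (1.0, 0.0), (-1.0, 0.0)   # two points on the boundary
mid = on_segment(p, q, 0.5)      # their midpoint, the origin
print(in_disc(mid))              # True: the segment stays in the disc
print(in_circle(mid))            # False: it leaves the circle
```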

Convex functions

Informally, a function f:Rd→R is convex iff the set of all points on and above the function is convex as a subset of Rd+1, where the dependent variable goes up/downward.

(Here, the middle function (x3) is not convex because the red line segment is not in the blue set, but the left (|x|) and the right (x2) are convex.)

The formal definition is that a function f:Rd→R is convex iff for all x,y∈Rd, the inequality f(x+α(y−x))≤f(x)+α[f(y)−f(x)] holds for all α∈[0,1]. This says that a line segment connecting two points on the graph never dips below the graph.

If d=1, as was the case in all of my pictures, then f is convex iff the little pixie flying along the function graph never turns rightward. For differentiable f, this is the case iff f′ is monotonically increasing, which (if f is twice differentiable) is the case iff f′′(x)≥0 for all x∈R.
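As a sanity check, the inequality from the formal definition can be tested on a grid of points; a quick sketch (pure Python, helper name mine):

```python
# Spot-check the convexity inequality
# f(x + a(y - x)) <= f(x) + a[f(y) - f(x)] on a grid of points.
def is_convex_on(f, xs, alphas, tol=1e-9):
    return all(
        f(x + a * (y - x)) <= f(x) + a * (f(y) - f(x)) + tol
        for x in xs for y in xs for a in alphas
    )

xs = [i / 10 - 2 for i in range(41)]   # grid on [-2, 2]
alphas = [i / 10 for i in range(11)]   # alpha in [0, 1]

print(is_convex_on(abs, xs, alphas))                # True: |x| is convex
print(is_convex_on(lambda x: x * x, xs, alphas))    # True: x^2 is convex
print(is_convex_on(lambda x: x ** 3, xs, alphas))   # False: x^3 is not
```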

The main reason why convexity is a desirable property is that, for a convex function, every local minimum is a global minimum. Here is a proof:

Suppose that x∈Rd is a local minimum. Then we find some ball Bd(x,ϵ):={p∈Rd|||p−x||≤ϵ} around x such that f(y)≥f(x) for all y in the ball (this is what it means to be a local minimum in Rd). Now let z be an arbitrary point in Rd; we show that its function value can't lie below that of x. Imagine the line segment from x to z. A part of it must lie in our ball, so we find some (small) δ∈R+ such that x+δ[z−x]∈Bd(x,ϵ). Then (because x is our local minimum), we have that f(x)≤f(x+δ[z−x]). By convexity of f we have f(x+δ[z−x])≤f(x)+δ[f(z)−f(x)], so taken together we obtain the inequality

f(x)≤f(x)+δ[f(z)−f(x)]

Or equivalently δ[f(z)−f(x)]≥0 which is to say that δf(z)≥δf(x) which is to say that f(z)≥f(x).

If there are several global minima, then one can draw a line segment between two of them, and by convexity the function cannot rise above the minimal value along that segment, so every point on the segment is also a global minimum. Thus the set of global minima is itself convex: either it is a single point, or a whole continuum. This is all about the difference between ≤ and <. The simplest example is a constant function – it is convex, and every point is a global minimum.

Jensen's Inequality

The key fact about convex functions, I would argue, is Jensen's inequality:

Given α1,...,αn∈R+ with α1+⋯+αn=1, if f:Rd→R is convex, then for any sequence (x1,...,xn)∈(Rd)n, it holds that f(α1x1+⋯+αnxn)≤α1f(x1)+⋯+αnf(xn).

If you look at the inequality above, you might notice that it is almost the definition of linearity, except for the condition α1+⋯+αn=1 and the fact that we have ≤ instead of =. So convex functions fulfill the linearity property as an inequality rather than an equality (almost). In particular, linear functions are convex. Conversely, concave functions (these are functions where the ≤ in the definition of convex functions is a ≥) also fulfill the above property as an inequality, only the sign turns around again. In particular, linear functions are concave. To refresh your memory, here is the definition of convexity:

f(x+α(y−x))≤f(x)+α[f(y)−f(x)]∀x,y∈Rd,α∈[0,1]

So to summarize: convex functions never turn rightward, concave functions never turn leftward, and the intersection of both does neither, i.e., always goes straight, i.e., is linear. Looking at convexity and concavity as a generalization of linearity might further motivate the concept.

Terms of the form x+a(y−x), which one sees quite often (for example in defining points on a line segment), can be equivalently written as (1−a)x+ay. I think the first form is more intuitive; however, the second one generalizes a bit better. We see that x and y are given weights, and those weights sum up to 1. If one goes from 2 weighted values to n weighted values (still all non-negative), one gets Jensen's inequality. Thus, the statement of Jensen's inequality is that if you take any number of points on the graph and construct a weighted mean, that resulting point still lies above the graph. See wikipedia's page for a simple proof via induction.
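This weighted-mean reading is easy to check numerically; a small sketch with random points and normalized random weights, using x2 as the convex function:

```python
import random

random.seed(0)

def f(x):
    return x * x   # a convex function

n = 5
xs = [random.uniform(-3, 3) for _ in range(n)]
ws = [random.uniform(0, 1) for _ in range(n)]
total = sum(ws)
ws = [w / total for w in ws]   # the weights now sum to 1

# f of the weighted mean vs. the weighted mean of f:
lhs = f(sum(w * x for w, x in zip(ws, xs)))
rhs = sum(w * f(x) for w, x in zip(ws, xs))
print(lhs <= rhs)   # Jensen's inequality: True for convex f
```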

Guaranteeing learnability

Recall that we are trying to find useful solvable special cases of the setting "minimize a loss function of the form ℓ:Rd→R". This can be divided into three tasks:

(1) define the special case

(2) demonstrate that this special case is indeed solvable

(3) apply the class as widely as possible

This chapter is about (1). (When I say "chapter" or "section", I'm referring to the level-1 and level-2 headlines of this post as visible in the navigation at the left.) The reason why we aren't already done with (1) is that convexity of the loss function alone turns out to be insufficient to guarantee PAC learnability. We'll discuss a counter-example in the context of linear regression and then define additional properties to remedy the situation.

A failure of convexity

In our counter-example, we'll have X=Y=R (note that this is a convex set). Our hypothesis class H will be that of all (small) linear predictors, i.e. just H={fα:x↦α⋅x|α∈[−1,1]}. The reason that we only allow small predictors is that our final formulation of the learnable class will also demand that H is a bounded set, so this example demonstrates that even boundedness + convexity is still not enough.

We've previously defined real loss functions as taking a hypothesis and returning the real error, and empirical loss functions as taking a hypothesis and a training sequence and returning the empirical error. Now we'll look at point-based loss functions (not bolded as it's not an official term, but I'll be using it a lot) which measure the error of a hypothesis on a single point only, i.e. they have the form ℓ(x,y):H→R for some (x,y)∈X×Y. To be more specific, we will turn the squared loss function defined in the previous post into a point-based loss function. Thus we will have ℓ(2)(x,y)(hα)=||α⋅x−y||2=(αx−y)2, where the last equality holds because we're in the one-dimensional case. We will only care about two points (all else will have probability mass 0), namely these two:

That's the point (1,−1) at the left and one all the way to the right at (1/μ,0). With μ, think of an extremely small positive number, so that 1/μ is quite large.

If this class were PAC learnable, then there would be a learner A such that, for all ϵ,δ∈(0,1), if the size of the training sequence is at least m∗(ϵ,δ), then for all probability distributions over X×Y, with probability at least 1−δ over the choice of S, the error of A(S) would be at most ϵ larger than that of the best classifier.

So to prove that it is not learnable, we first assume we have some learner A. Then we get to set some ϵ and δ and construct a probability distribution DA based on A. Finally, we have to prove that A fails on the problem given the choices of ϵ and δ and DA. That will show that the problem is not PAC learnable.

We consider two possible candidates for DA. The first is DL which has all probability mass on the point (1,−1) on the left. The second is D?, which has almost all probability mass on the point (1,−1), but also has μ probability mass on the point (1/μ,0). As mentioned, μ∈R+ will be extremely small; so small that the right point will be unlikely to ever appear in the training sequence.

If the right point doesn't appear in the training sequence, then the sequence consists of only the left point sampled over and over again. In that case, A cannot differentiate between DL and D?, so in order to succeed, it would have to output a hypothesis which performs well with both distributions – which as we will show is not possible.

Given our class H, the hypothesis A(S) must be of the form hα for some α∈[−1,1]. Recall that the classifier is supposed to predict the y-coordinate of the points. Thus, for the first point, α=−1 would be the best choice (since −1⋅1=−1) and for the second point, α=0 would be the best choice (since 0⋅1/μ=0).

Now if α≤−0.5, then we declare that DA=D?. In this case (assuming that the second point doesn't appear in the training sequence), there will be a μ chance of predicting the value α⋅1/μ=α/μ≤−1/(2μ), which, since we use the squared loss function, leads to an error of at least 1/(4μ2), and thus the expected error is at least μ⋅1/(4μ2)=1/(4μ), which, because μ is super tiny, is a very large number. Conversely, the best classifier would be at least as good as the classifier with α=0, which would only have error 1−μ (for the left point), which is about 1 and thus a much smaller number.

Conversely, if α>−0.5, we declare that DA=DL, in which case the error of A(S) is at least (−0.5−(−1))2=1/4, whereas the best classifier (with α=−1) has zero error.

Thus, we only need to choose some ϵ<1/4 and an arbitrary δ. Then, given the sample size m, we set μ small enough such that the training sequence is less than δ likely to contain the second point. This is clearly possible: we can make μ arbitrarily small; if we wanted, we could make it so small that the probability of sampling the second point is <δ/100. That concludes our proof.

Why was this negative result possible? It comes down to the fact that we were able to make the error of the first classifier with α≤−0.5 large via a super unlikely sample point with super high error – so the problem is the growth rate of the loss function. As long as the loss function grows so quickly that, while both giving a point less probability mass and moving it further to the right, the expected error goes up, well then one can construct examples with arbitrarily high expected error. (As we've seen, the expected error in the case of α≤−0.5 is at least 1/(4μ), i.e. a number that grows arbitrarily large as μ→0.)
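To see the blow-up concretely, here is a sketch of the expected squared loss under D? (the helper expected_loss is mine; recall that D? puts mass 1−μ on the point (1,−1) and mass μ on (1/μ,0)):

```python
def expected_loss(alpha, mu):
    """Expected squared loss of h_alpha under D?:
    mass 1-mu on the point (1, -1), mass mu on (1/mu, 0)."""
    left = (alpha * 1 - (-1)) ** 2        # loss on the left point (1, -1)
    right = (alpha * (1 / mu) - 0) ** 2   # loss on the right point (1/mu, 0)
    return (1 - mu) * left + mu * right

mu = 1e-6
# If only the left point is ever sampled, fitting it gives alpha = -1 ...
print(expected_loss(-1.0, mu))   # ... whose expected error is about 1/mu: huge
print(expected_loss(0.0, mu))    # while alpha = 0 has expected error about 1
```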

Lipschitzness & Smoothness

There are at least two ways to formalize the requirement that the loss function is somehow "bounded". They're called Lipschitzness and Smoothness, and both are very simple.

Lipschitzness says that a function cannot grow too fast, i.e.:

A function f:Rd→R is ρ-Lipschitz iff |f(y)−f(x)|≤ρ||y−x|| ∀x,y∈Rd

If f is differentiable, then a way to measure maximum growth is the gradient, because the gradient points into the direction of fastest growth. Thus, one has the equivalent characterization:

A differentiable function f:Rd→R is ρ-Lipschitz iff ||∇f(x)||≤ρ for all x∈Rd

However, non-differentiable functions can be Lipschitz; for example, the absolute value function |x| on the real numbers is 1-Lipschitz.

Conversely, smoothness is about the change of change. Thus, |x| is definitely not smooth since the derivative jumps from −1 to 1 across a single point (smoothness is only defined for differentiable functions). On the other hand, the function x2 is smooth on all of R. The formal definition simply moves Lipschitzness one level down, i.e.

A differentiable function f:Rd→R is β-smooth iff its gradient is β-Lipschitz

Which is to say, iff ||∇f(y)−∇f(x)||≤β||y−x|| for all x,y∈Rd. In the one-dimensional case, the gradient equals the derivative, and if the derivative is itself differentiable, then smoothness can be characterized in terms of the second derivative. Thus, a twice-differentiable function f:R→R is β-smooth iff |f′′(x)|≤β for all x∈R.
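Both notions can be probed numerically by maximizing the difference quotient |f(y)−f(x)|/||y−x|| over a grid; a rough sketch (helper name mine):

```python
# Estimate the best Lipschitz constant of f as the largest
# difference quotient |f(y) - f(x)| / |y - x| over a grid.
def lipschitz_estimate(f, xs):
    return max(
        abs(f(y) - f(x)) / abs(y - x)
        for x in xs for y in xs if x != y
    )

xs = [i / 50 - 2 for i in range(201)]   # grid on [-2, 2]

print(lipschitz_estimate(abs, xs))              # 1.0: |x| is 1-Lipschitz
print(lipschitz_estimate(lambda x: x * x, xs))  # near 4: grows with the domain
# x^2 is 2-smooth: its derivative 2x is 2-Lipschitz.
print(lipschitz_estimate(lambda x: 2 * x, xs))  # 2.0
```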

One now defines the class of convex Lipschitz bounded problems and that of convex smooth bounded problems. They both require that H is a convex and bounded set (so H=Rd would not suffice), typically a familiar set like the ball Bd(0,M), and that, for all (x,y)∈X×Y, the point-based loss function ℓ(x,y):H→R is ρ-Lipschitz (in the former case) or β-smooth and nonnegative (in the latter case). If all this is given, the class is called convex Lipschitz bounded with parameters (M,ρ); or convex smooth bounded with parameters (M,β).

In the previous example, the hypothesis class could be represented by the set [−1,1], which is both convex and bounded. (In that example, we thought of it as a set of functions, each of which is fully determined by an element of [−1,1]; now we think of it as the set [−1,1] itself.) Each point-based loss function ℓ(2)(x,y) is convex (and non-negative). However, for any number x∈R+, the loss function ℓ(2)(x,y) is defined by the rule ℓ(2)(x,y)(α)=(α⋅x−y)2, and the gradient of this function with respect to α (which equals the derivative since we're in the one-dimensional case) is 2x(α⋅x−y). For each fixed point (x,y) this is bounded on [−1,1], but the bound grows with x, so there is no single parameter ρ for which all of these functions are ρ-Lipschitz. Furthermore, the second derivative is 2x2. This means that each particular function induced by the point (x,y) is 2x2-smooth, but there is no parameter β such that all functions are β-smooth.
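A few lines make the dependence on x explicit. Note the chain-rule factor: the derivative of (α⋅x−y)2 in α is 2x(α⋅x−y), so the per-point slope bound grows like 2x2 (the helper dloss is mine):

```python
# Point-based squared loss and its derivative in alpha; the chain rule
# gives a factor x: l(alpha) = (alpha*x - y)^2, l'(alpha) = 2x(alpha*x - y).
def dloss(alpha, x, y):
    return 2 * x * (alpha * x - y)

# Each fixed point (x, y) gives a Lipschitz function of alpha on [-1, 1],
# but the best constant grows with x, so no single rho covers all points.
for x in [1, 10, 100]:
    slope = max(abs(dloss(a / 10, x, 0.0)) for a in range(-10, 11))
    print(x, slope)   # the maximal slope is 2*x**2, attained at alpha = +/-1
```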

Surrogate Loss Functions

We are now done with task (1), defining the special case. Task (2) would be demonstrating that both convex Lipschitz bounded and convex smooth bounded problems are, in fact, PAC learnable with no further conditions. This is done by defining an algorithm and then proving that the algorithm works (i.e., learns any instance of the class with the usual PAC learning guarantees). The algorithm we will look at for this purpose (which does learn both classes) is an implementation of Stochastic Gradient Descent; however we'll do this in the next post rather than now. For this final chapter, we will instead dive into (3), i.e. find an example of how the ability to learn these two classes is useful even for problems that don't naturally fit into either of them.

Recall the case of binary linear classification. We have a set of points in some high-dimensional space X=Rd, a training sequence where points are given binary labels (i.e. Y={−1,1}) and we wish to find a hyperplane that performs well in the real world. We've already discussed the Perceptron algorithm and also reduced the problem to linear programming; however, both approaches have assumed that the problem is separable.

We're not going to find a perfect solution for the general case, because one can show that the problem is NP-hard. However, we can find a solution that approximates the optimal predictor. The approach here is to define a surrogate loss function, which is a loss function ℓ∗ that (a) upper-bounds the real loss ℓ0−1, and (b) has nicer properties than the real loss, so that minimizing it is easier. In particular, we would like for it to be a member of one of the two learnable classes we have introduced. Our point-based loss function for ℓ0−1 has the form ℓ0−1(x,y)(ha):=1[ha(x)≠y], where 1[B] for a boolean statement B is 1 iff B is true and 0 otherwise.

Recall that each hyperplane is fully determined by one vector in Rd, hence the notation ha. If we represent H directly as Rd and assume d=1, then the graph of ℓ0−1(x,y) looks like this...

... because in d=1 and the homogeneous case, the classifier is determined by a single number; if this number is positive it will label all positive points with 1; if it's negative, it will label all negative points with 1. If the x-coordinate of the point in question is positive with label 1 or negative with label −1 (i.e. x>0 and y=1; or x<0 and y=−1), then the former case is the correct one and we get this loss function. Otherwise, the loss function would jump from 0 to 1 instead.

Obviously, d=1 is silly, but it already demonstrates that this loss function is not convex (it makes a turn to the right, and it's easy to find a segment which connects two points of the graph and doesn't lie above the graph). But consider the alternative loss function ℓ∗(x,y):

This new loss function can be defined by the rule ℓ∗(x,y)(ha):=max(0,1−⟨a,x⟩y). In the picture, the horizontal axis corresponds to a and we have x>0 and y=1. This loss function is easily seen to be convex, non-negative, and not at all smooth. It is also ||x||-Lipschitz. Thus, the problem with X=Rd is not convex Lipschitz bounded, but if we take X=Bd(0,ρ) and also H=Bd(0,M) for some M,ρ∈R+, then it does become a member of the convex-Lipschitz-bounded class with parameters M and ρ, and we can learn via e.g. stochastic gradient descent.
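In the d=1 homogeneous case both losses fit in a few lines, and property (a), the upper bound, can be checked directly (function names are mine; I count a prediction of exactly 0 as an error):

```python
def zero_one_loss(a, x, y):
    """l_0-1 in d = 1: error iff sign(a*x) disagrees with y (0 counts as error)."""
    return 0.0 if a * x * y > 0 else 1.0

def hinge_loss(a, x, y):
    """Surrogate l*: max(0, 1 - <a, x>*y), convex in a and non-negative."""
    return max(0.0, 1.0 - a * x * y)

# Property (a): the surrogate upper-bounds the real loss at every hypothesis a.
x, y = 1.0, 1
print(all(hinge_loss(a, x, y) >= zero_one_loss(a, x, y)
          for a in [-2.0, -0.5, 0.0, 0.5, 2.0]))   # True
```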

Of course, this won't give us exactly what we want (although penalizing a predictor for being "more wrong" might not be unreasonable), so if we want to bound our loss (empirical or real) with respect to ℓ0−1, we will have to do it via ℓ0−1(h)=ℓ∗(h)+[ℓ0−1(h)−ℓ∗(h)], where the second term is the difference between both loss functions. If ℓ0−1(h)−ℓ∗(h) is small, then this approach will perform well.

(This is part five in a sequence on Machine Learning based on this book. Click here for part 1.)

The first three posts of this sequence have defined PAC learning and established some theoretical results, problems (like overfitting), and limitations. While that is helpful, it doesn't actually answer the question of how to solve real problems (unless the brute force approach is viable). The first way in which we approach that question is to study

particular classesof problems and prove that they are learnable. For example, in the previous post, we've looked (among other things) atlinear regression, where the loss function has the form ℓ:Rd→R and is linear. In this post, we focus onwhere the loss function also has the above form and is convex.convex learning problems,## Convexity

We begin with sets rather than functions.

## Convex sets

The condition that M is part of a vector space is missing in the book, but it is key to guarantee that a line between two points exists. Convexity cannot be defined for mere topological spaces, or even metric spaces. In our case, all of our sets will live in Rd for some d∈N+.

As an example, we can look at letters as a subset of the plane. None of the letters in this font are quite convex – l and I are closest but not quite there. The only convex symbols in this post that I've noticed are . and ' and | and –.

Conversely, every regular filled polygon with n corners is convex. The circle is not convex (

notwo points have a line segment which is contained in the circle), but the disc (filled circle) is convex. The disc with an arbitrary set of points on its boundary (i.e. the circle) taken out remains convex. The disc with any other point taken out is not convex, neither is the disc with any additional point added. You get the idea. (To be precise on the last one, the mathematical description of the disc is D={x∈R2|||x||≤1}, so there is no way to add a single point that somehow touches the boundary.)## Convex functions

Informally, a function f:Rd→R is convex iff the set of all points

on and above the functionis convex as a subset of Rd+1, where the dependent variable goes up/downward.(Here, the middle function (x3) is not convex because the red line segment is not in the blue set, but the left (|x|) and the right (x2) are convex.)

The formal definition is that a function f:Rd→R is convex iff for all x,y∈Rd, the equation f(x+α(y−x))≤f(x)+α[f(y)−f(x)] holds for all α∈[0,1]. This says that a line segment connecting two points on the graph always lies above the graph.

If d=1 as was the case in all of my pictures, then f is convex iff the little pixie flying along the function graph never turns rightward. This is the case iff f′ is monotonically increasing, which is the case iff f′′(x)≥0 for all x∈R.

The main reason why convexity is a desirable property is that, for a convex function,

every local minimum is a global minimum.Here is a proof:Suppose that x∈Rd is a local minimum. Then we find some ball Bd(x,ϵ):={p∈Rd|||p−x||≤ϵ} around x such that f(y)≥x for all y in the ball (this is what it means to be a local minimum in Rd). Now let z be an arbitrary point in Rd; we show that its function value can't lie below that of x. Imagine the line segment from x to z. A part of it must lie in our ball, so we find some (small) δ∈R+ such that x+δ[z−x]∈Bd(x,ϵ). Then (because x is our local minimum), we have that f(x)≤f(x+δ[z−x]). By convexity of f we have f(x+δ[z−x])≤f(x)+δ[f(z)−f(x)], so taken together we obtain the equation

f(x)≤f(x)+δ[f(z)−f(x)]

Or equivalently δ[f(z)−f(x)]≥0 which is to say that δf(z)≥δf(x) which is to say that f(z)≥f(x).

If there are several local minima, then there are several global minima, then one can draw a line segment between them that inevitably cannot go up or down (because otherwise one of the global minima wouldn't be a global minimum), so really there is just one global minimum. This is all about the difference between ≤ and <. The simplest example is a constant function – it is convex, and everywhere is a global minimum.

## Jensen's Inequality

The key fact about convex functions, I would argue, is

Jensen's inequality:If you look at the inequality above, you might notice that it is almost the definition of linearity, except for the condition ∑ni=1αi=1 and the fact that we have ≤ instead of =. So convex functions

fulfill the linearity property as an inequality rather than an equality(almost). In particular, linear functions are convex. Conversely, concave functions (these are functions where the ≤ in the definition of convex functions is a ≥) also fulfill the above property as an inequality, only the the sign does again turn around. In particular, linear functions are concave. To refresh your memory, here is the definition of convexity:f(x+α(y−x))≤f(x)+α[f(y)−f(x)]∀x,y∈X,α∈[0,1]

So to summarize: convex functions never turn rightward, concave functions never turn leftward, and the intersection of both does neither, i.e., always goes straight, i.e., is linear. Looking at convexity and concavity as a generalization of linearity might further motivate the concept.

Terms of the form x+a(y−x), which one sees quite often (for example in defining points on a line segment), can be equivalently written as (1−a)x+ay. I think the first form is more intuitive; however, the second one generalizes a bit better. We see that x and y are given weights, and those weights sum up to 1. If one goes from 2 weighted values to n weighted values (still all non-negative), one gets Jensen's inequality. Thus, the statement of Jensen's inequality is that if you take any number of points on the graph and construct a weighted mean, that resulting point still lies above the graph. See wikipedia's page for a simple proof via induction.

## Guaranteeing learnability

Recall that we are trying to find useful solvable special cases of the setting

minimize a loss function of the form ℓ:Rd→R. This can be divided into three tasks:(1) define the special case

(2) demonstrate that this special case is indeed solvable

(3) apply the class as widely as possible

This chapter is about (1). (When I say "chapter" or "section", I'm referring to the level-1 and level-2 headlines

of this postas visible in the navigation at the left.) The reason why we aren't already done with (1) is that convexity of the loss function alone turns out to be insufficient to guarantee PAC learnability. We'll discuss a counter-example in the context oflinear regressionand then define additional properties to remedy the situation.## A failure of convexity

In our counter-example, we'll have X=Y=R (note that this is a convex set). Our hypothesis class H will be that of all (small)

linear predictors, i.e. just H={fα:x↦α⋅x|α∈[−1,1]}. The reason that we only allow small predictors is that our final formulation of the learnable class will also demand that H is a bounded set, so this example demonstrates that even boundedness + convexity is still not enough.We've previously defined

real loss functionsas taking a hypothesis and returning thereal error, andempirical loss functionsas taking a hypothesis and a training sequence and returning theempirical error. Now we'll lookpoint-based loss functions(not bolded as it's not an official term, but I'll be using it a lot) which measure the error of a hypothesis on a single point only, i.e. they have the form ℓ(x,y):H→R for some (x,y)∈X×Y. To be more specific, we will turn thesquared loss functiondefined in the previous post into a point-based loss function. Thus we will have ℓ(2)(x,y)(hα)=||α⋅x−y||2=(αx−y)2, where the last equality holds because we're in the one-dimensional case. We will only care about two points (all else will have probability mass 0), namely these two:That's the point (1/−1) at the left and one all the way to the right at (1/μ,0). With μ, think of an extremely small positive number, so that 1/μ is quite large.

If this class were PAC learnable, then there would be a learner A such that, for all ϵ,δ∈(0,1), if the size of the training sequence is at least m∗(ϵ,δ), then for all probability distributions over X×Y, with probability at least 1−δ over the choice of S, the error of A(S) would be at most ϵ larger than that of the best classifier.

So to prove that it is not learnable, we first assume we have some learner A. Then we get to set some ϵ and δ and construct a probability distribution DA based on A. Finally, we have to prove that A fails on the problem given the choices of ϵ and δ and DA. That will show that the problem is not PAC learnable.

We consider two possible candidates for DA. The first is DL which has all probability mass on the point (1/−1) on the left. The second is D?, which has almost all probability mass on the point (1/−1), but also has μ probability mass on the point (1/μ,0). As mentioned, μ∈R+ will be extremely small; so small that the right point will be unlikely to ever appear in the training sequence.

If the right point doesn't appear in the training sequence, then the sequence consists of only the left point sampled over and over again. In that case, A cannot differentiate between DL and D?, so in order to succeed, it would have to output a hypothesis which performs well with both distributions – which as we will show is not possible.

Given our class H, the hypothesis A(S) must be of the form hα for some α∈R. Recall that the classifier is supposed to predict the y-coordinate of the points. Thus, for the first point, α=−1 would be the best choice (since −1⋅1=−1) and for the second point, α=0 would be the best choice (since 0⋅1/μ=0).

Now if α≤−0,5, then we declare that DA=D?. In this case (assuming that the second point doesn't appear in the training sequence), there will be a μ chance of predicting the value α⋅1/μ=α/μ≤−1/2μ, which, since we use the squared loss function, leads to an error of at least 14μ2, and thus the

expected erroris at least μ14μ2=14μ, which, because μ is super tiny, is a very large number. Conversely, the best classifier would be at least as good as the classifier with α=0, which would only have error 1−μ (for the left point), which is about 1 and thus a much smaller number.Conversely, if α>−0.5, we declare that DA=DL, in which case the error of A(S) is at least (−0.5−(−1))2=14, whereas the best classifier (with α=−1) has zero error.

Thus, we only need to choose some ϵ<14 and an arbitrary δ. Then, given the sample size m, we set μ small enough such that the training sequence is less than δ likely to contain the second point. This is clearly possible: we can make μ arbitrarily small; if we wanted, we could make it so small that the probability of sampling the second point is <δ100. That concludes our proof.

Why was this negative result possible? It comes down to the fact that we were able to make the error of the first classifier with α<−0.5 large via a super unlikely sample point with super2 high error – so the problem is the growth rate of the loss function. As long as the loss function grows so quickly that, while both giving a point less probability mass and moving it further to the right, the expected error goes up, well then one can construct examples with arbitrarily high expected error. (As we've seen, the expected error in the case of α<−0.5 is at least 14μ, i.e. a number that grows arbitrarily large as μ→0.)

## Lipschitzness & Smoothness

There are at least two ways to formalize a requirement such that the loss function is somehow "bounded". They're called

LipschitznessandSmoothness,and both are very simple.Lipschitzness says that a function cannot grow too fast, i.e.:

If f is differentiable, then a way to measure maximum growth is the gradient, because the gradient points into the direction of fastest growth. Thus, one has the equivalent characterization:

However, non-differentiable functions can be Lipschitz; for example, the absolute value function |x| on the real numbers is 1-Lipschitz.

Conversely,

smoothnessis about the change of change. Thus, |x| is definitely not smooth since the derivative jumps from −1 to 1 across a single point (smoothness is only defined for differentiable functions). On the other hand, the function x2 is smooth on all of R. The formal definition simply moves Lipschitzness one level down, i.e.Which is to say, iff ||∇f(y)−∇f(x)||≤β||y−x|| for all x,y∈Rd. In the one-dimensional case, the gradient equals the derivative, and if the derivative is itself differentiable, then smoothness can be characterized in terms of the second derivative. Thus, a twice-differentiable function f:R→R is β-smooth iff |f′′(x)|≤β for all x∈R.

One now defines the class of

problems and that ofconvex Lipschitz boundedproblems. They both require that H has a structure as a familiar set like Bd(0,M), that it isconvex smooth boundedconvexandbounded(so H=Rd would not suffice), and that, for all (x,y)∈X×Y, the point-based loss function ℓ(x,y):H→R is ρ-Lipschitz (in the former case) or β-smoothand nonnegative(in the latter case). If all this is given, the class is called convex Lipschitz bounded with parameters (M,ρ); or convex smooth bounded with parameters (M,β).In the previous example, the hypothesis could be represented by the set [0,1], which is both convex and bounded. (In that example, we thought of it as a set of functions, each of which fully determined by an element in [0,1]; now we think of it as the set [0,1] itself.) Each point-based loss function ℓ(2)(x,y) is convex (and non-negative). However, for any number x∈R+, the loss function ℓ(2)(x,y) is defined by the rule ℓ(2)(x,y)(α)=(α⋅x−y)2, and the gradient of this function with respect to α (which equals the derivative since we're in the one-dimensional case) is 2(α⋅x−y). Since this gets large as α gets large, the function is not Lipschitz. Furthermore, the second derivative is 2x. This means that each particular function induced by the point (x,y) is 2x-smooth, but there is no parameter β such that all functions are β-smooth.

## Surrogate Loss Functions

We are now done with task (1), defining the special case. Task (2) would be demonstrating that both convex Lipschitz bounded and convex smooth bounded problems are, in fact, PAC learnable with no further conditions. This is done by defining an algorithm and then proving that the algorithm works (i.e., learns any instance of the class with the usual PAC learning guarantees). The algorithm we will look at for this purpose (which does learn both classes) is an implementation of Stochastic Gradient Descent; however, we'll do this in the next post rather than now. For this final chapter, we will instead dive into (3), i.e. find an example of how the ability to learn these two classes is useful even for problems that don't naturally fit into either of them.

Recall the case of binary linear classification. We have a set of points in some high-dimensional space X=Rd, a training sequence where points are given binary labels (i.e. Y={−1,1}), and we wish to find a hyperplane that performs well in the real world. We've already discussed the Perceptron algorithm and also reduced the problem to linear programming; however, both approaches have assumed that the problem is separable. We're not going to find a perfect solution for the general case, because one can show that the problem is NP-hard. However, we can find a solution that approximates the optimal predictor. The approach here is to define a surrogate loss function, which is a loss function ℓ∗ that (a) upper-bounds the real loss ℓ0−1, and (b) has nicer properties than the real loss, so that minimizing it is easier. In particular, we would like it to be a member of one of the two learnable classes we have introduced. Our point-based loss function for ℓ0−1 has the form ℓ0−1(x,y)(ha):=1ha(x)≠y, where 1B for a boolean statement B is 1 iff B is true and 0 otherwise. Recall that each hyperplane is fully determined by one vector in Rd, hence the notation ha. If we represent H directly as Rd and assume d=1, then the graph of ℓ0−1(x,y) looks like this...

... because in d=1 and the homogeneous case, the classifier is determined by a single number; if this number is positive, it will label all positive points with 1; if it's negative, it will label all negative points with 1. If the x-coordinate of the point in question is positive with label 1, or negative with label −1 (i.e., x>0 and y=1, or x<0 and y=−1), then the former case is the correct one and we get this loss function. Otherwise, the loss function would jump from 0 to 1 instead.

Obviously, d=1 is silly, but it already demonstrates that this loss function is not convex (it makes a turn to the right, and it's easy to find a segment which connects two points of the graph and doesn't lie above the graph). But consider the alternative loss function ℓ∗(x,y):
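A minimal numerical sketch of this convexity failure (assuming the sign convention ha(x)=sign(a⋅x), with a=0 treated as a negative prediction, and the case x>0, y=1 from the text): the chord between the graph points at a=−1 and a=1 lies below the graph at the midpoint a=0.

```python
# 0-1 loss of the 1-d sign classifier h_a on a point with x > 0 and y = 1.
# The convention for a = 0 (predict -1) is an assumption for this sketch.

def zero_one_loss(a, x=1.0, y=1):
    prediction = 1 if a * x > 0 else -1
    return 0 if prediction == y else 1

a1, a2 = -1.0, 1.0
midpoint_loss = zero_one_loss(0.5 * (a1 + a2))           # loss at a = 0
chord_midpoint = 0.5 * (zero_one_loss(a1) + zero_one_loss(a2))
print(midpoint_loss, chord_midpoint)    # 1 0.5 -> the chord dips below the graph
print(midpoint_loss <= chord_midpoint)  # False: convexity fails
```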

This new loss function can be defined by the rule ℓ∗(x,y)(ha):=max(0,1−⟨a,x⟩y). In the picture, the horizontal axis corresponds to a and we have x>0 and y=1. This loss function is easily seen to be convex, non-negative, and not at all smooth. It is also ||x||-Lipschitz. Thus, the problem with X=Rd is not convex Lipschitz bounded, but if we take X=Bd(0,ρ) and also H=Bd(0,M) for some M,ρ∈R+, then it does become a member of the convex-Lipschitz-bounded class with parameters M and ρ, and we can learn via e.g. stochastic gradient descent.
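The two claims that matter here can be sanity-checked in one dimension. The sketch below (helper names hypothetical) verifies that the hinge surrogate upper-bounds the 0-1 loss pointwise, and that its slope never exceeds |x|, matching the ||x||-Lipschitz claim.

```python
# 1-d hinge surrogate: ell*(x, y)(a) = max(0, 1 - a*x*y). Names are my own.

def hinge_loss(a, x, y):
    return max(0.0, 1.0 - a * x * y)

def zero_one(a, x, y):
    return 0 if (1 if a * x > 0 else -1) == y else 1

# (a) The surrogate dominates the 0-1 loss pointwise:
print(all(
    hinge_loss(a, x, y) >= zero_one(a, x, y)
    for a in (-2.0, -0.5, 0.0, 0.5, 2.0)
    for (x, y) in [(1.0, 1), (3.0, -1)]
))  # True

# (b) The (sub)gradient is -x*y where the hinge is active and 0 elsewhere,
# so its magnitude never exceeds |x|:
def hinge_slope(a, x, y):
    return -x * y if 1.0 - a * x * y > 0 else 0.0

print(all(abs(hinge_slope(a, 3.0, -1)) <= 3.0 for a in (-2.0, 0.0, 2.0)))  # True
```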

Of course, this won't give us exactly what we want (although penalizing a predictor for being "more wrong" might not be unreasonable), so if we want to bound our loss (empirical or real) with respect to ℓ0−1, we will have to do it via ℓ0−1(h)=ℓ∗(h)+[ℓ0−1(h)−ℓ∗(h)], where the second term is the difference between the two loss functions. If the gap |ℓ0−1(h)−ℓ∗(h)| is small, then this approach will perform well.
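As a toy illustration of this decomposition (the sample below is made up), one can compute both empirical losses for a fixed predictor and read off the gap term directly; since ℓ∗ dominates ℓ0−1, the gap is never positive.

```python
# Compare empirical 0-1 and hinge losses on a made-up 1-d sample.

def hinge(a, x, y):
    return max(0.0, 1.0 - a * x * y)

def zo(a, x, y):
    return 0 if (1 if a * x > 0 else -1) == y else 1

sample = [(0.5, 1), (2.0, 1), (-1.0, -1), (0.2, -1)]  # hypothetical (x, y) pairs
a = 1.0
emp_zo = sum(zo(a, x, y) for x, y in sample) / len(sample)
emp_hinge = sum(hinge(a, x, y) for x, y in sample) / len(sample)
print(emp_zo, emp_hinge, emp_zo - emp_hinge)  # the gap term is <= 0
```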