11.2 – Posing the dual problem

Let’s continue and work towards posing a dual for our optimization problem. Specifically, we’ll focus on the constraints that we have. For each training example, we have

y^{(i)}\left( w^T x^{(i)}+b \right) \geq 1

Shuffling terms around,

g_i(w) = -y^{(i)}\left( w^T x^{(i)}+b \right) + 1 \leq 0

Note that if the functional margin y^{(i)}\left( w^T x^{(i)}+b \right) is exactly 1, then g_i(w)=0, and the corresponding \lambda_i is allowed to be nonzero. Conversely, the KKT complementary slackness condition \lambda_i g_i(w) = 0 says that any point whose functional margin is strictly greater than 1 has g_i(w) < 0 and therefore \lambda_i = 0. Let's recall the scaling condition we imposed on w (in the first post):

\min_i y^{(i)}\left( w^T x^{(i)}+b \right)=1

What this means is that at the points closest to the optimal hyperplane, the functional margin is 1. This figure depicts it perfectly:

[Figure: the maximum-margin hyperplane and its two margin lines; the points lying on the margin lines are the support vectors. By Larhmam (own work), CC BY-SA 4.0.]

Don’t worry about the figure’s equations containing -b: that’s just a different sign convention for the intercept term. In this example, only three of the \lambda_i will be nonzero at the optimal solution. We call the corresponding three points support vectors. Sometimes you’ll also see the margin lines passing through them called the support vectors. The important thing is that these are the points where the functional margin is equal to 1.

Let’s now frame the Lagrangian for our optimization problem.

\mathcal{L}(w, b, \lambda) = \frac{1}{2} \left\Vert w \right\Vert^2 - \sum\limits_{i=1}^m \lambda_i \left( y^{(i)}(w^T x^{(i)}+b)-1 \right)

From the post on convex optimization, recall that the dual problem is \max_{\boldsymbol\lambda \geq 0}\min_{w, b} \mathcal{L}. So let’s first minimize the Lagrangian over the primal variables. To do so, we compute the partial gradients of the Lagrangian with respect to w and b and set them to 0. Why aren’t we doing this with respect to \lambda_i as well? The inner minimization is only over the primal variables; the \lambda_i are the dual variables, which stay fixed here and are maximized over afterwards. (If you did set \partial\mathcal{L}/\partial\lambda_i = 0, you’d be demanding y^{(i)}(w^T x^{(i)}+b)=1 for every i, i.e., that every point is a support vector, which is clearly not the case in general.)
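For reference, the two term-by-term gradients that produce the next line are the standard identities

\nabla_w \left( \frac{1}{2}\left\Vert w \right\Vert^2 \right) = w, \qquad \nabla_w \left( \lambda_i \left( y^{(i)}(w^T x^{(i)}+b)-1 \right) \right) = \lambda_i y^{(i)} x^{(i)}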

\nabla_w \mathcal{L}(w, b, \boldsymbol\lambda) = w - \sum\limits_{i=1}^m \lambda_i y^{(i)}x^{(i)}=0

The result of this, which is below, is quite important. Let’s call this equation 1.

w = \sum\limits_{i=1}^m \lambda_i y^{(i)}x^{(i)}

Similarly, setting the partial derivative with respect to b to 0 gives

\sum\limits_{i=1}^m \lambda_i y^{(i)} = 0

Let’s call this equation 2. We now use equation 1 in our Lagrangian. This gives us

\begin{aligned} \mathcal{L}(w, b, \boldsymbol\lambda) &= \frac{1}{2}\left\Vert w \right\Vert^2 - \sum\limits_{i=1}^m \lambda_i \left( y^{(i)} (w^T x^{(i)}+b)-1 \right) \\ &= \frac{1}{2}w^T w - \sum\limits_{i=1}^m \lambda_i y^{(i)} \sum\limits_{j=1}^m \lambda_j y^{(j)}x^{(j)T}x^{(i)} - \sum\limits_{i=1}^m \lambda_i y^{(i)}b + \sum\limits_{i=1}^m \lambda_i \\ &= \frac{1}{2}\sum\limits_{i=1}^m \lambda_i y^{(i)}x^{(i)T} \sum\limits_{j=1}^m \lambda_j y^{(j)}x^{(j)} - \sum\limits_{i=1}^m \lambda_i y^{(i)} \sum\limits_{j=1}^m \lambda_j y^{(j)}x^{(j)T}x^{(i)} + \sum\limits_{i=1}^m \lambda_i \\ &= \frac{1}{2}\sum\limits_{i=1}^m \sum\limits_{j=1}^m \lambda_i \lambda_j y^{(i)}y^{(j)} x^{(i)T}x^{(j)} - \sum\limits_{i=1}^m \sum\limits_{j=1}^m \lambda_i \lambda_j y^{(i)}y^{(j)} x^{(j)T}x^{(i)} + \sum\limits_{i=1}^m \lambda_i \\ &= \sum\limits_{i=1}^m \lambda_i - \frac{1}{2}\sum\limits_{i=1}^m \sum\limits_{j=1}^m \lambda_i \lambda_j y^{(i)}y^{(j)} x^{(i)T}x^{(j)} \end{aligned}

Let’s look at this derivation.

  • The first step uses the observation that \left\Vert w \right\Vert^2 = w^T w, substitutes equation 1 for w inside the second sum, and expands the product into three sums.
  • The next step substitutes equation 1 for w in the first term as well, and uses equation 2, so the third sum (the one involving b) is 0.
  • Finally, we shuffle terms around and note that the first two double sums are really the same. (A quick numerical check of this identity is sketched right after this list.)
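As that sanity check, here is a minimal numpy sketch on made-up data (the specific numbers are arbitrary): pick multipliers that satisfy equation 2 by construction, build w from equation 1, and verify that the Lagrangian collapses to the simplified expression for any b.

import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3
X = rng.normal(size=(m, n))            # rows are the x^(i)
y = np.array([1, 1, 1, -1, -1, -1])    # labels y^(i)

# Random multipliers, rescaled so that sum_i lambda_i y^(i) = 0 (equation 2).
lam = rng.uniform(0.1, 1.0, size=m)
lam[y == -1] *= lam[y == 1].sum() / lam[y == -1].sum()

w = (lam * y) @ X                      # equation 1
b = rng.normal()                       # arbitrary: b drops out thanks to equation 2

lagrangian = 0.5 * w @ w - np.sum(lam * (y * (X @ w + b) - 1))
K = X @ X.T                            # K[i, j] = x^(i)T x^(j)
dual_value = lam.sum() - 0.5 * np.einsum('i,j,i,j,ij->', lam, lam, y, y, K)

print(np.isclose(lagrangian, dual_value))   # True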

With this in place, here is our final dual problem: maximize this function over \boldsymbol\lambda, subject to the constraints we’ve collected.

\begin{aligned} \max\limits_{\boldsymbol\lambda} & \sum\limits_{i=1}^m \lambda_i - \frac{1}{2}\sum\limits_{i=1}^m \sum\limits_{j=1}^m \lambda_i \lambda_j y^{(i)}y^{(j)} x^{(i)T}x^{(j)} \\ \text{s.t. } & \lambda_i \geq 0 \\ & \sum\limits_{i=1}^m \lambda_i y^{(i)} = 0 \end{aligned}
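To make this concrete, here is a minimal sketch that hands this dual to a general-purpose solver (scipy’s SLSQP) on a tiny made-up dataset. Real SVM implementations use specialized QP solvers (e.g., SMO) instead, so treat this purely as an illustration of the problem we just posed.

import numpy as np
from scipy.optimize import minimize

# Tiny, linearly separable toy data (made up for illustration).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
m = len(y)

K = X @ X.T                            # inner products x^(i)T x^(j)
Q = (y[:, None] * y[None, :]) * K      # Q[i, j] = y^(i) y^(j) x^(i)T x^(j)

def neg_dual(lam):
    # We minimize the negative of the dual objective.
    return 0.5 * lam @ Q @ lam - lam.sum()

res = minimize(neg_dual, x0=np.zeros(m), method='SLSQP',
               bounds=[(0.0, None)] * m,
               constraints=[{'type': 'eq', 'fun': lambda lam: lam @ y}])
lam = res.x                            # nonzero (up to tolerance) only for support vectors

w = (lam * y) @ X                      # equation 1
print(lam, w)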

If we solve this problem for \boldsymbol\lambda, we can plug it into equation 1 to get w. How do we get b? Look at our constraint:

y^{(i)}\left( w^T x^{(i)}+b \right) \geq 1

If y^{(i)}=-1, then dividing both sides by -1 (which flips the inequality),

w^T x^{(i)}+b \leq -1

so

b \leq -1 - w^T x^{(i)}

For the other case, y^{(i)}=1, the constraint directly gives

w^T x^{(i)}+b \geq 1

so that

b \geq 1 - w^T x^{(i)}

To combine these, recall that our original problem was to maximize the geometric margin, i.e., the functional margin divided by \left\Vert w \right\Vert (the denominator is always positive, so it doesn’t change which constraints end up tight). At the optimum, the constraints are tight for the closest points on each side. So the binding upper bound on b comes from the negative example with the largest w^T x^{(i)}, the binding lower bound comes from the positive example with the smallest w^T x^{(i)}, and both hold with equality. Adding the two equalities and dividing by two yields

b^* = -\frac{\max\limits_{i:y^{(i)} = -1} w^T x^{(i)} + \min\limits_{i:y^{(i)} = 1} w^T x^{(i)}}{2}

Intuitively, this says: find the positive training example with the smallest value of w^T x^{(i)} and the negative example with the largest one, and place the hyperplane right in the middle of the two (b controls the position of the hyperplane, while w controls its orientation). The figure below illustrates this intuition (it’s hand-drawn, so it’s probably not to scale).

[Figure: hand-drawn sketch showing the hyperplane placed midway between the closest positive and closest negative examples.]
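Continuing the toy example from the dual-solving sketch above (same made-up data, with lam and w already computed there), b^* is a one-liner:

neg_max = (X[y == -1] @ w).max()       # largest w^T x^(i) among negative examples
pos_min = (X[y == 1] @ w).min()        # smallest w^T x^(i) among positive examples
b = -(neg_max + pos_min) / 2           # the formula for b* above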

Note that to make predictions, we simply compute w^T x+b, and predict 1 if this is positive; otherwise, we predict -1. Using equation 1, we need to compute

\sum\limits_{i=1}^m \lambda_i y^{(i)}x^{(i)T}x+b

Thus, making predictions only requires the values of \boldsymbol\lambda (and b), together with inner products between x and the training examples. Notice that the \lambda_i are all 0 except for the support vectors. So in reality, we have only a few computations to make: the sum really runs only over the support vectors, not the entire training set.
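Here, as a last piece of the toy example above, is a sketch of prediction that sums only over the support vectors (the 1e-8 cutoff for "nonzero" is an arbitrary numerical tolerance):

sv = lam > 1e-8                        # support vectors: lambda_i numerically nonzero

def predict(x_new):
    # sum_i lambda_i y^(i) x^(i)T x_new + b, summed over support vectors only
    score = (lam[sv] * y[sv]) @ (X[sv] @ x_new) + b
    return 1 if score > 0 else -1

print(predict(np.array([2.5, 2.0])))   # a point on the positive side of the toy data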

The fact that, essentially, we only need to compute inner products will be useful when we deal with kernels, which help when the data is not linearly separable. Even when the feature space is very high-dimensional, for some feature spaces the inner products can be computed efficiently.
