Here I will work out the derivation of the dual and the dual update equations presented in Frome, Singer, Sha, and Malik, "Learning Globally-Consistent Local Distance Functions for Shape-Based Image Retrieval and Classification," ICCV 2007.
Deriving the Dual
From equation (4) in the paper, we can add Lagrange multipliers to get the following objective function:

$$\min_{w,\,\xi}\ \max_{\alpha \geq 0,\,\mu \geq 0,\,\lambda \geq 0}\ \frac{1}{2}\|w\|^2 + C\sum_{ijk}\xi_{ijk} - \sum_{ijk}\alpha_{ijk}\left(w^T x_{ijk} - 1 + \xi_{ijk}\right) - \sum_{ijk}\mu_{ijk}\xi_{ijk} - \lambda^T w$$

where the $x_{ijk}$ are the triplet difference vectors from the paper, the $\alpha_{ijk}$ are the multipliers for the margin constraints, the $\mu_{ijk}$ are the multipliers for the constraints $\xi_{ijk} \geq 0$, and $\lambda$ is the vector of multipliers for the constraint $w \geq 0$.
All the dual variables $\alpha_{ijk}$, $\mu_{ijk}$, and $\lambda$ are constrained to be nonnegative, and they are such that, if one of the original inequality constraints is violated, the inner maximum can be made $+\infty$, and so a violated constraint now corresponds to a positive infinite value in the minimization problem.
We form the dual by switching the $\min$ and $\max$. It is straightforward to show that the dual is a lower bound on the original primal problem. For a convex optimization problem with a strictly feasible point (Slater's condition), the dual optimum equals the primal optimum, so we can solve the dual instead of the primal.
To get from the above to equation (5) in the paper, we first take the derivative with respect to $\xi_{ijk}$. This yields

$$C - \alpha_{ijk} - \mu_{ijk} = 0,$$

so $\mu_{ijk} = C - \alpha_{ijk}$. Since $\mu_{ijk} \geq 0$, we now have the constraint $0 \leq \alpha_{ijk} \leq C$, and we can substitute the expression for $\mu_{ijk}$ into the above equation; the $\xi_{ijk}$ terms cancel, and we can simplify to get:

$$\min_{w}\ \max_{0 \leq \alpha \leq C,\ \lambda \geq 0}\ \frac{1}{2}\|w\|^2 - \sum_{ijk}\alpha_{ijk}\left(w^T x_{ijk} - 1\right) - \lambda^T w$$
Now we take the derivative with respect to $w$ and get:

$$w - \sum_{ijk}\alpha_{ijk}x_{ijk} - \lambda = 0$$

So therefore, $w = \lambda + \sum_{ijk}\alpha_{ijk}x_{ijk}$.
If we now plug this expression for $w$ back into the objective, expand the L2 norm, and simplify, we get equation (5) in the paper:

$$\max_{0 \leq \alpha \leq C,\ \lambda \geq 0}\ \sum_{ijk}\alpha_{ijk} - \frac{1}{2}\left\|\lambda + \sum_{ijk}\alpha_{ijk}x_{ijk}\right\|^2$$
Deriving the Updates
Now we can just do coordinate ascent on the dual variables. One thing to note is that, in the ordinary SVM formulation, there is an equality constraint, $\sum_i y_i\alpha_i = 0$, that ties together all the dual variables. Therefore, if you try to do coordinate ascent by optimizing a single dual variable at a time, its value is determined by all the rest, and the objective stays the same. Dual solvers for SVMs thus optimize a pair of dual variables at a time.
In this case, there is no such equality constraint, so we can use ordinary coordinate ascent. Taking the derivative with respect to $\lambda$, we have

$$\frac{\partial}{\partial\lambda}\left[\sum_{ijk}\alpha_{ijk} - \frac{1}{2}\Big\|\lambda + \sum_{ijk}\alpha_{ijk}x_{ijk}\Big\|^2\right] = -\left(\lambda + \sum_{ijk}\alpha_{ijk}x_{ijk}\right)$$

Maximizing over $\lambda$ subject to $\lambda \geq 0$ gives, componentwise,

$$\lambda = \max\left(0,\ -\sum_{ijk}\alpha_{ijk}x_{ijk}\right)$$

We also know that $\lambda \geq 0$, so this basically means that for any component of $\sum_{ijk}\alpha_{ijk}x_{ijk}$ that is negative, the corresponding component of $\lambda$ will be set to make the sum equal 0. Since $w = \lambda + \sum_{ijk}\alpha_{ijk}x_{ijk}$, this is just enforcing the positivity of $w$.
Now, taking the derivative with respect to a single $\alpha_{ijk}$, we get

$$\frac{\partial}{\partial\alpha_{ijk}}\left[\sum_{lmn}\alpha_{lmn} - \frac{1}{2}\Big\|\lambda + \sum_{lmn}\alpha_{lmn}x_{lmn}\Big\|^2\right] = 1 - x_{ijk}^T\left(\lambda + \sum_{lmn}\alpha_{lmn}x_{lmn}\right) = 1 - x_{ijk}^T w$$

Setting this to zero, adding and subtracting an additional $\alpha_{ijk}\|x_{ijk}\|^2$ on the right, and using our expressions for $w$ and $\lambda$ above, we get the update:

$$\alpha_{ijk} \leftarrow \min\left(C,\ \max\left(0,\ \alpha_{ijk} + \frac{1 - x_{ijk}^T w}{\|x_{ijk}\|^2}\right)\right)$$

where the min and max clip the step to the box constraint $0 \leq \alpha_{ijk} \leq C$.
Andrea Frome’s thesis gives more details on the algorithm. Essentially, there is both an outer loop and an inner loop. In the inner loop, the dual variables are randomly reordered and updated. If a variable becomes bounded, i.e. $\alpha_{ijk} = 0$ or $\alpha_{ijk} = C$, then it is no longer updated in the next iteration of the inner loop. The rationale is that, once a variable becomes bounded, it is not likely to change much, so time should be spent updating the other variables instead. The inner loop ends once the unbounded variables do not change, or the change in the dual objective is less than some threshold. At this point, all variables are marked as unbounded and the outer loop repeats.
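To make the procedure concrete, here is a minimal NumPy sketch of this coordinate ascent, under my own conventions (an illustration, not the paper's or the thesis's actual implementation): the rows of X are assumed to be the triplet difference vectors $x_{ijk}$, none of which is the zero vector, and the outer-loop stopping rule is simplified to a fixed iteration cap.

```python
import numpy as np

def learn_w(X, C, tol=1e-6, max_outer=20):
    """Coordinate ascent on the dual derived above (illustrative sketch).

    X : (n, d) array; each row is a triplet difference vector x_ijk.
    C : box constraint on the dual variables alpha.
    Returns the nonnegative weight vector w.
    """
    n, d = X.shape
    alpha = np.zeros(n)
    sq_norms = np.einsum('ij,ij->i', X, X)   # ||x_ijk||^2, precomputed
    v = np.zeros(d)                          # v = sum_ijk alpha_ijk x_ijk
    w = np.maximum(v, 0.0)                   # lambda = max(0, -v), so w = lambda + v = max(0, v)
    for _ in range(max_outer):
        active = np.arange(n)                # outer loop: mark all variables unbounded
        prev_obj = -np.inf
        while active.size > 0:
            np.random.shuffle(active)        # inner loop: random order each pass
            keep = []
            for i in active:
                # single-coordinate update, clipped to the box [0, C]
                a_new = min(C, max(0.0, alpha[i] + (1.0 - X[i] @ w) / sq_norms[i]))
                v += (a_new - alpha[i]) * X[i]
                w = np.maximum(v, 0.0)       # re-apply the positivity of w
                alpha[i] = a_new
                if 0.0 < a_new < C:          # bounded variables are frozen this pass
                    keep.append(i)
            obj = alpha.sum() - 0.5 * (w @ w)   # dual objective from above
            if obj - prev_obj < tol:
                break
            prev_obj = obj
            active = np.array(keep, dtype=int)
    return w
```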
I recently had to run some long matlab jobs, so I started them with nohup to keep them going after I signed out. However, I soon found that the jobs died as soon as I logged out, despite being run with nohup.
Turns out the secret to getting nohup to work with matlab is to unset the DISPLAY environment variable as described on this Mathworks help page.
Actually, I find it better to run nohup matlab jobs as described on this page of pointers, by redirecting standard input from a file rather than using matlab's "-r" option.
In summary, to run the script "commands.m" in matlab with nohup, execute the following commands:
unset DISPLAY
nohup nice matlab < commands.m > output.txt &
Emacs is my editor of choice when I need to code. I suppose there will always be the battle between Emacs and vi on the one hand, and between these editors and the more sophisticated editors that come as part of an IDE on the other.
For my part, I prefer Emacs because it is available on all three operating systems I use (Windows, Mac OS X, Unix), it can easily be customized per language for syntax highlighting and indentation, and it minimizes the number of times I need to move my hands off the keyboard to the mouse while coding. I suppose you could say the same things about vi; honestly, I am just more proficient with Emacs, as it was the default choice in my first CS class at Stanford.
At any rate, the Emacs-fu blog seems to have a number of tips for getting the most out of Emacs, and I’ll be looking into it in more detail.
A while ago, I was reading papers on various hashing algorithms, such as Weiss, Torralba, and Fergus's Spectral Hashing. One use for these algorithms is computing approximate distances and nearest neighbors in very large data sets, where exact computation would be too slow. Data points are instead mapped to bit codes, constructed such that nearby points map to similar bit codes.
One question this raised, and one which is apparently also a potential interview question, is: what is a fast way to compute the number of one bits in a number? (This is what lets you compute the Hamming distance between bit codes.)
It’s an interesting question, and this page – Puzzle: Fast Bit Counting – gives a number of clever algorithms for computing the number of one bits.
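As a taste, here are Python sketches of two standard tricks in that family (my renderings, not code taken from the page): Kernighan's clear-the-lowest-set-bit loop and a byte-wise table lookup.

```python
def popcount_kernighan(n):
    # n & (n - 1) clears the lowest set bit, so the loop
    # runs once per one bit rather than once per bit position.
    count = 0
    while n:
        n &= n - 1
        count += 1
    return count

# Count of one bits for every possible byte value, precomputed once.
BYTE_COUNTS = [bin(i).count("1") for i in range(256)]

def popcount_table(n):
    # Look up the count one byte at a time.
    count = 0
    while n:
        count += BYTE_COUNTS[n & 0xFF]
        n >>= 8
    return count
```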
Of course, this isn’t necessarily relevant in the context of using hashing to bit codes for computing nearest neighbors. Instead, given a query, you'd probably want to find all data points within a Hamming distance of $r$. For this, I imagine you would pre-compute all bit codes with $r$ or fewer one bits, run through this list, XOR (^ in C) the query with each such code to get a new code at Hamming distance at most $r$ from the query, and then test whether any data point was hashed to the new code.
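Here is a small Python sketch of that lookup scheme, under the assumption (mine) that the hash table is a plain dict mapping each bit code to the list of points hashed to it:

```python
from itertools import combinations

def masks_up_to(r, nbits):
    """All bit codes of width nbits with r or fewer one bits."""
    masks = [0]
    for k in range(1, r + 1):
        for positions in combinations(range(nbits), k):
            mask = 0
            for p in positions:
                mask |= 1 << p
            masks.append(mask)
    return masks

def neighbors_within(query, r, table, nbits):
    """Collect all points whose codes lie within Hamming distance r of query."""
    hits = []
    for mask in masks_up_to(r, nbits):
        # XOR-ing with a k-bit mask gives a code at Hamming distance exactly k.
        hits.extend(table.get(query ^ mask, []))
    return hits
```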
While reading up on Gaussian Processes (GPs), I decided it would be useful to be able to prove some of the basic facts about multivariate Gaussian distributions that are the building blocks for GPs: namely, how to prove that the conditional and marginal distributions of a multivariate Gaussian are also Gaussian, and to give their forms.
First, we know that the density of a $d$-dimensional multivariate normal distribution with mean $\mu$ and covariance $\Sigma$ is given by

$$p(x) = \frac{1}{(2\pi)^{d/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(x - \mu)^T\Sigma^{-1}(x - \mu)\right)$$
For simplicity of notation, I’ll now assume that the distribution has zero-mean, but everything should carry over in a straightforward manner to the more general case.
Writing $x$ out as two components, $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$, we are now interested in two distributions, the conditional $p(x_1 \mid x_2)$ and the marginal $p(x_2)$.
Separate the components of the covariance matrix into a block matrix

$$\Sigma = \begin{bmatrix} A & B \\ B^T & C \end{bmatrix},$$

such that $A$ corresponds to the covariance for $x_1$, $C$ similarly for $x_2$, and $B$ contains the cross-terms.
Rewriting the Joint
We’d now like to be able to write out the form of the inverse covariance matrix $\Sigma^{-1}$. We can make use of the Schur complement $A - BC^{-1}B^T$ and write this as

$$\Sigma^{-1} = \begin{bmatrix} I & 0 \\ -C^{-1}B^T & I \end{bmatrix}\begin{bmatrix} (A - BC^{-1}B^T)^{-1} & 0 \\ 0 & C^{-1} \end{bmatrix}\begin{bmatrix} I & -BC^{-1} \\ 0 & I \end{bmatrix}$$
I’ll explain below how this can be derived.
Now, we know that the joint distribution can be written as

$$p(x_1, x_2) = \frac{1}{(2\pi)^{d/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}^T\Sigma^{-1}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right)$$

We can substitute in the above expression for the inverse of the block covariance matrix, and if we simplify by multiplying the outer matrices into the vectors, we obtain

$$p(x_1, x_2) \propto \exp\left(-\frac{1}{2}\begin{bmatrix} x_1 - BC^{-1}x_2 \\ x_2 \end{bmatrix}^T\begin{bmatrix} (A - BC^{-1}B^T)^{-1} & 0 \\ 0 & C^{-1} \end{bmatrix}\begin{bmatrix} x_1 - BC^{-1}x_2 \\ x_2 \end{bmatrix}\right)$$

Using the fact that the center matrix is block diagonal, we have

$$p(x_1, x_2) \propto \exp\left(-\frac{1}{2}(x_1 - BC^{-1}x_2)^T(A - BC^{-1}B^T)^{-1}(x_1 - BC^{-1}x_2)\right)\exp\left(-\frac{1}{2}x_2^T C^{-1}x_2\right)$$
At this point, we’re pretty much done. If we condition on $x_2$, the second exponential term drops out as a constant, and we have

$$p(x_1 \mid x_2) = \mathcal{N}\left(BC^{-1}x_2,\ A - BC^{-1}B^T\right)$$

Note that if $x_1$ and $x_2$ are uncorrelated, $B = 0$, and we just get the marginal distribution of $x_1$.
If we marginalize over $x_1$, we can pull the second exponential term outside the integral, and the first term is just the density of a Gaussian distribution in $x_1$, so it integrates to a constant independent of $x_2$, and we find that

$$p(x_2) = \mathcal{N}(0, C)$$
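These two results are easy to sanity-check numerically. Here is a small NumPy/SciPy sketch (the dimensions and variable names are my own choices) that verifies $p(x_1 \mid x_2) = p(x_1, x_2)/p(x_2)$ against the block formulas above:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Random zero-mean Gaussian over (x1, x2), with x1 2-dim and x2 3-dim.
d1, d2 = 2, 3
M = rng.standard_normal((d1 + d2, d1 + d2))
Sigma = M @ M.T + (d1 + d2) * np.eye(d1 + d2)   # symmetric, well-conditioned
A, B, C = Sigma[:d1, :d1], Sigma[:d1, d1:], Sigma[d1:, d1:]

x1 = rng.standard_normal(d1)
x2 = rng.standard_normal(d2)

# Conditional mean and covariance from the block formulas above.
cond_mean = B @ np.linalg.solve(C, x2)            # B C^{-1} x2
cond_cov = A - B @ np.linalg.solve(C, B.T)        # A - B C^{-1} B^T

# Check p(x1 | x2) == p(x1, x2) / p(x2), with p(x2) = N(0, C).
lhs = multivariate_normal(cond_mean, cond_cov).pdf(x1)
rhs = (multivariate_normal(np.zeros(d1 + d2), Sigma).pdf(np.concatenate([x1, x2]))
       / multivariate_normal(np.zeros(d2), C).pdf(x2))
assert np.isclose(lhs, rhs)
```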
Above, I wrote that you could use the Schur complement to get the block matrix form of the inverse covariance matrix. How would one actually derive that? As mentioned in the wikipedia page, the expression for the inverse can be derived using Gaussian elimination.
If you right-multiply the covariance by the left-most matrix in the expression, you obtain

$$\begin{bmatrix} A & B \\ B^T & C \end{bmatrix}\begin{bmatrix} I & 0 \\ -C^{-1}B^T & I \end{bmatrix} = \begin{bmatrix} A - BC^{-1}B^T & B \\ 0 & C \end{bmatrix},$$

zeroing out the bottom-left block. Multiplying by the center matrix gives you the identity in the diagonal blocks, and the right-most matrix zeros out the remaining top-right block, giving you the identity, so the whole expression is the inverse of the covariance matrix.
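As a quick numerical check of that factorization (again just an illustrative sketch, with made-up dimensions):

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2 = 2, 3
M = rng.standard_normal((d1 + d2, d1 + d2))
Sigma = M @ M.T + (d1 + d2) * np.eye(d1 + d2)
A, B, C = Sigma[:d1, :d1], Sigma[:d1, d1:], Sigma[d1:, d1:]

S = A - B @ np.linalg.solve(C, B.T)              # Schur complement of C
L = np.block([[np.eye(d1), np.zeros((d1, d2))],
              [-np.linalg.solve(C, B.T), np.eye(d2)]])
D = np.block([[np.linalg.inv(S), np.zeros((d1, d2))],
              [np.zeros((d2, d1)), np.linalg.inv(C)]])
R = np.block([[np.eye(d1), -B @ np.linalg.inv(C)],
              [np.zeros((d2, d1)), np.eye(d2)]])

# L D R should equal the inverse of the block covariance matrix.
assert np.allclose(L @ D @ R, np.linalg.inv(Sigma))
```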
I got started on this train of thought after reading the wikipedia page on Gaussian processes. The external link on the page to a gentle introduction to GPs was somewhat helpful as a quick primer. The video lectures by MacKay and Rasmussen were both good and helped to give a better understanding of GPs.
MacKay also has a nice short essay on the Humble Gaussian distribution, which gives more information on the covariance and inverse covariance matrices of Gaussian distributions. In particular, the inverse covariance matrix tells you the relationship between two variables, conditioned on all other variables, and therefore changes if you marginalize out some of the variables. The sign of the off diagonal elements in the inverse covariance matrix is opposite the sign of the correlation between the two variables, conditioned on all the other variables.
To go deeper into Gaussian Processes, one can read the book Gaussian Processes for Machine Learning, by Rasmussen and Williams, which is available online. The appendix contains useful facts and references on Gaussian identities and matrix identities, such as the matrix inversion lemma, another application of Gaussian elimination to determine the inverse, in this case the inverse of a matrix sum.