Optimization is a subject rooted in antiquity, yet it has a very modern flavor. The modern theory of optimization that we see today essentially began with the discovery of the simplex method for linear programming in the late 1940s. However, the modern approaches in optimization theory that we use at present owe their origins to the calculus of variations, which has been studied for more than three centuries and was also crucial in the development of functional analysis.
Our story here actually begins in the 1960s, when it was realized that many problems in applications could not be modelled as linear programming problems, whereas they could be modelled as optimization problems in which one has to minimize a convex function subject to convex inequality and affine equality constraints. Such problems were termed convex optimization problems or convex programming problems. The use of inequalities as constraints, rather than just equalities, is the hallmark of modern optimization theory. The use of inequalities gives rise to a very different approach to optimization, which led to the growth of a subject called convex analysis. The real impetus in the study of convex optimization began with the publication of the now classic work titled Convex Analysis by R. T. Rockafellar. It is not possible to overstate the impact this book had on the development of the theory and methods of optimization. The term convex analysis was suggested by Albert W. Tucker (famous for the Kuhn-Tucker conditions) in order to rhyme with complex analysis. Convexity also brought a new paradigm into optimization: that of non-differentiability. It was found that many convex optimization problems do not have a derivative at the point of minimum. Thus a completely different approach was developed, wherein the notion of the subdifferential was brought in. The subdifferential at a given point is a set that can be thought of as a replacement for the notion of the derivative at nondifferentiable points. The importance of the subdifferential is that a set of calculus rules could be developed for it, which become very handy in carrying out the analysis. In fact there is also a calculus rule in convex analysis which has no analogue in classical analysis. It was then thought natural to extend the domain of non-differentiable optimization beyond convexity. The first natural extension was to the class of locally Lipschitz functions, and this was pioneered by F. H.
Clarke, who was the second student of R. T. Rockafellar. He introduced the notion of a generalized gradient for locally Lipschitz functions, which is now famous as the Clarke subdifferential. Clarke also developed a calculus for the generalized gradient and applied his theory to the study of optimal control, where he was able to expand the horizon of the Pontryagin maximum principle. Nondifferentiable functions were also studied in the calculus of variations, where piecewise continuous extremals were sought. Clarke in fact coined the term Nonsmooth Analysis in the 1970s. To be more precise, in nonsmooth optimization one studies functions which are continuous yet need not have a derivative at all points. For details on Clarke's approach see the now famous monograph titled Optimization and Nonsmooth Analysis.
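As a concrete illustration of the subdifferential mentioned above, consider the standard textbook example of a convex function that fails to be differentiable precisely at its minimizer, the absolute value function:

```latex
f(x) = |x|, \qquad
\partial f(x) =
\begin{cases}
\{-1\}, & x < 0,\\
[-1,\, 1], & x = 0,\\
\{+1\}, & x > 0.
\end{cases}
```

At the kink $x = 0$ the subdifferential is a whole interval rather than a single slope, and the minimum is characterized by $0 \in \partial f(0)$, the nonsmooth analogue of Fermat's classical rule $f'(x) = 0$.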
There have been many attempts to move beyond locally Lipschitz functions and to study functions that are not locally Lipschitz; attempts have been made by Rockafellar, Clarke and Ioffe, for example. However, this move away from convexity was championed by Boris Mordukhovich, who developed a nonsmooth calculus without using the standard tools of convex analysis. The approach of Mordukhovich now appears to be fundamental, as one can see in the book Variational Analysis (Springer, 1998) by Rockafellar and Wets. It is interesting to note that Mordukhovich drew the motivation for his new approach to optimization from problems of optimal control. The importance of Mordukhovich's approach is that it gives much better calculus rules than those of Clarke in the setting of Asplund spaces, and thus in finite dimensions. For details one can see the recent two-volume monograph by Mordukhovich titled Variational Analysis and Generalized Differentiation (Vols. I and II, Springer, 2006).
On the other hand, there were a large number of mathematicians in the erstwhile Soviet Union who were engaged in the study of convex and nonsmooth optimization. Some leading names are, for example, L. S. Pontryagin, A. Milyutin, A. Dubovitskii, V. Boltyanski, N. Z. Shor, V. F. Demyanov, A. Rubinov, A. Ioffe, B. Mordukhovich and V. Tikhomirov, just to name a few. Most of them have made significant contributions to the theory of optimization. For example, Demyanov and Rubinov developed the theory of quasidifferentials, which takes a completely different route from the approaches mentioned above.
Of course, the question remains whether one can numerically handle nonsmooth convex optimization problems. The answer is yes, and this line of work was pioneered by N. Z. Shor, who developed subgradient methods, and by Claude Lemaréchal, who developed the bundle method, a very efficient approach to solving nonsmooth unconstrained convex optimization problems.
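To give a flavor of such methods, the following is a minimal sketch of the basic subgradient method in the spirit of Shor, the simple precursor of the more sophisticated bundle methods. The objective f(x) = |x - 3| and the diminishing step size 1/k are illustrative choices, not taken from any particular reference; at the kink we may pick any element of the subdifferential.

```python
# Subgradient descent on the nonsmooth convex function
# f(x) = |x - 3|, whose unique minimizer is x = 3.
# Illustrative sketch only: objective and step-size rule are
# hypothetical choices made for this example.

def f(x):
    return abs(x - 3.0)

def subgradient(x):
    # Return one element of the subdifferential of f at x:
    # the slope away from the kink, and 0 (a valid choice) at it.
    if x > 3.0:
        return 1.0
    elif x < 3.0:
        return -1.0
    return 0.0

x = 0.0
for k in range(1, 10001):
    step = 1.0 / k              # diminishing steps: sum diverges,
    x = x - step * subgradient(x)  # squares are summable

print(x)  # a value very close to the minimizer 3
```

Unlike gradient descent, the subgradient step need not decrease f at every iteration; convergence is driven by the step sizes shrinking while their sum diverges, which is why the simple 1/k rule works here. Bundle methods accelerate this by accumulating several subgradients into a piecewise-linear model of f.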
Convex optimization now has a diverse range of applications, from electrical engineering and statistics to the mathematics of finance, and Stephen Boyd of Stanford University even says that convex optimization is a technology.
Department of Mathematics and Statistics