http://www.stanford.edu/class/ee364/lectures.html

 

1.1 Mathematical optimization


mathematical optimization problem

A vector of optimization variables is optimal (a solution of the problem)
if it has the smallest objective value among all vectors that satisfy the constraints.
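
In the notation of the book and the EE364 lecture notes, the standard form of a mathematical optimization problem is

\[
\begin{array}{ll}
\text{minimize}   & f_0(x) \\
\text{subject to} & f_i(x) \le b_i, \quad i = 1, \dots, m,
\end{array}
\]

where x = (x_1, ..., x_n) is the optimization variable, f_0 is the objective function, f_1, ..., f_m are the constraint functions, and b_1, ..., b_m are the limits (bounds) on the constraints.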

 

objective and constraint functions: linear  -->  optimization problem: linear program
objective and constraint functions: convex  -->  convex optimization problem

 

Convexity is more general than linearity.

Convex optimization is therefore a generalization of linear programming.
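
In symbols, f is linear when

\[
f(\alpha x + \beta y) = \alpha f(x) + \beta f(y) \quad \text{for all } x, y \text{ and all } \alpha, \beta \in \mathbb{R},
\]

and convex when the weaker condition

\[
f(\alpha x + \beta y) \le \alpha f(x) + \beta f(y) \quad \text{for all } x, y \text{ and all } \alpha, \beta \ge 0 \text{ with } \alpha + \beta = 1
\]

holds, so every linear function is convex.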
 

1.1.1 Applications

optimization variable  <=>  choice
objective function / objective value  <=>  cost
constraint functions  <=>  requirements / specifications
constraint bounds  <=>  limits
solution  <=>  choice with minimum cost (or maximum utility)

http://en.wikipedia.org/wiki/Supply_chain_management

data fitting
decision making
system design/analysis/operation

automatic control systems + embedded optimization


1.1.2 Solving optimization problems

A problem is sparse if each constraint function depends on only a small number of the variables.

> effective algorithms exist for:
least-squares problems
linear programs
convex optimization


1.2 Least-squares and linear programming

1.2.1 Least-squares problems

no constraints

http://en.wikipedia.org/wiki/Least_squares
The best fit in the least-squares sense is that instance of the model for which the sum of squared residuals has its least value, a residual being the difference between an observed value and the value given by the model. The method was first described by Carl Friedrich Gauss around 1794.
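
A minimal numerical sketch of the unconstrained least-squares problem, minimize ||Ax - b||_2^2, using NumPy; the matrix A and vector b below are made-up example data, and np.linalg.lstsq is just one standard way to compute the solution (more stably than forming the normal equations explicitly):

import numpy as np

# made-up example data: 10 measurements, 3 unknowns
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
b = rng.standard_normal(10)

# least-squares solution of: minimize ||A x - b||_2^2
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

# the same solution via the normal equations (A^T A) x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

print(x, x_normal)  # the two agree up to round-off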

http://en.wikipedia.org/wiki/Moore%27s_law

http://en.wikipedia.org/wiki/Regression_analysis
techniques for the modeling and analysis of numerical data consisting of values of a dependent variable (also called response variable or measurement) and of one or more independent variables (also known as explanatory variables or predictors)
The dependent variable in the regression equation is modeled as a function of the independent variables, corresponding parameters ("constants"), and an error term. The error term is treated as a random variable. It represents unexplained variation in the dependent variable. The parameters are estimated so as to give a "best fit" of the data. Most commonly the best fit is evaluated by using the least squares method, but other criteria have also been used.

http://en.wikipedia.org/wiki/Maximum_likelihood_estimation
The modeling of real world data using estimation by maximum likelihood offers a way of tuning the free parameters of the model to provide a good fit. For a fixed set of data and underlying probability model, maximum likelihood picks the values of the model parameters that make the data "more likely" than any other values of the parameters would make them.


standard least-squares
weighted least-squares : linear measurements corrupted by errors with unequal variances

regularization
Regularization involves introducing additional information in order to solve an ill-posed problem or prevent overfitting. This information is usually of the form of a penalty for complexity, such as restrictions for smoothness or bounds on the vector space norm. A theoretical justification for regularization is that it attempts to impose Occam's razor on the solution. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters
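
A small sketch of one common regularization, Tikhonov (ridge) regularization of least-squares: a penalty rho * ||x||_2^2 is added to the objective, which keeps the problem a least-squares problem over a stacked matrix. The data and the value of rho below are arbitrary, chosen only for illustration:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
b = rng.standard_normal(10)
rho = 0.1  # regularization weight (arbitrary choice)
n = A.shape[1]

# minimize ||A x - b||_2^2 + rho * ||x||_2^2
# closed form: x = (A^T A + rho I)^{-1} A^T b
x_ridge = np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ b)

# equivalent ordinary least-squares problem on stacked data
A_aug = np.vstack([A, np.sqrt(rho) * np.eye(n)])
b_aug = np.concatenate([b, np.zeros(n)])
x_stacked = np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

print(x_ridge, x_stacked)  # agree up to round-off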


1.2.2 Linear programming

http://en.wikipedia.org/wiki/Linear_programming

Dantzig's simplex method


Simplex Method: a good tutorial for the simplex method with examples (also the two-phase and M-method), from the Department of Mathematics, CUHK

Interior-point method (barrier methods)
http://en.wikipedia.org/wiki/Interior_point_method


http://en.wikipedia.org/wiki/Chebyshev_approximation#Chebyshev_approximation
expanding the given function in terms of Chebyshev polynomials and then cutting off the expansion at the desired degree

The Chebyshev approximation problem can be reduced to a linear program.
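
A sketch of that reduction: minimizing the worst-case residual max_i |a_i^T x - b_i| over x is equivalent to minimizing a new scalar variable t subject to -t <= a_i^T x - b_i <= t, which is an LP in (x, t). Below the LP is handed to scipy.optimize.linprog; the data are made up and SciPy is only one of many LP solvers one could use:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))   # made-up data
b = rng.standard_normal(20)
m, n = A.shape

# variables z = (x, t); objective: minimize t
c = np.zeros(n + 1)
c[-1] = 1.0

#  A x - b <= t 1    -->  [ A  -1] z <=  b
# -(A x - b) <= t 1  -->  [-A  -1] z <= -b
ones = np.ones((m, 1))
A_ub = np.vstack([np.hstack([A, -ones]),
                  np.hstack([-A, -ones])])
b_ub = np.concatenate([b, -b])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)])
x_cheb, t_opt = res.x[:n], res.x[-1]
print(t_opt, np.max(np.abs(A @ x_cheb - b)))  # these two values match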



1.3 Convex optimization

The least-squares problem and linear programming problem are both special cases of the general convex optimization problem.

http://en.wikipedia.org/wiki/Second-order_cone_programming
a convex quadratically constrained quadratic program (QCQP) can be reformulated as an SOCP

http://en.wikipedia.org/wiki/Geometric_programming
Geometric programs are not (in general) convex optimization problems, but they can be transformed to convex problems by a change of variables and a transformation of the objective and constraint functions.
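
The change of variables in question is y_i = log x_i, together with taking logarithms of the objective and constraint functions. Sketched here only for a single monomial term (the general posynomial case gives a log-sum-exp of affine functions, which is convex):

\[
f(x) = c\, x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}
\;\xrightarrow{\; y_i = \log x_i \;}\;
\log f = a^{\mathsf{T}} y + \log c ,
\]

i.e. an affine function of the new variable y.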


many tricks for transforming problems into convex form

"recognizing and formulating convex optimization problems"



1.4 Nonlinear optimization

http://en.wikipedia.org/wiki/Nonlinear_programming

1.4.1 Local optimization

Local optimization methods typically require (both points are illustrated in the sketch below):
- differentiability of the objective and constraint functions
- an initial guess (or starting point) for the optimization variable
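
A small illustration with scipy.optimize.minimize on a made-up nonconvex function of one variable: the BFGS solver relies on (numerical) gradients, and the local minimum it returns depends on the starting point.

import numpy as np
from scipy.optimize import minimize

# made-up smooth nonconvex objective with two local minima
f = lambda x: (x[0]**2 - 1.0)**2 + 0.3 * x[0]

# different initial guesses lead to different local minima
for x0 in (-2.0, 2.0):
    res = minimize(f, x0=[x0], method="BFGS")
    print(x0, res.x, res.fun)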

1.4.2 Global optimization

worst-case analysis or verification of a high-value or safety-critical system

objective function --> utility function
constraints --> prior knowledge about the possible parameter values
If the worst-case value is acceptable, we can certify the system as safe or reliable.
This is worthwhile in cases where the value of certifying the performance is high, or the cost of being wrong about the reliability or safety is high.

1.4.3 Role of convex optimization in nonconvex problems

Initialization for local optimization

Convex heuristics for nonconvex optimization
- finding a sparse vector (e.g. the ℓ1-norm heuristic sketched below)
- randomized algorithms
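
A sketch of the ℓ1-norm heuristic for sparsity: directly minimizing the number of nonzeros of x subject to Ax = b is nonconvex, but minimizing ||x||_1 subject to the same constraints is convex, and after splitting x = u - v with u, v >= 0 it becomes an LP. The data are made up and scipy.optimize.linprog is used only as a convenient solver:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 20))        # made-up underdetermined system
x_true = np.zeros(20)
x_true[[3, 11]] = [1.5, -2.0]            # a sparse "ground truth"
b = A @ x_true

# minimize ||x||_1  s.t.  A x = b,  via  x = u - v,  u, v >= 0
n = A.shape[1]
c = np.ones(2 * n)                       # sum(u) + sum(v) = ||x||_1 at optimum
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b)      # linprog's default bounds give u, v >= 0
x_l1 = res.x[:n] - res.x[n:]
print(np.round(x_l1, 3))                 # often recovers the sparse x_true exactly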

Bounds for global optimization
- relaxation
- Lagrangian relaxation --> Lagrangian dual problem
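
The Lagrangian relaxation in one line: for the problem of minimizing f_0(x) subject to f_i(x) <= 0, i = 1, ..., m, the dual function

\[
g(\lambda) = \inf_x \Big( f_0(x) + \sum_{i=1}^{m} \lambda_i f_i(x) \Big)
\]

is concave in \lambda and satisfies g(\lambda) <= p* for every \lambda >= 0, where p* is the optimal value of the original problem; maximizing g over \lambda >= 0 is the Lagrangian dual problem, and it yields a lower bound even when the original problem is nonconvex.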

1.5 Outline

1.6 Notation


posted by maetel