Optimization Techniques

Introduction:
Optimization is the technique of obtaining the best results under given circumstances. Optimization techniques, also known as mathematical programming techniques, are methods that give the best result, under the given conditions, for a given programming problem.
History of Optimization:
The main origin of these techniques is the Second World War. At that time, scientists in England were asked to study strategic and tactical problems related to the air and land defence of the country. Because resources were limited, it was necessary to use them as effectively as possible. Historically, the first term for optimization was "linear programming", which was due to George B. Dantzig, although much of the theory had been introduced by Leonid Kantorovich in 1939. Dantzig published the simplex algorithm in 1947, and John von Neumann developed the theory of duality in the same year. The term "programming" in this context does not refer to computer programming. Rather, it comes from the use of "program" by the United States military to refer to proposed training and logistics schedules, which were the problems Dantzig studied at that time.

       

Computational Optimization Problem:
To solve such problems, researchers may use algorithms that terminate in a finite number of steps, or iterative methods that converge to a solution.
1. Optimization algorithms
The simplex algorithm of George Dantzig, designed for linear programming; extensions of the simplex algorithm, designed for quadratic programming and for linear-fractional programming; and variants of the simplex algorithm that are especially suited for network optimization.
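As a minimal sketch of solving a linear program in practice, the snippet below uses SciPy's linprog routine on made-up problem data; note that recent SciPy versions solve it with the HiGHS solvers rather than the classical simplex method, so the choice of routine here is purely illustrative.

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
from scipy.optimize import linprog

c = [-3, -2]                     # linprog minimizes, so negate the objective
A_ub = [[1, 1], [1, 3]]          # left-hand sides of the <= constraints
b_ub = [4, 6]                    # right-hand sides of the <= constraints
bounds = [(0, None), (0, None)]  # x >= 0, y >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)           # optimal point (4, 0) and maximum value 12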
2. Iterative methods
The iterative methods used to solve nonlinear programming problems differ according to whether they evaluate Hessians, gradients, or only function values. Evaluating Hessians (H) and gradients (G) improves the rate of convergence for functions in which these quantities exist and vary sufficiently smoothly, but such evaluations increase the computational complexity (or computational cost) of each iteration. In some cases, the computational complexity may be excessively high.
Methods that evaluate Hessians (or approximate Hessians, using finite differences):
1. Newton's method (see the sketch after this list).
2. Sequential quadratic programming: a Newton-based method for small- to medium-scale constrained problems. Some versions can handle large-dimensional problems.
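As a rough illustration of the Newton idea, the sketch below minimizes a simple hand-written quadratic by repeatedly solving the Newton system; the objective, gradient, Hessian, and starting point are all illustrative choices, not part of the original text.

# Newton's method for unconstrained minimization of
# f(x) = (x0 - 1)^2 + 2*(x1 + 0.5)^2  (an illustrative quadratic).
import numpy as np

def grad(x):
    return np.array([2 * (x[0] - 1), 4 * (x[1] + 0.5)])   # gradient of f

def hess(x):
    return np.array([[2.0, 0.0], [0.0, 4.0]])             # Hessian of f

x = np.array([5.0, 5.0])                                  # arbitrary starting point
for _ in range(20):
    step = np.linalg.solve(hess(x), grad(x))              # solve H * step = g
    x = x - step
    if np.linalg.norm(step) < 1e-8:                       # stop on a negligible update
        break
print(x)                                                  # minimizer (1, -0.5)

Because this example objective is quadratic, the method converges in a single step; on general nonlinear functions Newton's method is usually combined with a line search or trust region.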
Methods that evaluate gradients or approximate gradients using finite differences (or even subgradients): 
1. Quasi-Newton methods: iterative methods for medium to large problems (e.g. N < 1000); see the sketch after this list.
2. Conjugate gradient methods: iterative methods for large problems. (In theory, these methods terminate in a finite number of steps with quadratic objective functions, but this finite termination is not observed in practice on finite-precision computers.)
3. Interior point methods: a large class of methods for constrained optimization. Some interior-point methods use only (sub)gradient information, while others require the evaluation of Hessians.
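A minimal sketch of the first two families above, using SciPy's minimize interface on its built-in Rosenbrock test function; the particular solvers and starting point are illustrative choices.

# Quasi-Newton (BFGS) and nonlinear conjugate gradient on the Rosenbrock
# function, using gradient information but no Hessians.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])                                # common starting point

bfgs = minimize(rosen, x0, jac=rosen_der, method="BFGS")  # quasi-Newton
cg = minimize(rosen, x0, jac=rosen_der, method="CG")      # conjugate gradient

print(bfgs.x, bfgs.nit)   # both approach the minimizer (1, 1)
print(cg.x, cg.nit)       # iteration counts differ between the two methods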

                        
Applications:
1. Bringing out new designs.
2. Planning maintenance and replacement of equipment to reduce operating costs.
3. Controlling idle times, waiting times, and queueing in production lines to reduce costs.
4. Design of control systems.
5. Inventory control.
6. The travelling salesman problem (a brute-force sketch follows this list).
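As one concrete instance of the last item, the brute-force sketch below enumerates every tour of a tiny travelling-salesman instance; the distance matrix is made up for illustration, and exhaustive enumeration is only feasible for a handful of cities.

# Brute-force travelling salesman on a small, made-up distance matrix.
from itertools import permutations

dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
cities = range(1, len(dist))        # fix city 0 as the start to avoid duplicate tours

def tour_length(order):
    path = (0,) + order + (0,)      # close the loop back to the starting city
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

best = min(permutations(cities), key=tour_length)
print((0,) + best + (0,), tour_length(best))   # shortest tour and its length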
