Optimization methods: random search, the golden ratio, and methods for optimizing management decisions

Optimization is the process of finding an extremum (global maximum or minimum) of a certain function, or of choosing the best (optimal) option from a set of possible ones. The most reliable way to find the best option is a comparative assessment of all possible options (alternatives). If the number of alternatives is large, mathematical programming methods are usually used to find the best one. These methods are applicable when the problem is strictly formulated: a set of variables is specified, the region of their possible variation is established (constraints are imposed), and the form of the objective function (the function whose extremum is to be found) in these variables is determined. The objective function is a quantitative measure (criterion) of the degree to which the goal is achieved.

The problem of unconstrained optimization is to find the minimum or maximum of a function in the absence of any restrictions. Although most practical optimization problems contain constraints, studying unconstrained optimization methods is important from several points of view. Many algorithms for solving a constrained problem reduce it to a sequence of unconstrained optimization problems. Another class of methods is based on finding a suitable direction and then minimizing along that direction. The justification of unconstrained optimization methods extends naturally to the justification of procedures for solving problems with constraints.

The constrained optimization problem is to find the minimum or maximum value of a scalar function f(x) of an n-dimensional vector argument. The solution is based on a linear or quadratic approximation of the objective function to determine the increments Δx1, ..., Δxn at each iteration. There are also approximate methods for solving nonlinear problems, based on piecewise linear approximation. The accuracy of the solution depends on the number of intervals on which we find a solution of a linear problem that is as close as possible to the nonlinear one. This approach allows calculations to be made using the simplex method. Typically, in linear models the coefficients of the objective function are constant and do not depend on the values of the variables; however, there are a number of problems where costs depend nonlinearly on volume.

Solution algorithm:

  • 1. The work begins by constructing a regular simplex in the space of independent variables and evaluating the objective function at each vertex of the simplex.
  • 2. The vertex with the largest value of the objective function is determined.
  • 3. This vertex is reflected through the centroid of the remaining vertices to a new point, which is used as a vertex of the new simplex.
  • 4. If the function decreases smoothly enough, the iterations continue until either the minimum point is bracketed, or cyclic movement over two or more simplexes begins.
  • 5. The search ends when either the dimensions of the simplex or the differences between the function values at the vertices become sufficiently small.
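A minimal sketch of this procedure in Python (the quadratic test function, starting vertices and tolerance are illustrative, not part of the original task):

```python
import numpy as np

def f(x):
    # Illustrative objective: a quadratic bowl with minimum at (1, 2)
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def step(simplex):
    """Reflect the worst vertex through the centroid of the rest;
    if that fails (cycling has begun), shrink toward the best vertex."""
    values = [f(v) for v in simplex]
    worst = int(np.argmax(values))
    others = [v for i, v in enumerate(simplex) if i != worst]
    centroid = np.mean(others, axis=0)
    reflected = 2.0 * centroid - simplex[worst]     # mirror image
    if f(reflected) < values[worst]:
        simplex[worst] = reflected
    else:
        best = simplex[int(np.argmin(values))]
        simplex = [(v + best) / 2.0 for v in simplex]
    return simplex

# Initial simplex in the plane: 3 vertices for 2 variables (illustrative)
simplex = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
while np.ptp([f(v) for v in simplex]) > 1e-10:      # stopping rule of step 5
    simplex = step(simplex)
print(min(simplex, key=f))                          # approximately (1, 2)
```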

Task: container optimization. Achieve minimal costs for manufacturing a container with a volume of 2750 liters for storing sand.

Z = C1X1 + C2X2 + C3X3 + C4X4 + C5X5 → min,

where: X1 - amount of required metal, kg;

C1 - cost of metal, rub/kg;

X2 - mass of required electrodes, kg;

C2 - cost of electrodes, rub/kg;

X3 - amount of consumed electricity, kWh;

C3 - cost of electricity, rub/kWh;

X4 - welder working time, h;

C4 - welder tariff rate, rub/hour;

X5 - crane operating time, h;

C5 - crane fee, rub/hour.

1. Find the optimal surface area of ​​the container:

F = 2ab + 2bh + 2ah → min (1)

where V=2750 liters.

x1=16.331; x2=10.99

The minimum of the function obtained in the optimization process by the Box method is 1196.065 dm².
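A sketch of this calculation under stated assumptions: the side a is eliminated via the volume constraint V = a·b·h and the surface is minimized numerically with SciPy. Without the manufacturing constraints of the original task this returns the unconstrained optimum (a cube), so the numbers differ slightly from those quoted above:

```python
from scipy.optimize import minimize

V = 2750.0  # container volume, dm^3 (2750 L)

def surface(x):
    b, h = x
    if b <= 0 or h <= 0:
        return float("inf")       # keep the search in the physical region
    a = V / (b * h)               # express a from the volume V = a*b*h
    return 2*a*b + 2*b*h + 2*a*h  # total surface area, dm^2

res = minimize(surface, x0=[10.0, 16.0], method="Nelder-Mead")
b, h = res.x
print(b, h, V / (b * h), res.fun)  # for a closed box the optimum is a cube
```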

In accordance with GOST 19903 - 74, we accept:

h=16.50 dm, b=10.00 dm.

Let us express a from (1) and get:

Let's calculate the optimal thickness of the metal sheet

Let's choose ordinary carbon steel St2sp

For this steel the permissible stress is 320 MPa.

Mass of sand.

Load on the wall of the container with the largest area:

Let's calculate the load per 1 linear centimeter of a sheet 100 cm wide:

Let us determine the wall thickness based on the condition:

where: l is the length of the sheet (preferably the longest to leave an additional margin of safety);

q - load per 1 linear centimeter, kg/cm;

Metal sheet thickness, m

Maximum permissible metal stress, N/mm2.

Let us express the wall thickness from (2):

Considering that 320 MPa ≈ 3263 kgf/cm²,

Metal mass

where: F - surface area of ​​the container, m2;

Metal wall thickness, m;

Metal density, kg/m3.

The price of St2sp steel is about 38 rubles/kg.

2. Weld length:

We will use electrodes for stainless steel “UONI-13/45”

Price 88.66 rub/kg;

where: Sweld - cross-sectional area of ​​the weld, m2;

l is the length of the weld, m;

Density of deposited metal, kg/m3.

3. Welding time:

where l is the length of the weld, m;

v - welding speed, m/h.

Total power consumption:

Psum = 5 × 17 = 85 kWh;

The cost of electricity is 5.7 rubles/kWh.

4. For manual arc welding, the costs of auxiliary, preparatory and final time and time for servicing the workplace average 40 - 60%. Let's use the average value of 50%.

Total time:

Payment for a welder of the VI category is 270 rubles/hour.

Plus a tariff coefficient of 17% for work in a confined, poorly ventilated space:

The assistant's payment will be 60% of the welder's payment:

8055 × 0.6 = 4833 rub.

Total: 8055+4833 = 12888 rubles.

5. A crane is needed to hold sheets of metal during welding, loading and unloading sheets of metal and the finished container itself.

To tack the entire structure together, the welder needs to lay about 30% of the seams.

Payment for the crane is 1000 rubles/hour.

Total cost of the container.

5. Multidimensional optimization

Linear programming

Optimization is a purposeful activity aimed at obtaining the best results under appropriate conditions.

The quantitative assessment of the quality being optimized is called the optimality criterion or objective function. It can be written in the form:

R = f(x1, x2, …, xn) (5.1)

where x1, x2, …, xn are some parameters of the optimization object.

There are two types of optimization problems – unconstrained and constrained.

The unconstrained optimization problem consists in finding the maximum or minimum of the real function (5.1) of n real variables and determining the corresponding argument values.

Constrained optimization problems, or problems with restrictions, are those in whose formulation restrictions in the form of equalities or inequalities are imposed on the values of the arguments.

Solving optimization problems in which the optimality criterion is a linear function of the independent variables (that is, contains these variables only to the first degree), with linear restrictions on them, is the subject of linear programming.

The word “programming” here reflects the ultimate goal of the study - determining the optimal plan or optimal program, according to which, from the many possible options for the process under study, the best, optimal option is selected based on some criterion.

An example of such a task is the problem of optimal distribution of raw materials between different industries so as to maximize the total value of output.

Let two types of products be made from two types of raw materials.

Let us denote by x1, x2 the number of units of products of the first and second types, respectively, and by c1, c2 the unit prices of products of the first and second types. Then the total value of all products will be:

R = c1x1 + c2x2 (5.2)

It is desirable that the total value of production be maximized; R(x1, x2) is the objective function in this problem.

Let b1, b2 be the amounts of raw materials of the first and second types available, and aij the number of units of the i-th type of raw material required to produce a unit of the j-th type of product.

Considering that the consumption of a given resource cannot exceed its total quantity, we write down the restrictive conditions for resources:

a11x1 + a12x2 ≤ b1, a21x1 + a22x2 ≤ b2 (5.3)

The variables x1, x2 are also non-negative:

x1 ≥ 0, x2 ≥ 0 (5.4)

Among the many solutions of the system of inequalities (5.3) and (5.4), it is required to find a solution (x1, x2) for which the function R reaches its greatest value.

The so-called transport problems (problems of optimally organizing the delivery of goods, raw materials or products from various warehouses to several destinations with a minimum of transportation costs) and a number of others are formulated in a similar form.

Graphical method for solving linear programming problems.

Let it be required to find x1 and x2 satisfying the system of inequalities:

ai1x1 + ai2x2 ≤ bi, i = 1, 2, …, r (5.5)

and the non-negativity conditions:

x1 ≥ 0, x2 ≥ 0 (5.6)

for which the function

R = c1x1 + c2x2 (5.7)

reaches its maximum.

Solution.

Let us construct, in the rectangular coordinate system x1Ox2, the region of feasible solutions of the problem (Fig. 11). To do this, replacing each of the inequalities (5.5) with an equality, we construct the corresponding boundary line:

ai1x1 + ai2x2 = bi (i = 1, 2, …, r)

Fig. 11

This straight line divides the plane into two half-planes. For the coordinates x1, x2 of any point A of one half-plane the following inequality holds:

and for the coordinates of any point B of the other half-plane, the opposite inequality:

The coordinates of any point on the boundary line satisfy the equation:

To determine on which side of the boundary line the half-plane corresponding to a given inequality lies, it is enough to “test” one point (the easiest is the point O(0; 0)). If, when its coordinates are substituted into the left side of the inequality, the inequality is satisfied, then the half-plane faces the tested point; if not, the corresponding half-plane faces the opposite direction. The direction of the half-plane is shown in the drawing by hatching. The inequalities:

correspond to half-planes located to the right of the ordinate axis and above the abscissa axis.

In the figure we construct boundary straight lines and half-planes corresponding to all inequalities.

The common part (intersection) of all these half-planes will represent the region of feasible solutions to this problem.

When constructing a region of feasible solutions, depending on the specific type of system of restrictions (inequalities) on variables, one of the following four cases may occur:

Fig. 12. The region of feasible solutions is empty, which corresponds to an inconsistent system of inequalities; there is no solution

Fig. 13. The region of feasible solutions consists of a single point A, which corresponds to a unique solution of the system

Fig. 14. The region of feasible solutions is bounded and is depicted as a convex polygon. There are infinitely many feasible solutions

Fig. 15. The region of feasible solutions is unbounded, in the form of a convex polygonal region. There are infinitely many feasible solutions

Graphical representation of the objective function

at a fixed value of R defines a straight line, and as R changes, a family of parallel lines with parameter R. For all points lying on one of these lines, the function R takes one specific value, so these straight lines are called level lines of the function R.

Gradient vector:

is perpendicular to the level lines and shows the direction of increase of R.

The problem of finding an optimal solution of the system of inequalities (5.5) for which the objective function R (5.7) reaches a maximum geometrically reduces to determining, in the region of admissible solutions, the point through which the level line corresponding to the largest value of the parameter R passes.

Fig. 16

If the region of feasible solutions is a convex polygon, then the extremum of the function R is reached at least at one of the vertices of this polygon.

If the extreme value of R is achieved at two vertices, then the same extreme value is achieved at any point of the segment connecting these two vertices. In this case, the problem is said to have an alternative optimum.

In the case of an unbounded region, the extremum of the function R either does not exist, or is reached at one of the vertices of the region, or is an alternative optimum.

Example.

Suppose we need to find the values of x1 and x2 satisfying the system of inequalities:

and the non-negativity conditions:

for which the function

reaches its maximum.

Solution.

Let us replace each of the inequalities with equality and construct the boundary lines:

Fig. 17

Let us determine the half-planes corresponding to these inequalities by “testing” the point (0; 0). Taking into account the non-negativity of x1 and x2, we obtain the region of feasible solutions of this problem in the form of the convex polygon OABDE.

In the region of feasible solutions, we find the optimal solution by constructing the gradient vector

showing the direction of increase of R.

The optimal solution corresponds to the point B, whose coordinates can be determined either graphically or by solving the system of two equations corresponding to the boundary lines AB and BD:

Answer: x 1 = 2; x 2 = 6; Rmax = 22.
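The coefficients of this example were lost in conversion. As a hedged illustration, the hypothetical data R = 2x1 + 3x2 with constraints x1 + x2 ≤ 8 and x2 ≤ 6 reproduce the stated answer and can be checked with scipy.optimize.linprog:

```python
from scipy.optimize import linprog

# Hypothetical data reproducing the stated answer (the original
# coefficients were lost): maximize R = 2*x1 + 3*x2
c = [-2.0, -3.0]                 # linprog minimizes, so negate
A_ub = [[1.0, 1.0],              # x1 + x2 <= 8
        [0.0, 1.0]]              # x2      <= 6
b_ub = [8.0, 6.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)           # -> [2. 6.] 22.0
```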

Tasks. Find the position of the extremum point and the extreme value of the objective function

under given restrictions.

Table 9

(Variant table with columns “Option No.”, “Extremum” (Max), and “Restrictions”; the objective functions and constraint formulas were lost in conversion.)


Classic unconstrained optimization methods

Introduction

As is known, the classical unconstrained optimization problem has the form: f(x) → min (max), x ∈ Rn.

There are analytical and numerical methods for solving these problems.

First of all, let us recall the analytical methods for solving the unconstrained optimization problem.

Unconstrained optimization methods occupy a significant place in mathematical programming courses. This is due to their direct use in solving a number of optimization problems, as well as in the implementation of methods for solving a significant part of constrained optimization (mathematical programming) problems.

1. Necessary conditions for a local minimum (maximum) point

Let the point x* = (x1*, …, xn*) provide a local minimum of the function f(x1, …, xn). Then at this point the increment of the function is non-negative:

Δf = f(x* + Δx) − f(x*) ≥ 0. (1)

Let us find Δf using the Taylor series expansion of the function in the neighborhood of x*:

Δf = Σi (∂f/∂xi)(x*)·Δxi + R2, (4)

where R2 is the sum of the terms of the series whose order with respect to the increments is two and higher.

From (4) it clearly follows that for sufficiently small increments the sign of Δf is determined by its linear part. Suppose some derivative ∂f/∂xi(x*) is positive. Choosing the increment Δxi of the opposite sign (with the remaining increments equal to zero), we obtain Δf < 0, which contradicts (1). So indeed ∂f/∂xi(x*) = 0.

Reasoning similarly for the other variables, we obtain the necessary condition for local minimum points of a function of many variables:

∂f/∂xi(x*) = 0, i = 1, …, n. (8)

It is easy to prove that for a local maximum point the necessary conditions are exactly the same as for a local minimum point, i.e. conditions (8); the proof only yields the opposite inequality Δf ≤ 0, the condition of non-positive increment of the function in the vicinity of a local maximum.

The obtained necessary conditions do not answer the question of whether the stationary point is a minimum point or a maximum point. The answer can be obtained by studying the sufficient conditions, which involve the matrix of second derivatives of the objective function.

2. Sufficient conditions for a local minimum (maximum) point

Let us expand the function in a neighborhood of the stationary point x* in a Taylor series up to quadratic terms:

Δf = Σi (∂f/∂xi)(x*)·Δxi + ½ Σi Σj (∂²f/∂xi∂xj)(x*)·Δxi·Δxj. (1)

Expansion (1) can be written more briefly using the concepts of the scalar product of vectors and the vector–matrix product:

Δf = (∇f(x*), Δx) + ½ (Δx, H(x*)·Δx), (1′)

where H(x*) is the matrix of second derivatives of the objective function with respect to the corresponding variables.

Taking into account the necessary conditions ∇f(x*) = 0, the increment of the function based on (1′) can be written as:

Δf = ½ (Δx, H(x*)·Δx). (3)

The quadratic form on the right-hand side of (3) is called the differential quadratic form (DQF).

If the DQF is positive definite, then the stationary point is a local minimum point.

If the DQF and the matrix representing it are negative definite, then the stationary point is a local maximum point.

So, the necessary and sufficient condition for a local minimum point is: ∇f(x*) = 0 (the necessary conditions) together with positive definiteness of the DQF (the sufficient condition). Accordingly, the necessary and sufficient condition for a local maximum is ∇f(x*) = 0 together with negative definiteness of the DQF.

Let us recall the criterion that allows us to determine whether a quadratic form and the matrix representing it are positive definite or negative definite.

3. Sylvester criterion

This criterion answers the question of whether a quadratic form and the matrix representing it are positive definite or negative definite. The matrix of second derivatives H is called the Hessian matrix; denote its leading principal determinants (minors) by Δ1, Δ2, …, Δn.

The Hessian matrix and the DQF that it represents are positive definite if all the leading principal determinants of the Hessian matrix are positive (i.e., the sign scheme +, +, …, + holds).

If the leading principal determinants follow the alternating sign scheme −, +, −, …, then the matrix and the DQF are negative definite.
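A small sketch of this check in Python (the example Hessian is illustrative):

```python
import numpy as np

def definiteness(H):
    """Classify a symmetric matrix by the Sylvester criterion,
    using its leading principal minors."""
    minors = [np.linalg.det(H[:k, :k]) for k in range(1, H.shape[0] + 1)]
    if all(m > 0 for m in minors):
        return "positive definite (local minimum)"
    if all(m * (-1) ** k > 0 for k, m in enumerate(minors, start=1)):
        return "negative definite (local maximum)"
    return "indefinite or degenerate (further analysis needed)"

H = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # example Hessian
print(definiteness(H))           # -> positive definite (local minimum)
```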

4. Euler's method - a classical method for solving unconstrained optimization problems

This method is based on the necessary and sufficient conditions studied in Sections 1–3; it is applicable to finding local extrema of continuously differentiable functions only.

The algorithm for this method is quite simple:

1) using the necessary conditions, we form a system of equations, nonlinear in the general case. Note that in general it is impossible to solve this system analytically; numerical methods for solving systems of nonlinear equations must be applied. For this reason, the Euler method is an analytical-numerical method. By solving the indicated system of equations, we find the coordinates of the stationary point;

2) we study the DQF and the Hessian matrix that represents it. Using the Sylvester criterion, we determine whether the stationary point is a minimum point or a maximum point;

3) calculate the value of the objective function at the extreme point

Using the Euler method, solve the following unconstrained optimization problem: find the four stationary points of a function of the given form, determine the nature of these points (whether they are minimum, maximum, or saddle points), and construct a graphical display of the function in space and on the plane (using level lines).
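The function of the task was lost in conversion; the sketch below applies the Euler method with SymPy to a stand-in function that also has four stationary points:

```python
import sympy as sp

x, y = sp.symbols("x y")
# Stand-in function (the original was lost) with four stationary points
f = x**3 - 3*x + y**3 - 3*y

grad = [sp.diff(f, v) for v in (x, y)]            # necessary conditions
stationary = sp.solve(grad, (x, y), dict=True)    # stationary points
H = sp.hessian(f, (x, y))

for pt in stationary:
    Hp = H.subs(pt)
    d1, d2 = Hp[0, 0], Hp.det()                   # leading principal minors
    if d1 > 0 and d2 > 0:
        kind = "local minimum"
    elif d1 < 0 and d2 > 0:
        kind = "local maximum"
    else:
        kind = "saddle point"
    print(pt, kind, f.subs(pt))
```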

5. Classical constrained optimization problem and methods for solving it: the elimination method and the Lagrange multiplier method (MML)

As is known, the classical constrained optimization problem has the form:

A graph explaining the formulation of problem (1), (2) in space.

Level line equations

So, the ODR (region of feasible solutions) in the problem under consideration is a certain curve represented by equation (2′).

As can be seen from the figure, the point is the point of the unconditional global maximum; point - a point of a conditional (relative) local minimum; point - a point of conditional (relative) local maximum.

Problem (1"), (2") can be solved by the elimination (substitution) method by solving equation (2") with respect to the variable and substituting the found solution (1").

The original problem (1"), (2") is thus transformed into a problem of unconditional optimization of a function, which can be easily solved by the Euler method.

Elimination (substitution) method.

Let the objective function depend on the variables:

Some of them are called dependent variables (or state variables); accordingly, one can introduce the vector of state variables.

The remaining variables are called independent decision variables.

Accordingly, we can talk about a column vector:

and vector.

In a classic constrained optimization problem:

System (2), in accordance with the elimination (substitution) method, must be solved with respect to the dependent variables (state variables), i.e. the following expressions for the dependent variables should be obtained:

Is the system of equations (2) always solvable with respect to the dependent variables? Not always: this is possible only when the determinant called the Jacobian, whose elements have the indicated form, is not equal to zero (see the corresponding theorem in the mathematical analysis course).

As can be seen, the functions must, first, be continuously differentiable; secondly, the elements of the determinant must be calculated at the stationary point of the objective function.

Substituting from (3) into the objective function (1), we have:

The function under study can be brought to an extremum by Euler's method - the method of unconditional optimization of a continuously differentiable function.

So, the elimination (substitution) method transforms the classical constrained optimization problem into a problem of unconstrained optimization of a function of the independent variables, under condition (4), which makes it possible to obtain the system of expressions (3).

The disadvantage of the elimination method is the difficulty, and sometimes the impossibility, of obtaining the system of expressions (3). The MML is free from this drawback but still requires condition (4) to be fulfilled.

5.2. Lagrange multiplier method. Necessary conditions in a classical constrained optimization problem. Lagrange function

MML allows the original problem of classical constrained optimization:

to be converted into a problem of unconstrained optimization of a specially constructed function – the Lagrange function:

where λ1, …, λm are the Lagrange multipliers;

As can be seen, the Lagrange function is the sum of the original objective function and the “weighted” sum of the functions representing the constraints (2) of the original problem.

Let the point be the unconstrained extremum point of the function; then, as is known, the total differential of the function vanishes at this point. (5)

Using the partition into dependent and independent variables, let us present (5) in expanded form:

From (2) it obviously follows a system of equations of the form:

The result of calculating the total differential for each of the functions

Let us present (6) in “expanded” form, using the concept of dependent and independent variables:

Note that (6"), unlike (5"), is a system consisting of equations.

Let us multiply each j-th equation of system (6′) by the corresponding j-th Lagrange multiplier, add them together and to equation (5′), and obtain the expression:

Let us choose the Lagrange multipliers in such a way that the expression in square brackets under the sign of the first sum (in other words, the coefficients of the differentials of the dependent variables) vanishes.

“Choosing” the Lagrange multipliers in this manner means solving a certain system of equations for λ1, …, λm.

The structure of this system is easily obtained by equating the expression in square brackets under the first sum sign to zero:

Let us rewrite (8) in the form

System (8′) is a system of linear equations with respect to the unknowns λ1, …, λm. The system is solvable if the Jacobian is nonzero (that is why, as in the elimination method, condition (9) must be satisfied in the case under consideration).

Since in the key expression (7) the first sum is equal to zero, it is easy to understand that the second sum will also be equal to zero, i.e. the following system of equations takes place:

System of equations (8) consists of m equations, and system of equations (10) of n − m equations; in total the two systems contain n equations, while the unknowns number n + m (the n coordinates of the point and the m Lagrange multipliers).

The missing m equations are supplied by the system of constraint equations (2). So, there is a system of n + m equations for finding the n + m unknowns:

The obtained result - the system of equations (11) - constitutes the main content of the MML.

It is easy to understand that the system of equations (11) can be obtained very simply by introducing into consideration a specially constructed Lagrange function (3).

Indeed:

So, the system of equations (11) can be represented as (using (12), (13)):

System of equations (14) represents the necessary condition in the classical constrained optimization problem.

The vector value found as a result of solving this system is called a conditionally stationary point.

In order to find out the nature of a conditionally stationary point, it is necessary to use sufficient conditions.

5.3 Sufficient conditions in the classical constrained optimization problem. MML algorithm

These conditions make it possible to find out whether a conditionally stationary point is a point of a local conditional minimum, or a point of a local conditional maximum.

Similarly to the way sufficient conditions were obtained in the unconstrained extremum problem, sufficient conditions can be obtained, in a relatively simple manner, in the classical constrained optimization problem.

The result of this study: if the differential quadratic form built on the Hessian matrix of the Lagrange function is positive definite at the conditionally stationary point, that point is a point of local conditional minimum; if negative definite, a point of local conditional maximum.

The dimension of the Hessian matrix can be reduced by using the condition that the Jacobian is nonzero. Under this condition, the dependent variables can be expressed through the independent variables; the Hessian matrix is then of smaller dimension, i.e. we deal with a matrix whose elements are taken with respect to the independent variables only;

then the sufficient conditions will look like:

Local conditional minimum point.

Local conditional maximum point.

MML algorithm:

1) compose the Lagrange function;

2) using the necessary conditions, we form a system of equations:

3) from the solution of this system we find a point;

4) using sufficient conditions, we determine whether the point is a point of local conditional minimum or maximum, then we find
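A minimal sketch of this algorithm with SymPy, on an illustrative problem (extremum of f = x·y subject to x + y = 10):

```python
import sympy as sp

x, y, lam = sp.symbols("x y lam")
f = x * y                               # illustrative objective
phi = x + y - 10                        # illustrative constraint

L = f + lam * phi                       # 1) Lagrange function
eqs = [sp.diff(L, v) for v in (x, y, lam)]  # 2) necessary conditions
sol = sp.solve(eqs, (x, y, lam), dict=True) # 3) conditionally stationary point
print(sol)                              # -> x = 5, y = 5, lam = -5
print(f.subs(sol[0]))                   # 4) conditional extremum value: 25
```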

1.5.4. Graphic-analytical method for solving the classical constrained optimization problem in space and its modification for the simplest LP and NLP problems

This method uses a geometric interpretation of the classical constrained optimization problem and is based on a number of important facts inherent in this problem.

B is the common tangent for the function and the function representing the ODR.

As can be seen from the figure, a point is a point of unconditional minimum, a point is a point of conditional local minimum, a point is a point of conditional local maximum.

Let us prove that at the points of conditional local extrema the constraint curve and the corresponding level lines touch.

From the mathematical analysis course it is known that at the point of tangency the following condition is satisfied:

where one is the angular coefficient (slope) of the tangent to the corresponding level line, and the other is the angular coefficient of the tangent to the constraint curve.

The expressions for these coefficients are known from mathematical analysis:

These coefficients are indeed equal; this is exactly what the necessary conditions assert.

The above allows us to formulate the GFA algorithm for solving the classical constrained optimization problem:

1) build a family of level lines of the objective function;

2) construct the ODR using the constraint equation;

3) to determine the direction of increase of the objective function, find the gradient and identify the nature of the extreme points;

4) study the points of tangency of the level lines with the constraint curve, finding from the system of equations the coordinates of the conditionally stationary points – the local conditional minima and local conditional maxima;

5) calculate the values of the objective function at these points.

It should be especially noted that the main stages of the GFA method for solving the classical constrained optimization problem coincide with the main stages of the GFA method for solving LP and NLP problems; the only difference is in the ODR, as well as in the location of the extreme points in the ODR (for example, in LP problems these points are necessarily located at the vertices of the convex polygon representing the ODR).

5.5. About the practical meaning of MML

Let's imagine the classic constrained optimization problem as:

where the constraint right-hand sides are variable quantities representing resources in applied technical and economic problems.

In space, problem (1), (2) takes the form:

where is a variable quantity. (2")

Let x* be the conditional extremum point:

As the resource changes, the conditional extremum point changes.

The value of the objective function will change accordingly:

Let's calculate the derivative:

From (3), (4), (5). (6)

Substitute (5") into (3) and get:

From (6) it follows that the Lagrange multiplier characterizes the “reaction” of the value of the objective function to changes in the parameter.

In the general case (6) takes the form:

From (6), (7): the multiplier characterizes the change in the objective function when the corresponding resource changes by one unit.

If the objective is maximum profit or minimum cost, then the multiplier characterizes the change in this value when the resource changes by one unit.

5.6. The classical problem of constrained optimization, as the problem of finding the saddle point of the Lagrange function:

A pair (x*, λ*) is called a saddle point of the Lagrange function if for all x and λ the inequality L(x*, λ) ≤ L(x*, λ*) ≤ L(x, λ*) holds. (1)

It is obvious that from (1). (2)

From (2), that. (3)

As you can see, system (3) contains equations similar to those equations that represent the necessary condition in the classical constrained optimization problem:

where L is the Lagrange function.

In connection with the analogy of systems of equations (3) and (4), the classical problem of constrained optimization can be considered as the problem of finding the saddle point of the Lagrange function.


Problem 1. Find

where x = (x1, …, xp) ∈ Ep

This problem comes down to solving the system of equations

and studying the value of the second differential

at the points (α1, α2, …, αn) that solve equations (7.3).

If the quadratic form (7.4) is negative definite at such a point, the function reaches its maximum value there; if positive definite, its minimum value.

Example:

The system of equations has solutions:

Point (-1; 3.0) is the maximum point, and point (1; 3.2) is the minimum point.

Task 2. Find

under conditions:

This problem 2 is solved by the Lagrange multiplier method, for which a solution of the system of (m + n) equations is found:

Example 2. Find the sides of the rectangle of maximum area inscribed in a circle of radius r.

The area A of the rectangle can be written as A = 4xy, where x and y are the half-sides of the rectangle, subject to the constraint that the vertex (x, y) lies on the circle: x² + y² = r².
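A sketch of this example with SymPy, assuming a circle of radius r and half-sides x, y as above:

```python
import sympy as sp

x, y, r = sp.symbols("x y r", positive=True)
lam = sp.Symbol("lam")
A = 4 * x * y                     # area; x, y are the half-sides
phi = x**2 + y**2 - r**2          # the vertex (x, y) lies on the circle

L = A + lam * phi                 # Lagrange function
sol = sp.solve([sp.diff(L, x), sp.diff(L, y), phi], (x, y, lam), dict=True)
print(sol)   # -> x = y = r/sqrt(2): the rectangle of maximum area is a square
```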

Task 3. Find under the conditions:

This problem covers a wide range of settings determined by the functions f and φi. If they are linear, the problem is a linear programming problem.

Task 3a.

under conditions

It is solved using the simplex method, which, using the apparatus of linear algebra, carries out a targeted search of the vertices of the polyhedron defined by (7.13).

The simplex method consists of two stages.

Stage 1. Finding a reference solution x(0). The reference solution is one of the points of the polyhedron (7.13).

Stage 2. Finding the optimal solution. It is found by sequentially enumerating the vertices of the polyhedron (7.13), in which the value of the objective function z does not decrease at each step, that is:

A special case of a linear programming problem is the so-called transport problem.

Transport problem. Let there be warehouses at points a1, a2, …, an, storing goods in quantities x1, x2, …, xn respectively. At points b1, b2, …, bm there are consumers who must be supplied with these goods in quantities y1, y2, …, ym respectively. Denote by cij the cost of transporting a unit of cargo between points ai and bj.

We examine the operation of transporting goods to consumers in quantities sufficient to satisfy their needs. Denote by xij the quantity of goods transported from point ai to point bj.

In order to satisfy consumer needs, it is necessary that the values xij satisfy the conditions:

At the same time, it is impossible to export more products from the warehouse than are available there. This means that the required quantities must satisfy the system of inequalities:

There are countless ways to draw up a transportation plan satisfying conditions (7.14), (7.15), i.e. meeting consumer needs. In order for the operations researcher to select a certain optimal solution, i.e. to assign specific xij, some selection rule must be formulated, determined by a criterion that reflects our subjective idea of the goal.

The problem of the criterion is solved independently of the study of the operation - the criterion must be set by the operating party. In the problem under consideration, one of the possible criteria will be the cost of transportation. It amounts to

Then the transport problem is formulated as a linear programming problem: determine the values xij ≥ 0 that satisfy constraints (7.14), (7.15) and minimize function (7.16). Constraint (7.15) is the balance condition; condition (7.14) can be called the goal of the operation, because the meaning of the operation is to satisfy consumer needs.

Specifying two conditions essentially constitutes a model of the operation. The implementation of the operation will depend on the criterion by which the goal of the operation will be achieved. A criterion can appear in various roles. It can act both as a way of formalizing a goal and as a principle for selecting actions from among those that are permissible, i.e. satisfying the restrictions.

One of the well-known methods for solving the transport problem is the potential method, the scheme of which is as follows.

At the first stage of solving the problem, an initial transportation plan is drawn up that satisfies constraints (7.14), (7.15). If

(i.e. the total needs do not coincide with the total stocks of products in warehouses), then a fictitious point of consumption is introduced into consideration or fictitious warehouse

with transportation costs equal to zero. For the new problem, the total quantity of goods in the warehouses coincides with the total demand. Then, by some method (for example, the smallest-element or northwest-corner method), the initial plan is found. At the next step, a system of special characteristics, potentials, is constructed for the resulting plan. A necessary and sufficient condition for an optimal plan is its potentiality. The plan refinement procedure is repeated until the plan becomes potential (optimal).
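For comparison, a balanced transport problem can also be solved directly as a linear program; the data below are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: 2 warehouses, 3 consumers (balanced: 70 = 70)
supply = [30.0, 40.0]
demand = [20.0, 25.0, 25.0]
cost = np.array([[2.0, 3.0, 1.0],
                 [5.0, 4.0, 8.0]])    # c[i][j]: unit cost a_i -> b_j

m, n = cost.shape
# Equality constraints: each warehouse ships out its whole stock,
# each consumer receives exactly its demand.
A_eq, b_eq = [], []
for i in range(m):                    # row sums = supply
    row = np.zeros(m * n); row[i*n:(i+1)*n] = 1.0
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                    # column sums = demand
    col = np.zeros(m * n); col[j::n] = 1.0
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (m*n))
print(res.x.reshape(m, n), res.fun)   # optimal plan and minimum cost
```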

Task 3b. In the general case, problem (7.10)–(7.11) is called a nonlinear programming problem. Let us consider it in the form

under conditions

To solve this problem, so-called relaxation methods are used. A sequence of points {xk} is called relaxational if f(xk+1) < f(xk), k = 0, 1, 2, …

Descent methods (general scheme). Descent methods for solving the unconstrained optimization problem (7.17) differ either in the choice of the descent direction or in the manner of movement along the descent direction. They consist of the following procedure for constructing the sequence {xk}.

An arbitrary point x0 is chosen as the initial approximation. Successive approximations are constructed according to the following scheme:

  • at the point xk a descent direction sk is selected;
  • the (k + 1)-th approximation is found by the formula

xk+1 = xk − βk·sk,

where the quantity βk is any number satisfying the inequality

0 ≤ βk ≤ λk·βk*, βk* = arg min f(xk − β·sk),

and λk is any number such that 0 < λk ≤ 1.

In most descent methods, the value λk is chosen equal to one. Thus, to determine βk*, a one-dimensional minimization problem must be solved.

Gradient descent method. Since the antigradient −f′(xk) indicates the direction of fastest decrease of the function f(x), it is natural to move from the point xk in this direction. The descent method in which sk = f′(xk) is called the gradient descent method. If λk = 1, the relaxation process is called the steepest descent method.
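A minimal gradient descent sketch with a constant step (the function and step size are illustrative):

```python
import numpy as np

def f(x):
    return (x[0] - 1.0)**2 + 10.0 * (x[1] + 2.0)**2   # illustrative function

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])

x = np.array([0.0, 0.0])
beta = 0.04                      # constant step, small enough to converge here
for _ in range(500):
    g = grad_f(x)
    if np.linalg.norm(g) < 1e-8: # stationarity reached
        break
    x = x - beta * g             # step along the antigradient
print(x)                         # -> approximately [1, -2]
```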

Method of conjugate directions. In linear algebra this method is known as the conjugate gradient method for solving systems of linear algebraic equations Ax = b, and hence as a method for minimizing the quadratic function f(x) = (Ax − b)².

Method diagram:

If tk = 0, this scheme turns into the steepest descent scheme. An appropriate choice of the value tk guarantees convergence of the conjugate direction method at a rate of the same order as in gradient descent methods, and ensures that the number of iterations in minimizing a quadratic function is finite (for example, at most n iterations in n dimensions).

Coordinate descent. At each iteration, the direction along one of the coordinate axes is selected as the descent direction sk. The method has a convergence rate of the minimization process of order O(1/m), and it depends significantly on the dimension of the space.

Method diagram:

where ej is the coordinate vector,

If at the point xk there is information about the behavior of the gradient of the function f(x) (for example, the signs of its components), then the coordinate vector ej can be taken as the descent direction sk. In this case, the convergence rate of the method is n times lower than that of gradient descent.

At the initial stage of the minimization process, the cyclic coordinate descent method can be used: first the descent is carried out in the direction e1, then e2, and so on up to en, after which the whole cycle repeats. More promising is coordinate descent in which the descent directions are chosen randomly. With this approach to choosing the direction, there are a priori estimates that, with probability tending to one, guarantee convergence of the process at a rate of order O(1/m).

Method diagram:

At every step of the process, a number j(k) is randomly selected from the n numbers {1, 2, …, n}, and the unit coordinate vector e_j(k) is taken as sk, after which the descent step is performed:
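The update formula itself was lost in conversion; below is a minimal sketch of random coordinate descent with a simple step-halving rule (the function and constants are illustrative):

```python
import numpy as np

def f(x):
    return (x[0] - 1.0)**2 + (x[1] + 2.0)**2 + x[0]*x[1]   # illustrative

rng = np.random.default_rng(0)
x = np.zeros(2)
step = 1.0
for _ in range(2000):
    j = rng.integers(len(x))          # random coordinate number j(k)
    e = np.zeros(len(x)); e[j] = 1.0  # unit coordinate vector
    # one-dimensional descent along +-e with crude step adaptation
    if f(x + step * e) < f(x):
        x = x + step * e
    elif f(x - step * e) < f(x):
        x = x - step * e
    else:
        step *= 0.5                   # no progress along this axis: shrink step
print(x, f(x))                        # approaches the minimum
```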


Random descent method. A random point sk is selected on the n-dimensional unit sphere centered at the origin, obeying a uniform distribution on this sphere; then, from the element xk calculated at the k-th step of the process, xk+1 is determined:


The convergence rate of the random descent method is n times lower than that of the gradient descent method, but n times higher than that of the random coordinate descent method. The considered descent methods are also applicable to functions that are not necessarily convex, and guarantee convergence under very mild restrictions on them (such as the absence of local minima).
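A minimal sketch of the random descent scheme (a normalized Gaussian vector gives a uniform direction on the unit sphere; the function and step schedule are illustrative):

```python
import numpy as np

def f(x):
    return (x[0] - 1.0)**2 + 10.0 * (x[1] + 2.0)**2   # illustrative

rng = np.random.default_rng(1)
x = np.zeros(2)
step = 1.0
for _ in range(5000):
    s = rng.normal(size=x.size)
    s /= np.linalg.norm(s)          # uniform direction on the unit sphere
    if f(x - step * s) < f(x):      # accept only descending steps
        x = x - step * s
    else:
        step *= 0.99                # otherwise gradually shrink the step
print(x, f(x))                      # approaches the minimum at (1, -2)
```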

Relaxation methods of mathematical programming. Let us return to problem 3b ((7.17)–(7.18)):

under conditions

In optimization problems with constraints, the choice of the descent direction involves the need to constantly check that the new value xk+1 should, like the previous one xk, satisfy the system of constraints X.

Conditional gradient method. In this method, the idea of choosing the descent direction is as follows: at the point xk the function f(x) is linearized,

by constructing the linear function fL(x) = f(xk) + (f′(xk), x − xk); then, minimizing fL(x) on the set X, a point yk is found. After this, sk = yk − xk is taken and the descent xk+1 = xk − βk(xk − yk) is performed along this direction, with βk chosen so that xk+1 ∈ X.

Thus, to find the direction sk, the problem of minimizing a linear function on the set X must be solved. If X in turn is specified by linear constraints, this becomes a linear programming problem.

Method of possible directions. The idea of the method: among all possible directions at the point xk, the one along which the function f(x) decreases fastest is chosen, and the descent is performed along this direction.

A direction s at a point x ∈ X is called possible if there exists a number β0 > 0 such that x − βs ∈ X for all 0 < β ≤ β0. To find a possible direction, a linear programming problem, or the simplest quadratic programming problem, must be solved under the conditions:

Let σk and sk be the solution to this problem. Condition (7.25) guarantees that the direction sk is possible. Condition (7.26) ensures the maximum value of (f′(xk), s), i.e. among all possible directions s, the direction sk provides the fastest decrease of the function f(x). Condition (7.27) eliminates unboundedness of the solution. The possible directions method is resistant to possible computational errors. However, the rate of its convergence is difficult to estimate in the general case, and this problem remains unsolved.

Random search method. The implementation of the minimization methods described earlier is generally very labor-intensive, except in the simplest cases where the set of constraints has a simple geometric structure (for example, a multidimensional parallelepiped). In general, the random search method, where the descent direction is chosen randomly, can be very promising. There is a significant loss in convergence rate, but the simplicity of choosing the direction can compensate for these losses in terms of the total labor costs of solving the minimization problem.

Method diagram:

on the n-dimensional unit sphere centered at the origin, a random point rk is selected, subject to a uniform distribution on this sphere, and then the descent direction sk is determined from the conditions:

An initial approximation x0 ∈ X is chosen. From the point xk calculated at each iteration, the (k + 1)-th point xk+1 is constructed:

where βk is any number satisfying the inequality:

The convergence of this method is proved under very mild restrictions on the function f (convexity) and the set of constraints X (convexity and closedness).

The most acceptable decision option made at the managerial level regarding any issue is considered optimal, and the process of searching for it is called optimization.

The interdependence and complexity of the organizational, socio-economic, technical and other aspects of production management mean that making a management decision now affects a large number of closely intertwined factors of different kinds, which makes it impossible to analyze each of them separately using traditional analytical methods.

Many factors that are decisive in the decision-making process cannot, by their nature, be quantified, while others are practically unchangeable. In this regard, there arose a need to develop special methods capable of ensuring the selection of important management decisions within the framework of complex organizational, economic and technical problems (expert assessments, operations research, optimization methods, etc.).

Operations research methods are used to find optimal solutions in such areas of management as the organization of production and transportation processes, planning of large-scale production, and material and technical supply.

Methods for optimizing solutions involve research based on comparing numerical estimates of a number of factors whose analysis cannot be carried out by traditional methods. The optimal solution is the best among the possible options for the economic system as a whole; the option most acceptable with respect to individual elements of the system is called suboptimal.

The essence of operations research methods

As mentioned earlier, these methods form the basis of methods for optimizing management decisions. They rest on deterministic and probabilistic mathematical models representing the process, type of activity or system under study. Such models provide a quantitative characteristic of the corresponding problem and serve as the basis for making important management decisions in the process of searching for the optimal option.

A list of issues that play a significant role for direct production managers and that are resolved during the use of the methods under consideration:

  • the degree of validity of the chosen decision options;
  • how much better are they than the alternatives;
  • degree of consideration of determining factors;
  • what is the criterion for the optimality of the selected solutions.

These methods of decision optimization (managerial) are aimed at finding optimal solutions for as many firms, companies or their divisions as possible. They are based on existing achievements in statistical, mathematical and economic disciplines (game theory, queuing theory, graph theory, optimal programming, mathematical statistics).

Expert assessment methods

These methods for optimizing management decisions are used when the problem is partially or completely not subject to formalization, and its solution cannot be found using mathematical methods.

Expertise is the study of complex special issues at the stage of developing a specific management decision by persons with a special knowledge base and impressive experience, in order to obtain conclusions, recommendations, opinions and assessments. In the process of expert research, the latest achievements of science and technology within the expert's specialization are used.

The considered methods for optimizing a number of management decisions (expert assessments) are effective in solving the following management tasks in the field of production:

  1. The study of complex processes, phenomena, situations, systems that are characterized by informal, qualitative characteristics.
  2. Ranking and determination, according to a given criterion, of significant factors that are decisive regarding the functioning and development of the production system.
  3. The optimization methods under consideration are particularly effective in predicting trends in the development of a production system, as well as its interaction with the external environment.
  4. Increasing the reliability of expert assessment of mainly target functions that are quantitative and qualitative in nature, by averaging the opinions of qualified specialists.

And these are just some methods for optimizing a number of management decisions (expert assessment).

Classification of the methods under consideration

Methods for solving optimization problems, based on the number of parameters, can be divided into:

  • One-dimensional optimization methods.
  • Multidimensional optimization methods.

They are also called "numerical optimization methods". To be precise, they are algorithms for searching for the extremum of a function.

Based on the use of derivatives, the methods are:

  • direct optimization methods (zero order);
  • gradient methods (1st order);
  • 2nd order methods, etc.

Most multidimensional optimization methods reduce the problem to a sequence of problems of the first group (one-dimensional optimization).

One-dimensional optimization methods

Any numerical optimization methods are based on the approximate or exact calculation of such characteristics as the values ​​of the objective function and functions that define the admissible set and their derivatives. Thus, for each individual task, the question regarding the choice of characteristics for calculation can be resolved depending on the existing properties of the function under consideration, the available capabilities and limitations in storing and processing information.

There are the following methods for solving optimization problems (one-dimensional):

  • Fibonacci method;
  • dichotomy method;
  • golden ratio method;
  • step doubling method.

Fibonacci method

First, set the coordinate of a point x in the interval as the number equal to the ratio of the difference (x − a) to the difference (b − a). Then a has coordinate 0 relative to the interval, b has coordinate 1, and the midpoint has coordinate ½.

If we assume that F0 and F1 are both equal to 1, then F2 = 2, F3 = 3, and so on, with Fn = Fn−1 + Fn−2. The Fn are the Fibonacci numbers, and Fibonacci search is the optimal strategy for so-called sequential search for the extremum precisely because it is closely related to them.

As part of the optimal strategy, it is customary to choose xn−1 = Fn−2 : Fn, xn = Fn−1 : Fn. Either of the two subintervals can become the narrowed uncertainty interval, and the inherited point then has one of the two corresponding relative coordinates in the new interval. Next, a point xn−2 with one of these coordinates relative to the new interval is taken. Reusing F(xn−2), the function value inherited from the previous interval, makes it possible to reduce the uncertainty interval while inheriting one function value.

At the final step, the uncertainty interval contains the midpoint inherited from the previous step. The point x1 is set at relative coordinate ½ + ε, and the final uncertainty interval is either the left part or [½, 1] relative to the previous one.

At the 1st step, the length of this interval was reduced to Fn−1 : Fn (from one). At the following steps, the lengths of the corresponding intervals are reduced by the factors Fn−2 : Fn−1, Fn−3 : Fn−2, …, F2 : F3, F1 : F2 (1 + 2ε). So the length of the final interval is (1 + 2ε) : Fn.

If we neglect ε, then asymptotically 1 : Fn equals rⁿ as n → ∞, where r = (√5 − 1) : 2 ≈ 0.6180.

It is worth noting that asymptotically, for large n, each subsequent step of the Fibonacci search narrows the interval under consideration by the above coefficient. This result should be compared with 0.5, the narrowing coefficient of the uncertainty interval in the bisection method for finding the zero of a function.
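A sketch of Fibonacci search for the minimum of a unimodal function (the ε-shift of the last point is omitted for simplicity; the test function is illustrative):

```python
def fibonacci_search(f, a, b, n):
    """Minimize a unimodal f on [a, b] using n Fibonacci steps."""
    F = [1, 1]
    while len(F) <= n:                   # build F[0..n]
        F.append(F[-1] + F[-2])
    x1 = a + (b - a) * F[n - 2] / F[n]   # relative coordinate F(n-2)/F(n)
    x2 = a + (b - a) * F[n - 1] / F[n]   # relative coordinate F(n-1)/F(n)
    f1, f2 = f(x1), f(x2)
    for k in range(n - 2, 0, -1):
        if f1 < f2:                      # minimum is in [a, x2]
            b, x2, f2 = x2, x1, f1       # inherit one point and its value
            x1 = a + (b - a) * F[k - 1] / F[k + 1]
            f1 = f(x1)
        else:                            # minimum is in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + (b - a) * F[k] / F[k + 1]
            f2 = f(x2)
    return (a + b) / 2

print(fibonacci_search(lambda x: (x - 2.0)**2, 0.0, 5.0, 20))  # ~2.0
```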

Dichotomy method

Given an objective function, suppose its extremum must be found on the interval (a; b). The abscissa axis is divided into four equal parts, and the value of the function is determined at the 5 resulting points; the minimum among them is then selected. The extremum of the function must lie within the interval (a′; b′) adjacent to the minimum point. The search boundaries are thus narrowed by a factor of 2 (and if the minimum lies at point a or b, by a factor of 4). The new interval is again divided into four equal segments. Since the values of the function at three of the points were determined at the previous stage, only two new evaluations of the objective function are needed at each step.
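A sketch of this quartering scheme (for simplicity it recomputes all five values instead of reusing the three already known):

```python
def quarter_search(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b] by repeatedly quartering the
    interval: evaluate at 5 equidistant points and keep the pair of
    subintervals adjacent to the smallest value."""
    while b - a > tol:
        xs = [a + i * (b - a) / 4 for i in range(5)]   # 5 equidistant points
        values = [f(x) for x in xs]
        i = values.index(min(values))                  # index of the minimum
        a = xs[max(i - 1, 0)]                          # neighbours of the
        b = xs[min(i + 1, 4)]                          # minimum point
    return (a + b) / 2

print(quarter_search(lambda x: (x - 2.0)**2, 0.0, 5.0))  # ~2.0
```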

Golden ratio method

For large values of n, the coordinates of the points xn and xn−1 approach 1 − r ≈ 0.3820 and r ≈ 0.6180. Starting from these values gives a strategy very close to the optimal one.

If we assume that F(0.3820) > F(0.6180), then the interval [0.3820, 1] is selected. But since 0.6180 · 0.6180 ≈ 0.3820 ≈ xn−1, the value of F is already known at the corresponding point of the new interval. Consequently, at each stage starting from the 2nd, only one new calculation of the objective function is needed, and each step reduces the length of the interval by a factor of 0.6180.

Unlike Fibonacci search, this method does not require fixing the number n before starting the search.

The “golden section” of a segment (a; b) is a division by a point c at which the ratio of the length of the whole segment to the larger part (a; c) equals the ratio of the larger part to the smaller one, that is, (a; c) to (c; b). It is easy to see that r is determined by the formula above. Consequently, for large n the Fibonacci method turns into this one.
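A sketch of the golden-section search; note that one of the two interior points is always inherited, so each step costs one new function evaluation:

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b]; each step shrinks the interval
    by the factor r = (sqrt(5) - 1) / 2 ~ 0.6180."""
    r = (math.sqrt(5.0) - 1.0) / 2.0
    x1 = a + (1.0 - r) * (b - a)        # relative coordinate 0.3820
    x2 = a + r * (b - a)                # relative coordinate 0.6180
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                     # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1      # inherit the interior point
            x1 = a + (1.0 - r) * (b - a)
            f1 = f(x1)
        else:                           # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
    return (a + b) / 2.0

print(golden_section(lambda x: (x - 2.0)**2, 0.0, 5.0))  # ~2.0
```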

Step doubling method

The essence of the method is searching for the direction of decrease of the objective function and moving in this direction, with a gradually increasing step, as long as the search is successful.

First, we specify the initial coordinate M0 of the function F(M), the minimum step value h0, and the search direction. Then we evaluate the function at the point M0, take a step, and find the value of the function at the new point.

If the new function value is less than the value at the previous step, the next step is taken in the same direction, after first doubling it. If the value is greater than the previous one, the search direction is reversed and movement starts in the new direction with steps h0. The presented algorithm can be modified.
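A sketch of the step-doubling scheme for one variable (the accuracy achieved is of the order of the minimum step h0; the test function is illustrative):

```python
def step_doubling(f, x0, h0=0.1, direction=1.0, n_iter=100):
    """Move while f decreases, doubling the step on each success;
    on failure, reverse the direction and restart from step h0."""
    x, h = x0, h0
    fx = f(x)
    for _ in range(n_iter):
        x_new = x + direction * h
        f_new = f(x_new)
        if f_new < fx:            # success: keep going, twice as fast
            x, fx = x_new, f_new
            h *= 2.0
        else:                     # failure: turn around, minimum step again
            direction = -direction
            h = h0
    return x

print(step_doubling(lambda x: (x - 2.0)**2, x0=0.0))  # ~2.0 (within h0)
```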

Multidimensional optimization methods

The above-mentioned zero-order methods do not use the derivatives of the minimized function, which is why they can be effective when difficulties arise in calculating derivatives.

The group of 1st order methods is also called gradient methods, because to establish the search direction, the gradient of a given function is used - a vector, the components of which are the partial derivatives of the minimized function with respect to the corresponding optimized parameters.

The group of 2nd order methods uses second derivatives (their use is rather limited because of the difficulties of calculating them).

List of unconstrained optimization methods

When using multidimensional search without using derivatives, unconstrained optimization methods are as follows:

  • Hooke and Jeeves (carrying out two types of search: pattern search and exploratory search);
  • minimization over a regular simplex (searching for the minimum point of the function by comparing its values at the vertices of the simplex at each iteration);
  • cyclic coordinate descent (using the coordinate vectors as search directions);
  • Rosenbrock (based on the use of one-dimensional minimization);
  • minimization over a deformed simplex (a modification of the regular-simplex method that adds compression and stretching procedures).

In the situation of using derivatives in the process of multidimensional search, the method of steepest descent is distinguished (the most fundamental procedure for minimizing a differentiable function with several variables).

There are also methods that use conjugate directions (the Davidon-Fletcher-Powell method), whose essence is representing the search directions as Dj·grad(f(y)).

Classification of mathematical optimization methods

Conventionally, based on the dimension of functions (target), they are:

  • with 1 variable;
  • multidimensional.

Depending on the function (linear or nonlinear), there are a large number of mathematical methods aimed at finding an extremum to solve the problem.

Based on the criterion for using derivatives, mathematical optimization methods are divided into:

  • one-dimensional methods using the first derivative of the objective function;
  • multidimensional methods (the first derivative is a vector quantity, the gradient).

Based on the efficiency of the calculation, there are:

  • methods for fast calculation of extremum;
  • simplified calculation.

This is a conditional classification of the methods under consideration.

Business Process Optimization

Various methods can be used here, depending on the problems being solved. It is customary to distinguish the following methods for optimizing business processes:

  • elimination (reducing the levels of the existing process, eliminating the causes of interference and incoming control, reducing transport routes);
  • simplification (facilitated order processing, reduced complexity of the product structure, distribution of work);
  • standardization (use of special programs, methods, technologies, etc.);
  • acceleration (parallel engineering, stimulation, operational design of prototypes, automation);
  • change (changes in raw materials, technology, work methods, staffing, work systems, order volume, processing procedures);
  • ensuring interaction (in relation to organizational units, personnel, work system);
  • selection and inclusion (relative to necessary processes, components).

Tax optimization: methods

Russian legislation provides the taxpayer with very rich opportunities to reduce taxes, which is why it is customary to distinguish such methods aimed at minimizing them as general (classical) and special.

General tax optimization methods are as follows:

  • elaboration of the company’s accounting policy with the maximum possible use of the opportunities provided by Russian legislation (the write-off procedure for low-value items, the choice of the method for recognizing revenue from the sale of goods, etc.);
  • optimization through a contract (conclusion of preferential transactions, clear and competent use of wording, etc.);
  • application of various types of benefits and tax exemptions.

The second group of methods can also be used by all companies, but they still have a rather narrow scope of application. Special tax optimization methods are as follows:

  • replacement of relations (an operation that involves burdensome taxation is replaced by another, which allows one to achieve a similar goal, but at the same time use a preferential tax treatment).
  • division of relations (replacement of only part of a business transaction);
  • deferment of tax payment (postponement of the moment of appearance of the taxable object to another calendar period);
  • direct reduction of the object of taxation (getting rid of many taxable transactions or property without having a negative impact on the main economic activities of the company).
