## Vectors and Matrices (Appendix A)
Vectors and matrices are notational conveniences for dealing with systems of linear equations and inequalities. In particular, they are useful for compactly representing and discussing the linear programming problem:

Maximize $\sum_{j=1}^{n} c_j x_j$,

subject to:

$$\sum_{j=1}^{n} a_{ij} x_j \le b_i \quad (i = 1, 2, \ldots, m), \qquad x_j \ge 0 \quad (j = 1, 2, \ldots, n).$$

This appendix reviews several properties of vectors and matrices that are especially relevant to this problem. We should note, however, that the material contained here is more technical than is required for understanding the rest of this book. It is included for completeness rather than for background.

### A.1 Vectors

We begin by defining vectors, relations among vectors, and elementary vector operations.

Definition. A *k-dimensional vector* $y$ is an ordered collection of $k$ real numbers $y_1, y_2, \ldots, y_k$, and is written $y = (y_1, y_2, \ldots, y_k)$. The numbers $y_1, y_2, \ldots, y_k$ are called the *components* of the vector $y$.

Each of the following is an example of a vector:

i) (1, 3, 0, 5) is a four-dimensional vector. Its first component is 1, its second component is 3, and its third and fourth components are 0 and 5, respectively.

ii) The coefficients $c_1, c_2, \ldots, c_n$ of the linear-programming objective function determine the $n$-dimensional vector $c = (c_1, c_2, \ldots, c_n)$.

iii) The activity levels $x_1, x_2, \ldots, x_n$ of a linear program define the $n$-dimensional vector $x = (x_1, x_2, \ldots, x_n)$.

iv) The coefficients $a_{i1}, a_{i2}, \ldots, a_{in}$ of the decision variables in the $i$th equation of a linear program determine an $n$-dimensional vector $A_i = (a_{i1}, a_{i2}, \ldots, a_{in})$.

v) The coefficients $a_{1j}, a_{2j}, \ldots, a_{mj}$ of the decision variable $x_j$ in constraints 1 through $m$ of a linear program define an $m$-dimensional vector, which we denote $A^j = \langle a_{1j}, a_{2j}, \ldots, a_{mj}\rangle$.
Equality and ordering of vectors are defined by comparing the vectors' individual components. Formally, let $y = (y_1, y_2, \ldots, y_k)$ and $z = (z_1, z_2, \ldots, z_k)$ be two $k$-dimensional vectors. We write:

$y = z$ when $y_j = z_j$ $(j = 1, 2, \ldots, k)$,
$y \ge z$ when $y_j \ge z_j$ $(j = 1, 2, \ldots, k)$,
$y > z$ when $y_j > z_j$ $(j = 1, 2, \ldots, k)$,

and say, respectively, that $y$ *equals* $z$, $y$ is *greater than or equal to* $z$, and $y$ is *greater than* $z$. In the last two cases, we also say that $z$ is *less than or equal to* $y$ and *less than* $y$. It should be emphasized that not all vectors are ordered. For example, if $y = (3, 1, 2)$ and $z = (1, 1, 3)$, then the first two components of $y$ are greater than or equal to the first two components of $z$, but the third component of $y$ is less than the corresponding component of $z$.

A final note: 0 is used to denote the *null vector* (0, 0, …, 0), where the dimension of the vector is understood from context. Thus, if $y$ is a $k$-dimensional vector, $y \ge 0$ means that each component of the vector $y$ is nonnegative.

We also define scalar multiplication and addition in terms of the components of the vectors.

Definition. *Scalar multiplication* of a vector $y = (y_1, y_2, \ldots, y_k)$ and a scalar $\alpha$ is defined to be the new vector $z = (z_1, z_2, \ldots, z_k)$, written $z = \alpha y$ or $z = y\alpha$, whose components are given by $z_j = \alpha y_j$.

Definition. *Vector addition* of two $k$-dimensional vectors $x = (x_1, x_2, \ldots, x_k)$ and $y = (y_1, y_2, \ldots, y_k)$ is defined as the new vector $z = (z_1, z_2, \ldots, z_k)$, denoted $z = x + y$, with components given by $z_j = x_j + y_j$.

As an example of scalar multiplication, $2(1, \tfrac{1}{2}, 3) = (2, 1, 6)$; and for vector addition, $(1, 2, 3) + (4, 0, 1) = (5, 2, 4)$. Using both operations, we can make calculations such as $3(1, 0) + 2(0, 1) = (3, 0) + (0, 2) = (3, 2)$.

It is important to note that $x$ and $y$ must have the same dimension for vector addition and vector comparisons. Thus $(1, 2) + (1, 0, 3)$ is not defined, and $(4, 0, 2) \ge (1, 6)$ makes no sense at all.

### A.2 Matrices

We can now extend these ideas to any rectangular array of numbers, which we call a *matrix*.

Definition. A *matrix* is defined to be a rectangular array of numbers

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix},$$

whose dimension is $m$ by $n$. $A$ is called *square* if $m = n$. The numbers $a_{ij}$ are referred to as the *elements* of $A$.

The tableau of a linear programming problem is an example of a matrix. We define equality of two matrices in terms of their elements, just as in the case of vectors.
Definition. Two matrices $A$ and $B$ are said to be *equal*, written $A = B$, if they have the same dimension and their corresponding elements are equal, i.e., $a_{ij} = b_{ij}$ for all $i$ and $j$.

In some instances it is convenient to think of vectors as merely being special cases of matrices. However, we will later prove a number of properties of vectors that do not have straightforward generalizations to matrices.

Definition. An $m$-by-1 matrix is called a *column vector*, and a 1-by-$n$ matrix is called a *row vector*.

The coefficients in row $i$ of the matrix $A$ determine a row vector $A_i = (a_{i1}, a_{i2}, \ldots, a_{in})$, and the coefficients of column $j$ of $A$ determine a column vector $A^j = \langle a_{1j}, a_{2j}, \ldots, a_{mj}\rangle$. For notational convenience, column vectors are frequently written horizontally in angular brackets.

We can define scalar multiplication of a matrix, and addition of two matrices, by the obvious analogs of these definitions for vectors.

Definition. *Scalar multiplication* of a matrix $A$ and a real number $\alpha$ is defined to be a new matrix $B$, written $B = \alpha A$ or $B = A\alpha$, whose elements $b_{ij}$ are given by $b_{ij} = \alpha a_{ij}$. For example,

$$2\begin{bmatrix} 1 & 3 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 2 & 6 \\ 0 & 4 \end{bmatrix}.$$

Definition. *Addition* of two matrices $A$ and $B$, both with dimension $m$ by $n$, is defined as a new matrix $C$, written $C = A + B$, whose elements $c_{ij}$ are given by $c_{ij} = a_{ij} + b_{ij}$. For example,

$$\begin{bmatrix} 1 & 2 \\ 4 & 3 \end{bmatrix} + \begin{bmatrix} 3 & 0 \\ -1 & 5 \end{bmatrix} = \begin{bmatrix} 4 & 2 \\ 3 & 8 \end{bmatrix}.$$

If two matrices $A$ and $B$ do not have the same dimension, then $A + B$ is undefined.

The product of two matrices can also be defined if the two matrices have appropriate dimensions.

Definition. The *product* of an $m$-by-$p$ matrix $A$ and a $p$-by-$n$ matrix $B$ is defined to be a new $m$-by-$n$ matrix $C$, written $C = AB$, whose elements $c_{ij}$ are given by

$$c_{ij} = \sum_{k=1}^{p} a_{ik} b_{kj}.$$

For example,

$$\begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix}\begin{bmatrix} 4 & 1 \\ 2 & 5 \end{bmatrix} = \begin{bmatrix} 8 & 11 \\ 12 & 3 \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} 4 & 1 \\ 2 & 5 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix} = \begin{bmatrix} 7 & 8 \\ 17 & 4 \end{bmatrix}.$$

If the number of columns of $A$ does not equal the number of rows of $B$, then $AB$ is undefined. Further, from these examples, observe that matrix multiplication is *not commutative*; that is, $AB \ne BA$, in general.

If $x = (x_1, x_2, \ldots, x_p)$ is a row vector and $y = \langle y_1, y_2, \ldots, y_p\rangle$ a column vector, then the special case

$$xy = \sum_{k=1}^{p} x_k y_k$$

of matrix multiplication is sometimes referred to as an *inner product*. It can be visualized by placing the elements of $x$ next to those of $y$ and adding. In these terms, the elements $c_{ij}$ of the matrix $C = AB$ are found by taking the inner product of $A_i$ (the $i$th row of $A$) with $B^j$ (the $j$th column of $B$); that is, $c_{ij} = A_i B^j$.

The following properties of matrices can be seen easily by writing out the appropriate expressions in each instance and rearranging the terms:

$A + B = B + A$ (commutative law),
$A + (B + C) = (A + B) + C$ (associative law),
$A(BC) = (AB)C$ (associative law),
$A(B + C) = AB + AC$ (distributive law).

As a result, $A + B + C$ or $ABC$ is well defined, since the evaluations can be performed in any order.

There are a few special matrices that will be useful in our discussion, so we define them here.

Definition. The *identity matrix* of order $m$, written $I_m$ (or simply $I$, when no confusion arises), is a square $m$-by-$m$ matrix with ones along the diagonal and zeros elsewhere. For example,

$$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

It is important to note that, for any $m$-by-$m$ matrix $B$, $BI = IB = B$. In particular, $II = I$.

Definition. The *transpose* of a matrix $A$, denoted $A^t$, is formed by interchanging the rows and columns of $A$; that is, $a^t_{ij} = a_{ji}$. If

$$A = \begin{bmatrix} 2 & 4 \\ 3 & 0 \\ 1 & 4 \end{bmatrix}, \quad\text{then the transpose of } A \text{ is given by}\quad A^t = \begin{bmatrix} 2 & 3 & 1 \\ 4 & 0 & 4 \end{bmatrix}.$$

We can show that $(AB)^t = B^t A^t$, since the $ij$th element of both sides of the equality is $\sum_k a_{jk} b_{ki}$.

Definition. An *elementary matrix* is a square matrix with one arbitrary column, but otherwise ones along the diagonal and zeros elsewhere (i.e., an identity matrix with the exception of one column).
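The product and transpose formulas above translate directly into code. The sketch below uses plain Python and helper names of our own choosing; the printed results illustrate the non-commutativity of multiplication and the identity $(AB)^t = B^tA^t$:

```python
def mat_mul(A, B):
    # c_ij = sum_k a_ik * b_kj; defined only when cols(A) == rows(B)
    p = len(B)
    assert all(len(row) == p for row in A), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(A):
    # interchange rows and columns: (A^t)_ij = a_ji
    return [list(col) for col in zip(*A)]

A = [[1, 2], [3, 0]]
B = [[4, 1], [2, 5]]
print(mat_mul(A, B))   # [[8, 11], [12, 3]]
print(mat_mul(B, A))   # [[7, 8], [17, 4]]  (AB != BA in general)
print(transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A)))  # True
```

The inner product is the 1-by-1 special case: `mat_mul([[1, 2, 3]], [[4], [5], [6]])` returns `[[32]]`.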

For example,

$$E = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 3 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 4 & 1 \end{bmatrix}$$

is an elementary matrix.

### A.3 Linear Programming in Matrix Form

The linear-programming problem

Maximize $c_1 x_1 + c_2 x_2 + \cdots + c_n x_n$,

subject to:

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &\le b_1,\\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &\le b_2,\\ &\;\;\vdots\\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &\le b_m,\\ x_j \ge 0 \quad (j = 1, 2, &\ldots, n), \end{aligned}$$

can now be written in matrix form in a straightforward manner. If we let

$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \quad\text{and}\quad b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}$$

be column vectors, the linear system of inequalities is written in matrix form as $Ax \le b$. Letting $c = (c_1, c_2, \ldots, c_n)$ be a row vector, the objective function is written as $cx$. Hence, the linear program assumes the following compact form:

Maximize $cx$, subject to: $Ax \le b$, $x \ge 0$.

The same problem can also be written in terms of the column vectors $A^j$ of the matrix $A$ as:

Maximize $c_1x_1 + c_2x_2 + \cdots + c_nx_n$,

subject to:

$$A^1 x_1 + A^2 x_2 + \cdots + A^n x_n \le b, \qquad x_j \ge 0 \quad (j = 1, 2, \ldots, n).$$

At various times it is convenient to use either of these forms.

The appropriate dual linear program is given by:

Minimize $b_1 y_1 + b_2 y_2 + \cdots + b_m y_m$,

subject to:

$$\begin{aligned} a_{11}y_1 + a_{21}y_2 + \cdots + a_{m1}y_m &\ge c_1,\\ a_{12}y_1 + a_{22}y_2 + \cdots + a_{m2}y_m &\ge c_2,\\ &\;\;\vdots\\ a_{1n}y_1 + a_{2n}y_2 + \cdots + a_{mn}y_m &\ge c_n,\\ y_i \ge 0 \quad (i = 1, 2, &\ldots, m). \end{aligned}$$
Letting $y^t = \langle y_1, y_2, \ldots, y_m\rangle$ be a column vector, since the dual variables are associated with the constraints of the primal problem, we can write the dual linear program in compact form as follows:

Minimize $b^t y^t$, subject to: $A^t y^t \ge c^t$, $y^t \ge 0$.

We can also write the dual in terms of the untransposed vectors as follows:

Minimize $yb$, subject to: $yA \ge c$, $y \ge 0$.

In this form it is easy to write the problem in terms of the row vectors $A_i$ of the matrix $A$, as:

Minimize $y_1 b_1 + y_2 b_2 + \cdots + y_m b_m$,

subject to:

$$y_1 A_1 + y_2 A_2 + \cdots + y_m A_m \ge c, \qquad y_i \ge 0 \quad (i = 1, 2, \ldots, m).$$

Finally, we can write the primal and dual problems in equality form. In the primal, we merely define an $m$-dimensional column vector $s$ measuring the amount of slack in each constraint, and write:

Maximize $cx$, subject to: $Ax + Is = b$, $x \ge 0$, $s \ge 0$.

In the dual, we define an $n$-dimensional row vector $u$ measuring the amount of surplus in each dual constraint and write:

Minimize $yb$, subject to: $yA - uI = c$, $y \ge 0$, $u \ge 0$.
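The compact forms above can be checked mechanically: a candidate $x$ is primal feasible when $Ax \le b$ and $x \ge 0$, and a row vector $y$ is dual feasible when $yA \ge c$ and $y \ge 0$. The sketch below uses a tiny made-up instance (the data and function names are ours, for illustration only); the printed values exhibit weak duality, $cx \le yb$, for the feasible pair:

```python
def dot(u, v):
    # inner product of two same-length vectors
    return sum(ui * vi for ui, vi in zip(u, v))

def feasible_primal(A, b, x):
    # x is primal feasible when Ax <= b and x >= 0
    return (all(dot(row, x) <= bi for row, bi in zip(A, b))
            and all(xj >= 0 for xj in x))

def feasible_dual(A, c, y):
    # the row vector y is dual feasible when yA >= c and y >= 0
    m = len(A)
    yA = [sum(y[i] * A[i][j] for i in range(m)) for j in range(len(c))]
    return all(v >= cj for v, cj in zip(yA, c)) and all(yi >= 0 for yi in y)

# tiny made-up instance: maximize 3x1 + 2x2
# subject to x1 + x2 <= 4, 2x1 + x2 <= 6, x >= 0
A = [[1, 1], [2, 1]]
b = [4, 6]
c = [3, 2]
x = [2, 2]   # primal feasible
y = [1, 1]   # dual feasible: yA = (3, 2) >= c
print(feasible_primal(A, b, x), feasible_dual(A, c, y))  # True True
print(dot(c, x), dot(y, b))  # 10 10  (weak duality: cx <= yb)
```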

### A.4 The Inverse of a Matrix

Definition. Given a square $m$-by-$m$ matrix $B$, if there is an $m$-by-$m$ matrix $D$ such that $DB = BD = I$, then $D$ is called the *inverse* of $B$ and is denoted $B^{-1}$.

Note that $B^{-1}$ does not mean $1/B$ or $I/B$, since division is not defined for matrices. The symbol $B^{-1}$ is just a convenient way to emphasize the relationship between the inverse matrix $D$ and the original matrix $B$.

There are a number of simple properties of inverses that are sometimes helpful to know.
i) The inverse of a matrix is unique if it exists.

Proof. Suppose that $D_1$ and $D_2$ are both inverses of $B$. Then $D_1 = D_1 I = D_1 (B D_2) = (D_1 B) D_2 = I D_2 = D_2$.

ii) $I^{-1} = I$, since $II = I$.

iii) If the inverses of $A$ and $B$ exist, then the inverse of $AB$ exists and is given by $(AB)^{-1} = B^{-1}A^{-1}$.

Proof. $(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I$.

iv) If the inverse of $B$ exists, then the inverse of $B^{-1}$ exists and is given by $(B^{-1})^{-1} = B$.

Proof. $B^{-1}B = BB^{-1} = I$.

v) If the inverse of $B$ exists, then the inverse of $B^t$ exists and is given by $(B^t)^{-1} = (B^{-1})^t$.

Proof. $(B^{-1})^t B^t = (BB^{-1})^t = I^t = I$.

The natural question that arises is: under what circumstances does the inverse of a matrix exist? Consider the square system of equations given by

$$Bx = Iy.$$

If $B$ has an inverse, then multiplying on the left by $B^{-1}$ yields

$$Ix = B^{-1}y,$$

which "solves" the original square system of equations for any choice of $y$. The second system of equations has a unique solution in terms of $x$ for any choice of $y$, since one variable is isolated in each equation. The first system of equations can be derived from the second by multiplying on the left by $B$; hence, the two systems are identical in the sense that any $x$ and $y$ satisfying one system will also satisfy the other.

We can now show that a square matrix $B$ has an inverse if the square system of equations $Bx = y$ has a unique solution for an arbitrary choice of $y$. The solution to this system of equations can be obtained by successively isolating one variable in each equation by a procedure known as *Gauss–Jordan elimination*, which is just the method for solving square systems of equations learned in high-school algebra. Assuming $b_{11} \ne 0$, we can use the first equation to eliminate $x_1$ from the other equations, giving:

$$\begin{aligned} x_1 + \frac{b_{12}}{b_{11}}x_2 + \cdots + \frac{b_{1m}}{b_{11}}x_m &= \frac{1}{b_{11}}y_1,\\ \left(b_{22} - \frac{b_{21}b_{12}}{b_{11}}\right)x_2 + \cdots + \left(b_{2m} - \frac{b_{21}b_{1m}}{b_{11}}\right)x_m &= -\frac{b_{21}}{b_{11}}y_1 + y_2,\\ &\;\;\vdots\\ \left(b_{m2} - \frac{b_{m1}b_{12}}{b_{11}}\right)x_2 + \cdots + \left(b_{mm} - \frac{b_{m1}b_{1m}}{b_{11}}\right)x_m &= -\frac{b_{m1}}{b_{11}}y_1 + y_m. \end{aligned}$$
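Gauss–Jordan elimination can be sketched in code by applying these row operations to the augmented tableau $[B \mid I]$, one pivot column at a time. The helper name is ours; partial pivoting (choosing the largest available pivot) is an implementation detail added for numerical safety, whereas the algebraic argument in the text only requires any nonzero pivot:

```python
def gauss_jordan_inverse(B):
    # Invert a square matrix by Gauss-Jordan elimination on [B | I].
    # Raises ValueError when B is singular (no nonzero pivot can be found).
    m = len(B)
    T = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(m)]
         for i, row in enumerate(B)]
    for col in range(m):
        # partial pivoting: pick the row with the largest entry in this column
        piv = max(range(col, m), key=lambda r: abs(T[r][col]))
        if abs(T[piv][col]) < 1e-12:
            raise ValueError("matrix is singular; inverse does not exist")
        T[col], T[piv] = T[piv], T[col]
        p = T[col][col]
        T[col] = [t / p for t in T[col]]          # isolate the pivot variable
        for r in range(m):
            if r != col:
                f = T[r][col]
                if f != 0.0:                      # eliminate it from other rows
                    T[r] = [t - f * s for t, s in zip(T[r], T[col])]
    return [row[m:] for row in T]

print(gauss_jordan_inverse([[2.0, 1.0], [1.0, 1.0]]))
# [[1.0, -1.0], [-1.0, 2.0]]
```

When no nonzero pivot exists in some column, no variable can be isolated there, and the routine reports that the inverse does not exist, matching the discussion in the text.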
If $b_{11} = 0$, we merely choose some other variable to isolate in the first equation. In matrix form, the new matrices of the $x$ and $y$ coefficients are given respectively by $E_1 B$ and $E_1 I = E_1$, where $E_1$ is an elementary matrix of the form:

$$E_1 = \begin{bmatrix} \dfrac{1}{b_{11}} & 0 & \cdots & 0 \\ -\dfrac{b_{21}}{b_{11}} & 1 & & \\ \vdots & & \ddots & \\ -\dfrac{b_{m1}}{b_{11}} & 0 & & 1 \end{bmatrix}.$$

Further, since $b_{11}$ is chosen to be nonzero, $E_1$ has an inverse given by:

$$E_1^{-1} = \begin{bmatrix} b_{11} & 0 & \cdots & 0 \\ b_{21} & 1 & & \\ \vdots & & \ddots & \\ b_{m1} & 0 & & 1 \end{bmatrix}.$$

Thus, by property (iii) above, if $B$ has an inverse, then $E_1 B$ has an inverse and the procedure may be repeated. Some coefficient in the second row of the updated system must be *nonzero*, or no variable can be isolated in the second row, implying that the inverse does not exist. The procedure may be repeated by eliminating this variable from the other equations. Thus, a new elementary matrix $E_2$ is defined, and the new system $E_2 E_1 B x = E_2 E_1 y$ has one variable isolated in equation 1 and another in equation 2. Repeating the procedure finally gives

$$E_m \cdots E_2 E_1 B x = E_m \cdots E_2 E_1 y,$$

with one variable isolated in each equation. If variable $x_i$ is isolated in equation $i$, the final system reads $Ix = E_m \cdots E_2 E_1 y$, and

$$B^{-1} = E_m \cdots E_2 E_1.$$

Equivalently, $B^{-1}$ is expressed in *product form* as the matrix product of elementary matrices. If, at any stage in the procedure, it is not possible to isolate a variable in the row under consideration, then the inverse of the original matrix does not exist.

If $x_i$ has not been isolated in the $i$th equation, the equations may have to be permuted to determine $B^{-1}$. For example, if after elimination $x_2$ is isolated in equation 1 and $x_1$ in equation 2, rearranging the first and second rows of the final system gives the desired transformation of $B$ into the identity matrix. Alternately, if the first and second columns of the final system are interchanged, an identity matrix is produced; interchanging the first and second columns of $B$, and performing the same operations as above, has this same effect. In many applications the column order, i.e., the indexing of the variables $x_j$, is arbitrary, and this last procedure is utilized. That is, one variable is isolated in each row, and the variable isolated in row $i$ is considered the $i$th *basic* variable. Then the product form gives the inverse of the columns of $B$, reindexed to agree with the ordering of the basic variables.

In computing the inverse of a matrix, it is often helpful to take advantage of any special structure that the matrix may have. To take advantage of this structure, we may *partition* a matrix into a number of smaller matrices, by subdividing its rows and columns. For example, the matrix $A$ below is partitioned into four submatrices $A_{11}$, $A_{12}$, $A_{21}$, and $A_{22}$:

$$A = \left[\begin{array}{cc|c} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ \hline a_{31} & a_{32} & a_{33} \\ a_{41} & a_{42} & a_{43} \end{array}\right] = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}.$$

The important point to note is that partitioned matrices obey the usual rules of matrix algebra.
For example, multiplication of two partitioned matrices

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \\ A_{31} & A_{32} \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix}$$

results in

$$AB = \begin{bmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \\ A_{31}B_{11} + A_{32}B_{21} & A_{31}B_{12} + A_{32}B_{22} \end{bmatrix},$$

assuming the indicated products are defined; i.e., the matrices $A_{ij}$ and $B_{jk}$ have the appropriate dimensions.

To illustrate that partitioned matrices may be helpful in computing inverses, consider the following example. Let

$$M = \begin{bmatrix} I & Q \\ 0 & R \end{bmatrix},$$

where 0 denotes a matrix with all zero entries. Then

$$M^{-1} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}$$

satisfies $MM^{-1} = I$, or

$$\begin{bmatrix} I & Q \\ 0 & R \end{bmatrix}\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix},$$

which implies the following matrix equations:

$$A + QC = I, \qquad B + QD = 0, \qquad RC = 0, \qquad RD = I.$$

Solving these simultaneous equations gives

$$A = I, \qquad C = 0, \qquad D = R^{-1}, \qquad B = -QR^{-1},$$

or, equivalently,

$$M^{-1} = \begin{bmatrix} I & -QR^{-1} \\ 0 & R^{-1} \end{bmatrix}.$$

Note that we need only compute $R^{-1}$ in order to determine $M^{-1}$ easily. This type of use of partitioned matrices is the essence of many schemes for handling large-scale linear programs with special structures.

### A.5 Bases and Representations

In Chapters 2, 3, and 4, the concept of a basis plays an important role in developing the computational procedures and fundamental properties of linear programming. In this section, we present the algebraic foundations of this concept.

Definition. *m-dimensional real space* $E^m$ is defined as the collection of all $m$-dimensional vectors $y = (y_1, y_2, \ldots, y_m)$.

Definition. A set of $m$-dimensional vectors $a^1, a^2, \ldots, a^k$ is *linearly dependent* if there exist real numbers $\alpha_1, \alpha_2, \ldots, \alpha_k$, *not all zero*, such that

$$\alpha_1 a^1 + \alpha_2 a^2 + \cdots + \alpha_k a^k = 0. \qquad (1)$$

If the only set of $\alpha_j$'s for which (1) holds is $\alpha_1 = \alpha_2 = \cdots = \alpha_k = 0$, then the $k$ vectors $a^1, a^2, \ldots, a^k$ are said to be *linearly independent*.
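Linear dependence per definition (1) can be tested mechanically: row-reduce the vectors and count the pivots that remain. The sketch below (helper names are ours) computes the rank, i.e., the number of linearly independent vectors in the set, using the same elimination steps as Section A.4:

```python
def rank(vectors, eps=1e-9):
    # Row-reduce copies of the vectors; the number of pivots found equals
    # the number of linearly independent vectors among them.
    rows = [list(map(float, v)) for v in vectors]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if abs(rows[i][col]) > eps), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        p = rows[r][col]
        rows[r] = [x / p for x in rows[r]]
        for i in range(len(rows)):
            if i != r:
                f = rows[i][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def linearly_independent(vectors):
    return rank(vectors) == len(vectors)

print(linearly_independent([[1, 2], [2, 3]]))          # True
print(linearly_independent([[1, 2], [2, 3], [4, 7]]))  # False: 2(1,2) + (2,3) = (4,7)
```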
For example, the vectors (1, 2), (2, 3), and (4, 7) are linearly dependent, since

$$2(1, 2) + 1(2, 3) - 1(4, 7) = (0, 0).$$

Further, the unit $m$-dimensional vectors $u^j = (0, \ldots, 0, 1, 0, \ldots, 0)$, for $j = 1, 2, \ldots, m$, with a plus one in the $j$th component and zeros elsewhere, are linearly independent, since

$$\alpha_1 u^1 + \alpha_2 u^2 + \cdots + \alpha_m u^m = (\alpha_1, \alpha_2, \ldots, \alpha_m) = 0$$

implies that $\alpha_1 = \alpha_2 = \cdots = \alpha_m = 0$.

If any of the vectors $a^1, a^2, \ldots, a^k$, say $a^1$, is the 0 vector (i.e., has all zero components), then taking $\alpha_1 = 1$ and all other $\alpha_j = 0$ shows that the vectors are linearly dependent. Hence, the null vector is linearly dependent on any set of vectors.

Definition. An $m$-dimensional vector $b$ is said to be *dependent* on the set of $m$-dimensional vectors $a^1, a^2, \ldots, a^k$ if $b$ can be written as a linear combination of these vectors; that is,

$$b = \alpha_1 a^1 + \alpha_2 a^2 + \cdots + \alpha_k a^k$$

for some real numbers $\alpha_1, \alpha_2, \ldots, \alpha_k$. The $k$-dimensional vector $(\alpha_1, \alpha_2, \ldots, \alpha_k)$ is said to be the *representation* of $b$ in terms of $a^1, a^2, \ldots, a^k$.

Note that a vector whose first component is 1 cannot be dependent on vectors whose first components are all 0, since any linear combination of the latter has 0 as its first component. On the other hand, the $m$-dimensional vector $(\alpha_1, \alpha_2, \ldots, \alpha_m)$ is dependent upon the $m$-dimensional unit vectors $u^1, u^2, \ldots, u^m$, since

$$(\alpha_1, \alpha_2, \ldots, \alpha_m) = \alpha_1 u^1 + \alpha_2 u^2 + \cdots + \alpha_m u^m.$$

Thus, any $m$-dimensional vector is dependent on the $m$-dimensional unit vectors. This suggests the following important definition.

Definition. A *basis* of $E^m$ is a set of linearly independent $m$-dimensional vectors with the property that every vector of $E^m$ is dependent upon these vectors.

Note that the $m$-dimensional unit vectors $u^1, u^2, \ldots, u^m$ are a basis for $E^m$, since they are linearly independent and any $m$-dimensional vector is dependent on them.

We now sketch the proofs of a number of important properties relating bases of real spaces, representations of vectors in terms of bases, changes of bases, and inverses of basis matrices.

Property 1. *A set of $m$-dimensional vectors $a^1, a^2, \ldots, a^k$ is linearly dependent if and only if one of these vectors is dependent upon the others.*

Proof. First, suppose that

$$a^k = \alpha_1 a^1 + \alpha_2 a^2 + \cdots + \alpha_{k-1}a^{k-1},$$

so that $a^k$ is dependent upon $a^1, a^2, \ldots, a^{k-1}$. Then, setting $\alpha_k = -1$, we have

$$\alpha_1 a^1 + \alpha_2 a^2 + \cdots + \alpha_{k-1}a^{k-1} + \alpha_k a^k = 0,$$

which shows that $a^1, a^2, \ldots, a^k$ are linearly dependent. Next, if the set of vectors is dependent, then

$$\alpha_1 a^1 + \alpha_2 a^2 + \cdots + \alpha_k a^k = 0,$$

with at least one $\alpha_j \ne 0$, say $\alpha_k \ne 0$. Then

$$a^k = \beta_1 a^1 + \beta_2 a^2 + \cdots + \beta_{k-1}a^{k-1},$$

where $\beta_j = -\alpha_j/\alpha_k$, and $a^k$ depends upon $a^1, a^2, \ldots, a^{k-1}$.

Property 2. *The representation of any vector in terms of basis vectors $b^1, b^2, \ldots, b^k$ is unique.*

Proof. Suppose that $b$ is represented as both

$$b = \alpha_1 b^1 + \alpha_2 b^2 + \cdots + \alpha_k b^k \quad\text{and}\quad b = \beta_1 b^1 + \beta_2 b^2 + \cdots + \beta_k b^k.$$

Eliminating $b$ gives

$$0 = (\alpha_1 - \beta_1)b^1 + (\alpha_2 - \beta_2)b^2 + \cdots + (\alpha_k - \beta_k)b^k.$$

Since $b^1, b^2, \ldots, b^k$ constitute a basis, they are linearly independent, and each $(\alpha_j - \beta_j) = 0$. That is, $\alpha_j = \beta_j$, so that the representation must be unique.

This proposition actually shows that if $b$ can be represented in terms of the linearly independent vectors $b^1, b^2, \ldots, b^k$, whether a basis or not, then the representation is unique. If $b^1, b^2, \ldots, b^k$ is a basis, then the representation is always possible because of the definition of a basis.

Several mathematical-programming algorithms, including the simplex method for linear programming, move from one basis to another by introducing a vector into the basis in place of one already there.

Property 3. *Let $b^1, b^2, \ldots, b^m$ be a basis for $E^m$; let $v \ne 0$ be any $m$-dimensional vector; and let $(\alpha_1, \alpha_2, \ldots, \alpha_m)$ be the representation of $v$ in terms of this basis; that is,*

$$v = \alpha_1 b^1 + \alpha_2 b^2 + \cdots + \alpha_m b^m. \qquad (2)$$

*Then, if $v$ replaces any vector $b^r$ in the basis with $\alpha_r \ne 0$, the new set of vectors is a basis for $E^m$.*
Proof. Suppose that $\alpha_1 \ne 0$ (the argument is the same for any $\alpha_r \ne 0$). First, we show that the vectors $v, b^2, \ldots, b^m$ are linearly independent. Let $\gamma_1, \gamma_2, \ldots, \gamma_m$ be any real numbers satisfying:

$$\gamma_1 v + \gamma_2 b^2 + \cdots + \gamma_m b^m = 0. \qquad (3)$$

If $\gamma_1 \ne 0$, then

$$v = -\frac{\gamma_2}{\gamma_1}b^2 - \cdots - \frac{\gamma_m}{\gamma_1}b^m,$$

which with (2) gives two different representations of $v$ in terms of the basis $b^1, b^2, \ldots, b^m$. By Property 2, this is impossible, so $\gamma_1 = 0$. But then $\gamma_2 = \cdots = \gamma_m = 0$, since $b^2, \ldots, b^m$ are linearly independent. Thus, as required, $\gamma_1 = \gamma_2 = \cdots = \gamma_m = 0$ is the only solution to (3).

Second, we show that any $m$-dimensional vector $b$ can be represented in terms of the vectors $v, b^2, \ldots, b^m$. Since $b^1, b^2, \ldots, b^m$ is a basis, there are constants $\beta_1, \beta_2, \ldots, \beta_m$ such that

$$b = \beta_1 b^1 + \beta_2 b^2 + \cdots + \beta_m b^m.$$

Using expression (2) to eliminate $b^1$, we find that

$$b = \frac{\beta_1}{\alpha_1}v + \left(\beta_2 - \frac{\beta_1\alpha_2}{\alpha_1}\right)b^2 + \cdots + \left(\beta_m - \frac{\beta_1\alpha_m}{\alpha_1}\right)b^m,$$

which by definition shows that $v, b^2, \ldots, b^m$ is a basis.

Property 4. *Let $v^1, v^2, \ldots, v^k$ be a collection of linearly independent $m$-dimensional vectors, and let $b^1, b^2, \ldots, b^m$ be a basis for $E^m$. Then $v^1, v^2, \ldots, v^k$ can replace $k$ vectors from $b^1, b^2, \ldots, b^m$ to form a new basis.*

Proof. First recall that the 0 vector is not one of the vectors $v^j$, since the 0 vector is dependent on any set of vectors. For $k = 1$, the result is a consequence of Property 3. The proof is by induction. Suppose, by reindexing if necessary, that $v^1, \ldots, v^{j-1}, b^j, \ldots, b^m$ is a basis. By definition of basis, there are real numbers $\alpha_1, \alpha_2, \ldots, \alpha_m$ such that

$$v^j = \alpha_1 v^1 + \cdots + \alpha_{j-1}v^{j-1} + \alpha_j b^j + \cdots + \alpha_m b^m.$$

If $\alpha_i = 0$ for $i = j, \ldots, m$, then $v^j$ is represented in terms of $v^1, \ldots, v^{j-1}$, which, by Property 1, contradicts the linear independence of $v^1, v^2, \ldots, v^k$. Thus some $\alpha_i \ne 0$ for $i = j, \ldots, m$, say $\alpha_j \ne 0$ (reindexing again if necessary). By Property 3, then, $v^1, \ldots, v^{j-1}, v^j, b^{j+1}, \ldots, b^m$ is also a basis. Consequently, whenever $j - 1 < k$ of the vectors $v^i$ can replace $j - 1$ vectors from $b^1, b^2, \ldots, b^m$ to form a basis, $j$ of them can be used as well, and eventually $v^1, v^2, \ldots, v^k$ can replace $k$ vectors from $b^1, b^2, \ldots, b^m$ to form a basis.

Property 5. *Every basis for $E^m$ contains $m$ vectors.*
Proof. If $b^1, b^2, \ldots, b^k$ and $\bar b^1, \bar b^2, \ldots, \bar b^r$ are two bases, then Property 4 implies that $k \le r$. By reversing the roles of the $b^j$ and the $\bar b^j$, we also have $r \le k$, and thus $r = k$, so every two bases contain the same number of vectors. But the unit $m$-dimensional vectors $u^1, u^2, \ldots, u^m$ constitute a basis with $m$ vectors, and consequently, every basis of $E^m$ must contain $m$ vectors.

Property 6. *Every collection $v^1, v^2, \ldots, v^k$ of linearly independent $m$-dimensional vectors is contained in a basis.*

Proof. Apply Property 4 with $b^1, b^2, \ldots, b^m$ the unit $m$-dimensional vectors.

Property 7. *Every $m$ linearly independent vectors of $E^m$ form a basis. Every collection of $m + 1$ or more vectors in $E^m$ is linearly dependent.*

Proof. Immediate, from Properties 5 and 6.

If a matrix $B$ is constructed with $m$ linearly independent column vectors $B^1, B^2, \ldots, B^m$, the properties just developed for vectors are directly related to the concept of a basis inverse introduced previously. We will show the relationships by defining the concept of a nonsingular matrix in terms of the independence of its vectors. The usual definition of a nonsingular matrix is that the determinant of the matrix is nonzero. However, this definition stems historically from calculating inverses by the method of cofactors, which is of little computational interest for our purposes and will not be pursued.

Definition. An $m$-by-$m$ matrix $B$ is said to be *nonsingular* if both its column vectors $B^1, B^2, \ldots, B^m$ and its row vectors $B_1, B_2, \ldots, B_m$ are linearly independent.

Although we will not establish the property here, defining nonsingularity of $B$ merely in terms of linear independence of either its column vectors or row vectors is equivalent to this definition. That is, linear independence of either its column or row vectors automatically implies linear independence of the other vectors.

Property 8. *An $m$-by-$m$ matrix $B$ has an inverse if and only if it is nonsingular.*

Proof. First, suppose that $B$ has an inverse and that

$$\alpha_1 B^1 + \alpha_2 B^2 + \cdots + \alpha_m B^m = 0.$$

Letting $\alpha = \langle \alpha_1, \alpha_2, \ldots, \alpha_m\rangle$, in matrix form this expression says that $B\alpha = 0$. Thus $\alpha = I\alpha = (B^{-1}B)\alpha = B^{-1}(B\alpha) = B^{-1}0 = 0$. That is, $\alpha_1 = \alpha_2 = \cdots = \alpha_m = 0$, so that the vectors $B^1, B^2, \ldots, B^m$ are linearly independent. Similarly, $\alpha B = 0$ implies that $\alpha = \alpha(BB^{-1}) = (\alpha B)B^{-1} = 0$, so that the rows $B_1, B_2, \ldots, B_m$ are linearly independent.
Next, suppose that $B^1, B^2, \ldots, B^m$ are linearly independent. Then, by Property 7, these vectors are a basis for $E^m$, so that each unit $m$-dimensional vector is dependent upon them. That is, for each $j$,

$$u^j = d_{1j}B^1 + d_{2j}B^2 + \cdots + d_{mj}B^m \qquad (4)$$

for some real numbers $d_{1j}, d_{2j}, \ldots, d_{mj}$. Letting $D^j$ be the column vector $D^j = \langle d_{1j}, d_{2j}, \ldots, d_{mj}\rangle$, Eq. (4) says that $BD^j = u^j$, or $BD = I$, where $D$ is the matrix with columns $D^1, D^2, \ldots, D^m$. The same argument applied to the row vectors $B_1, B_2, \ldots, B_m$ shows that there is a matrix $C$ with $CB = I$. But $C = CI = C(BD) = (CB)D = ID = D$, so that $D$ is the inverse of $B$.

Property 8 shows that the rows and columns of a nonsingular matrix inherit properties of bases for $E^m$, and suggests the following definition.

Definition. Let $A$ be an $m$-by-$n$ matrix and $B$ be any $m$-by-$m$ submatrix of $A$. If $B$ is nonsingular, it is called a *basis* for $A$.

Let $B$ be a basis for $A$, and let $A^j$ be any column of $A$. Then there is a unique solution $\bar A^j = \langle \bar a_{1j}, \bar a_{2j}, \ldots, \bar a_{mj}\rangle$ to the system of equations $B\bar A^j = A^j$, given by multiplying both sides of the equality by $B^{-1}$; that is, $\bar A^j = B^{-1}A^j$. Since

$$A^j = B\bar A^j = \bar a_{1j}B^1 + \bar a_{2j}B^2 + \cdots + \bar a_{mj}B^m,$$

the vector $\bar A^j$ is the representation of the column $A^j$ in terms of the basis. Applying Property 3, we see that $A^j$ can replace the $k$th column $B^k$ of $B$ to form a new basis if $\bar a_{kj} \ne 0$. This result is essential for several mathematical-programming algorithms, including the simplex method for solving linear programs.

### A.6 Extreme Points of Linear Programs

In our discussion of linear programs in the text, we have alluded to the connection between extreme points, or corner points, of feasible regions and basic solutions to linear programs. The material in this section delineates this connection precisely, using concepts of vectors and matrices. In pursuing this objective, this section also indicates why a linear program can always be solved at a basic solution, an insight which adds to our seemingly ad hoc choice of basic feasible solutions in the text as the central focus for the simplex method.

Definition. Let $S$ be a set of points in $E^n$. A point $w$ in $S$ is called an *extreme point* of $S$ if $w$ cannot be written as $w = \lambda w^1 + (1 - \lambda)w^2$ for two distinct points $w^1$ and $w^2$ in $S$ and $0 < \lambda < 1$. That is, $w$ does not lie on the line segment joining any two points of $S$.

For example, if $S$ is the set of feasible points to the system shown in Fig. A.1, then the extreme points are (0, 0), (0, 3), (3, 3), and (6, 0). The next result interprets the geometric notion of an extreme point for linear programs algebraically, in terms of linear independence.

Feasible Extreme Point Theorem. *Let $S$ be the set of feasible solutions to the linear program $Ax = b$, $x \ge 0$. Then the feasible point $y = (y_1, y_2, \ldots, y_n)$ is an extreme point of $S$ if and only if the columns $A^j$ of $A$ with $y_j > 0$ are linearly independent.*
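The theorem turns the geometric notion of an extreme point into a finite algebraic test: collect the columns $A^j$ with $y_j > 0$ and check their linear independence. A sketch follows, with a small made-up standard-form instance ($A$ and the two test points are ours; both points satisfy $Ay = b$ for $b = (4, 6)$):

```python
def _rank(vectors, eps=1e-9):
    # rank via row reduction, as in Section A.5
    rows = [list(map(float, v)) for v in vectors]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if abs(rows[i][col]) > eps), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        p = rows[r][col]
        rows[r] = [x / p for x in rows[r]]
        for i in range(len(rows)):
            if i != r:
                f = rows[i][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def is_extreme_point(A, y, eps=1e-9):
    # y is assumed feasible (Ay = b, y >= 0); it is extreme iff the
    # columns A^j with y_j > 0 are linearly independent.
    cols = [[A[i][j] for i in range(len(A))] for j in range(len(y)) if y[j] > eps]
    return not cols or _rank(cols) == len(cols)

A = [[1, 1, 1, 0], [2, 1, 0, 1]]
print(is_extreme_point(A, [2, 2, 0, 0]))  # True: columns (1,2), (1,1) independent
print(is_extreme_point(A, [1, 1, 2, 3]))  # False: four columns of E^2 are dependent
```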
Figure A.1

Proof. By reindexing if necessary, we may assume that only the first $r$ components of $y$ are positive; that is,

$$y_1 > 0,\; y_2 > 0,\; \ldots,\; y_r > 0, \qquad y_{r+1} = y_{r+2} = \cdots = y_n = 0.$$

We must show that any vector $y$ solving $Ay = b$, $y \ge 0$, is an extreme point if and only if the first $r$ columns $A^1, A^2, \ldots, A^r$ of $A$ are linearly independent.

First, suppose that these columns are not linearly independent, so that

$$\alpha_1 A^1 + \alpha_2 A^2 + \cdots + \alpha_r A^r = 0 \qquad (5)$$

for some real numbers $\alpha_1, \alpha_2, \ldots, \alpha_r$ not all zero. If we let $x$ denote the vector $(\alpha_1, \alpha_2, \ldots, \alpha_r, 0, \ldots, 0)$, then expression (5) can be written as $Ax = 0$. Now let $w^1 = y + \theta x$ and $w^2 = y - \theta x$. Then, as long as $\theta > 0$ is chosen small enough to satisfy $\theta|\alpha_j| \le y_j$ for each component $j = 1, 2, \ldots, r$, both $w^1 \ge 0$ and $w^2 \ge 0$. But then both $w^1$ and $w^2$ are contained in $S$, since

$$Aw^1 = A(y + \theta x) = Ay + \theta(Ax) = Ay = b$$

and, similarly, $Aw^2 = b$. However, since $y = \frac{1}{2}(w^1 + w^2)$, we see that $y$ is not an extreme point of $S$ in this case. Consequently, every extreme point of $S$ satisfies the linear-independence requirement.

Conversely, suppose that $A^1, A^2, \ldots, A^r$ are linearly independent. If $y = \lambda w^1 + (1 - \lambda)w^2$ for some points $w^1$ and $w^2$ of $S$ and some $0 < \lambda < 1$, then $y_j = \lambda w^1_j + (1 - \lambda)w^2_j$. Since $y_j = 0$ for $j \ge r + 1$, and $w^1_j \ge 0$ and $w^2_j \ge 0$, necessarily $w^1_j = w^2_j = 0$ for $j \ge r + 1$. Therefore,

$$b = w^1_1 A^1 + \cdots + w^1_r A^r = w^2_1 A^1 + \cdots + w^2_r A^r = y_1 A^1 + \cdots + y_r A^r.$$

Since, by Property 2 in Section A.5, the representation of the vector $b$ in terms of the linearly independent vectors $A^1, A^2, \ldots, A^r$ is unique, $w^1_j = w^2_j = y_j$ for every $j$. Thus the two points $w^1$ and $w^2$ cannot be distinct, and therefore $y$ is an extreme point of $S$.

If $A$ contains a basis (i.e., the rows of $A$ are linearly independent), then, by Property 6, any collection $A^1, A^2, \ldots, A^r$ of linearly independent columns can be extended to a basis $A^1, A^2, \ldots, A^m$. The extreme-point theorem shows, in this case, that every extreme point can be associated with a basic feasible solution, i.e., with a solution satisfying $y_j = 0$ for the nonbasic variables $y_j$, $j = m + 1, \ldots, n$.

Chapter 2 shows that optimal solutions to linear programs can be found at basic feasible solutions or, equivalently now, at extreme points of the feasible region. At this point, let us use the linear-algebra tools of this appendix to derive this result independently. This will motivate the simplex method for solving linear programs algebraically.

Suppose that $y$ is a feasible solution to the linear program

$$\text{Maximize } cx, \quad\text{subject to: } Ax = b,\; x \ge 0, \qquad (6)$$

and, by reindexing variables if necessary, that $y_1 > 0, y_2 > 0, \ldots, y_r > 0$ and $y_{r+1} = y_{r+2} = \cdots = y_n = 0$. If the column $A^r$ is linearly dependent upon the columns $A^1, A^2, \ldots, A^{r-1}$, then

$$A^r = \alpha_1 A^1 + \alpha_2 A^2 + \cdots + \alpha_{r-1}A^{r-1}, \qquad (7)$$

with at least one of the constants $\alpha_j$ nonzero for $j = 1, 2, \ldots, r - 1$. Multiplying both sides of this expression by $\theta$ gives

$$\theta A^r = (\theta\alpha_1)A^1 + (\theta\alpha_2)A^2 + \cdots + (\theta\alpha_{r-1})A^{r-1}, \qquad (8)$$

which states that we may simulate the effect of setting $x_r = \theta$ in (6) by setting $x_1, x_2, \ldots, x_{r-1}$, respectively, to $(\theta\alpha_1), (\theta\alpha_2), \ldots, (\theta\alpha_{r-1})$. Taking $\theta = 1$ gives

$$\bar c_r = \alpha_1 c_1 + \alpha_2 c_2 + \cdots + \alpha_{r-1}c_{r-1}$$

as the per-unit profit from the simulated activity of using $\alpha_1$ units of $x_1$, $\alpha_2$ units of $x_2$, through $\alpha_{r-1}$ units of $x_{r-1}$, in place of 1 unit of $x_r$.

Letting $x = (-\alpha_1, -\alpha_2, \ldots, -\alpha_{r-1}, 1, 0, \ldots, 0)$, Eq. (8) is rewritten as $A(\theta x) = 0$. Here $x$ is interpreted as setting $x_r$ to 1 and decreasing the simulated activity to compensate. Thus,

$$A(y + \theta x) = Ay + \theta(Ax) = Ay = b,$$

so that $y + \theta x$ is feasible as long as $y + \theta x \ge 0$ (this condition is satisfied if $\theta$ is chosen so that $|\theta x_j| \le y_j$ for every component $j = 1, 2, \ldots, r$). The return from $y + \theta x$ is given by

$$c(y + \theta x) = cy + \theta(cx) = cy + \theta(c_r - \bar c_r).$$

Consequently, if $\bar c_r < c_r$, the simulated activity is less profitable than the $r$th activity itself, and the return improves by increasing $\theta$. If $\bar c_r > c_r$, the return increases by decreasing $\theta$ (i.e., decreasing $x_r$ and increasing the simulated activity). If $\bar c_r = c_r$, the return is unaffected by $\theta$.

These observations imply that, if the objective function is bounded from above over the feasible region, then by increasing the simulated activity and decreasing activity $x_r$, or vice versa, we can find a new feasible solution whose objective value is at least as large as $cy$ but which contains at least one more zero component than $y$.

For, suppose that $\bar c_r \ge c_r$. Then, by decreasing $\theta$ from 0, $c(y + \theta x) \ge cy$, and eventually $y_j + \theta x_j = 0$ for some component $j = 1, 2, \ldots, r$ (possibly $j = r$, since the $r$th component $y_r + \theta$ reaches zero at $\theta = -y_r$). On the other hand, if $\bar c_r < c_r$, then $c(y + \theta x) > cy$ as $\theta$ increases from 0; if some component $\alpha_j$ from (7) is positive, then $y_j - \theta\alpha_j$ eventually reaches 0 as $\theta$ increases. (If every $\alpha_j \le 0$, then we may increase $\theta$ indefinitely, $c(y + \theta x) \to +\infty$, and the objective value is unbounded over the constraints, contrary to our assumption.)

Therefore, in either case we can find a value for $\theta$ such that at least one component of $y + \theta x$ becomes zero for $j = 1, 2, \ldots, r$. Since $y_j = 0$ and $x_j = 0$ for $j = r + 1, \ldots, n$, the components $y_j + \theta x_j$ remain at 0 for $j \ge r + 1$. Thus, the entire vector $y + \theta x$ contains at least one more zero component than $y$, and $c(y + \theta x) \ge cy$. With a little more argument, we can use this result to show that there must be an optimal extreme-point solution to a linear program.
Optimal Extreme-Point Theorem. *If the objective function for a feasible linear program is bounded from above over the feasible region, then there is an optimal solution at an extreme point of the feasible region.*

Proof. If $y$ is any feasible solution and the columns $A^j$ of $A$ with $y_j > 0$ are linearly dependent, then one of these columns depends upon the others (Property 1). From above, there is a feasible solution $x$ to the linear program with both $cx \ge cy$ and $x$ having one less positive component than $y$. Either the columns of $A$ with $x_j > 0$ are linearly independent, or the argument may be repeated to find another feasible solution with one less positive component. Continuing, we eventually find a feasible solution $\bar y$ with $c\bar y \ge cy$, such that the columns of $A$ with $\bar y_j > 0$ are linearly independent. By the Feasible Extreme Point Theorem, $\bar y$ is an extreme point of the feasible region.

Consequently, given any feasible point, there is always an extreme point whose objective value is at least as good. Since the number of extreme points is finite (the number of collections of linearly independent column vectors of $A$ is finite), the extreme point giving the maximum objective value solves the problem.
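Since extreme points correspond to basic feasible solutions, a tiny linear program can, in principle, be solved exactly as the proof suggests: enumerate every $m$-by-$m$ basis $B$ of $A$, solve $Bx_B = b$, discard infeasible basic solutions, and keep the best. This brute-force sketch (the names and instance data are ours) is exponential in $n$ and meant only to illustrate the theorem, not as a practical method; the simplex method visits extreme points far more selectively:

```python
from itertools import combinations

def solve_square(B, rhs):
    # Gauss-Jordan on [B | rhs]; returns None when B is singular.
    m = len(B)
    T = [list(map(float, row)) + [float(r)] for row, r in zip(B, rhs)]
    for col in range(m):
        piv = max(range(col, m), key=lambda i: abs(T[i][col]))
        if abs(T[piv][col]) < 1e-12:
            return None
        T[col], T[piv] = T[piv], T[col]
        p = T[col][col]
        T[col] = [t / p for t in T[col]]
        for i in range(m):
            if i != col:
                f = T[i][col]
                T[i] = [t - f * s for t, s in zip(T[i], T[col])]
    return [row[m] for row in T]

def solve_lp_by_extreme_points(A, b, c):
    # Enumerate all m-by-m bases of A and keep the best feasible basic solution.
    m, n = len(A), len(A[0])
    best = None
    for basis in combinations(range(n), m):
        B = [[A[i][j] for j in basis] for i in range(m)]
        xB = solve_square(B, b)
        if xB is None or any(v < -1e-9 for v in xB):
            continue  # singular basis or infeasible basic solution
        x = [0.0] * n
        for j, v in zip(basis, xB):
            x[j] = v
        val = sum(cj * xj for cj, xj in zip(c, x))
        if best is None or val > best[0]:
            best = (val, x)
    return best

# maximize 3x1 + 2x2 subject to x1 + x2 <= 4, 2x1 + x2 <= 6, x >= 0,
# written in standard form with slack variables x3, x4:
A = [[1, 1, 1, 0], [2, 1, 0, 1]]
print(solve_lp_by_extreme_points(A, [4, 6], [3, 2, 0, 0]))
# (10.0, [2.0, 2.0, 0.0, 0.0])
```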