], where (ℓ, x) = Σ_j ℓ_j x_j. Then (1) M has property (*); (2) M_f = M_f̂, where f̂ is the concave increasing "cap" of f; (3) if M_f =

x ∈ D ⟹ R(x,B) ∩ B⁺ = ∅ or R(x,B) ∩ B⁻ = ∅. (8.4)
Proof. In the representation of x consider the nonzero coefficients of
vectors in B⁺ and B⁻ in conjunction with (7.5) and (8.1); since x ∈ F, those
coefficients must be of opposite sign; since x ∈ G, they must be of the
same sign. Hence, R(x,B) cannot intersect both B⁺ and B⁻.
x ∈ D, y ∈ D, R(x,B) ∩ B⁺ ≠ ∅ ⟹ R(y,B) ∩ B⁻ = ∅. (8.5)
Proof. Assume R(y,B) ∩ B⁻ ≠ ∅. Then, by (8.4), R(x,B) ∩ B⁻ = ∅ and
R(y,B) ∩ B⁺ = ∅, hence R(x,B) ∩ R(y,B) ⊆ B⁰. If now b ∈ R(x,B) ∩ R(y,B),
then b ∈ B⁰, hence C = B − b + x ∈

and a are completely linear, the corresponding program of course satisfies the linear duality theorem, which, although different from the theorem established above, is quite analogous. The essential difference is that, in the linear case, the program (D) does not contain terms in x. As a consequence, only the first part of the theorem is always valid (the dual variable u is the optimal solution of (D), and the variable x simply becomes useless); as for the formulation of the second part, one sees clearly that it is not sufficient to give the solution of (P) separately from that of (D), the latter not containing x.
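For reference, the linear case alluded to here can be written out as the standard symmetric dual pair, in generic notation (the symbols A, b, c below are illustrative and not necessarily those of the programs (P) and (D) above):

```latex
% standard symmetric linear dual pair (generic notation)
\begin{aligned}
\text{(P)}\quad &\min_{x}\ c^{\mathsf T}x &&\text{subject to } Ax \ge b,\ x \ge 0,\\
\text{(D)}\quad &\max_{u}\ b^{\mathsf T}u &&\text{subject to } A^{\mathsf T}u \le c,\ u \ge 0.
\end{aligned}
```

Here (D) indeed contains no terms in x, and the linear duality theorem gives c^T x* = b^T u* at any pair of optimal solutions.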

J defined by the expression for x_sj without the genuine nonbasic variables, and substitute for θ_1 throughout. We then have the tableau:

[simplex tableau: pseudo-basic rows with entries 2/9, 4/9, −1/9, −1/3; 10/3, −1/3, −2/3; 5/9, 1/9, 2/9, −1/3; 11/9, 4/9, −1/9, −1/3; …; objective row C: −37/3, 1/3, 5/3, 1, 2, 1, 1, 1, 1, 5]

We now consider whether any nonbasic variable from the first subproblem can usefully be increased. To illustrate the process of computing the c̄_ij, we express C completely in terms of the pseudo-basic and nonbasic variables, though in practice only the coefficients of x_41, x_51 and x_61 would be calculated at this point. We have

φ(y) = min_x {c^T x | A^T x ≥ b(y)}   (2.3)

is a convex function of y. Furthermore, the region of definition of φ(y) is convex.

Proof. We consider two feasible values y_1 and y_2 of the vector y. Since they are feasible, there exist vectors x_1 and x_2 such that

φ(y_1) = min_x {c^T x | A^T x ≥ b(y_1)} = c^T x_1,
φ(y_2) = min_x {c^T x | A^T x ≥ b(y_2)} = c^T x_2.   (2.4)

For any scalar λ, 0 ≤ λ ≤ 1, we define y = λy_1 + (1 − λ)y_2 and b̄ = λb(y_1) + (1 − λ)b(y_2). Since each component of b(y) is convex, b(y) ≤ b̄, so the region A^T x ≥ b(y) is not smaller than the region A^T x ≥ b̄. Hence

min_x {c^T x | A^T x ≥ b(y)} ≤ min_x {c^T x | A^T x ≥ b̄}.

Now

min_x {c^T x | A^T x ≥ b̄} ≤ λφ(y_1) + (1 − λ)φ(y_2),

since x̄ = λx_1 + (1 − λ)x_2 satisfies A^T x̄ ≥ b̄. This can be seen directly from (2.4). To show that the region of definition of φ(y) is convex, let A^T x_1 ≥ b(y_1) and A^T x_2 ≥ b(y_2). Then for x̄ = λx_1 + (1 − λ)x_2, we have A^T x̄ ≥ b̄ ≥ b(y), Q.E.D.

This theorem applies directly to the functions φ_i(y) defined by (1.1) for the ℓ subproblems, so that each of the functions φ_i(y), i = 1, …, ℓ, is convex. The sum of convex functions is also convex, so that φ(y) as given by (1.2) is convex. An alternate proof of Theorem 1 can also be given based on a known result [1, 6] that
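The convexity argument above can be checked numerically on a small instance. The sketch below is illustrative only: the two-variable instance, the brute-force vertex-enumeration solver, and names such as `lp_min` are invented for this example and are not part of the text's development. It evaluates φ(y) = min{c·x : Ax ≥ b(y), x ≥ 0} for a componentwise-convex b(y) and verifies φ(λy_1 + (1 − λ)y_2) ≤ λφ(y_1) + (1 − λ)φ(y_2):

```python
import itertools
import numpy as np

def lp_min(c, A, b):
    """min c.x subject to A x >= b, x >= 0 (two variables),
    by enumerating vertices of the feasible region."""
    A_all = np.vstack([A, np.eye(2)])          # append the x >= 0 rows
    b_all = np.concatenate([b, np.zeros(2)])
    best = np.inf
    for i, j in itertools.combinations(range(len(b_all)), 2):
        M = A_all[[i, j]]
        if abs(np.linalg.det(M)) < 1e-9:       # parallel constraints: no vertex
            continue
        x = np.linalg.solve(M, b_all[[i, j]])  # candidate vertex
        if np.all(A_all @ x >= b_all - 1e-9):  # keep only feasible vertices
            best = min(best, c @ x)
    return best

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = lambda y: np.array([y**2 + 1.0, 2.0 * y])  # each component convex in y
phi = lambda y: lp_min(c, A, b(y))

y1, y2, lam = 0.0, 3.0, 0.4
y_mid = lam * y1 + (1.0 - lam) * y2
print(phi(y_mid) <= lam * phi(y1) + (1.0 - lam) * phi(y2))  # True
```

Since c > 0 and x ≥ 0, the minimum is attained at a vertex, so enumeration is exhaustive for this toy instance.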

φ(y) to solve the
complete problem is based on the following remarks. For a specified
value, say y = y_0, the solution of the kth subproblem (1.1) gives not only
the corresponding vector x_k ≥ 0 and function φ_k(y_0) but also the partial
derivatives ∂φ_k/∂y_j, j = 1, …, s, where they exist. These
derivatives are readily obtained from the subproblem optimal shadow
prices and the partial derivatives ∂b/∂y_j. They are valid in the y-space
region containing y_0 for which the optimal basis at y_0 remains optimal.
Furthermore, as long as the same basis is maintained the vector x_k is
given as an explicit function of y, so that all the constraints can be rep-
resented explicitly in the y-space. With this information a minimiza-
tion over y in a region containing y_0 can be carried out. For each such
y-space minimization, only a single linear programming solution of each
of the ℓ subproblems is now required. As will be shown in the next section,
this sequence of minimizations gives the desired global minimum, where
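The remark about obtaining the derivatives from shadow prices can be made explicit. While the optimal basis at y_0 remains optimal, the subproblem value is linear in the right-hand side, so, writing u* for the optimal shadow-price vector of the kth subproblem (a sketch in the text's spirit, not one of its own numbered formulas):

```latex
\varphi_k(y) = u^{*\mathsf T} b(y)
\quad\Longrightarrow\quad
\frac{\partial \varphi_k}{\partial y_j}
   = \sum_i u_i^{*}\, \frac{\partial b_i}{\partial y_j},
\qquad j = 1, \dots, s,
```

valid in the y-space region where the basis at y_0 stays optimal, as stated above.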

and the corresponding Problem II shadow price vector v. Note that the solution of the convex Problem II is not necessarily a finite procedure. The feasibility of y_{j+1} follows from the fact that it satisfies Q^T b(y_{j+1}) − e(y_{j+1}) ≥ 0, as shown by Theorem 4. If y_{j+1} = y_j and u as given by (2.22) is nonnegative, we have the desired optimal solution to the complete problem. If y_{j+1} = y_j and u has at least one negative component, we formulate and solve Problem II for each alternate Problem I basis corresponding to y_j. If

e(y_j), or, using (2.10), φ(y) in the neighborhood of y_j, is given by at least one of the alternate Problem I optimal bases. Trying each of these in turn, we will find a basis for which either the complete problem optimum conditions
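The outer iteration just described (solve the subproblem at y_j, read off the shadow price u, move to y_{j+1}, and stop when y_{j+1} = y_j with u nonnegative) can be sketched on a toy one-dimensional instance. Everything concrete below — the choice of b, the gradient step standing in for Problem II, and all function names — is invented for illustration and is not the text's actual Problem II:

```python
# Toy instance: phi(y) = min{ x : x >= b(y), x >= 0 } with b(y) >= 1,
# so the subproblem optimum is x* = b(y) with shadow price u = 1.
def b(y):
    return (y - 2.0) ** 2 + 1.0

def db_dy(y):
    return 2.0 * (y - 2.0)

def solve_subproblem(y):
    """One-variable LP: returns the optimal x and its dual (shadow price)."""
    return b(y), 1.0

def solve_problem_II(y, u, step=0.5):
    """Stand-in for Problem II: a step against the gradient u * db/dy."""
    return y - step * u * db_dy(y)

y = 0.0
for _ in range(100):
    x_star, u = solve_subproblem(y)
    y_next = solve_problem_II(y, u)
    if abs(y_next - y) < 1e-10 and u >= 0:  # optimality test from the text
        break
    y = y_next
print(y, b(y))  # converges to y = 2, where phi(y) = 1
```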

0, and y = y_0 + θ_m(y_1 − y_0), we have

− D(y_0)Q]   (3.5)

since, by (2.11) and (3.2), the Jacobian of h(y) at y_0 is [E(y_0) − D(y_0)Q]. Since y_1 is feasible for Problem II, the right-hand side of (3.5) is nonnegative. Therefore h(y_1) ≥ 0. It follows that x_1 = (A_B^T)^{-1} b(y_1) and y_1 are feasible for the complete problem, with c^T x_1 = r^T b(y_1). The minimum solution φ(y_1) for y = y_1 must therefore satisfy φ(y_1) ≤ c^T x_1 = r^T b(y_1), which gives (3.3). From (3.4) we have h(y) ≥ 0 for y = y_0 + θ_m(y_1 − y_0). The corresponding value of c^T x is r^T b(y), so that

0. Then,