Lecture 15 - Duality

\[\text{ 10/20/2006 }\] \[\text{6.854 - Advanced Algorithms}\] \[\text{Professor David Karger}\] \[\text{Jay Kumar Sundararajan and Aaron Bernstein}\]

This lecture covers weak and strong duality, and also explains the rules for finding the dual of a linear program, with an example. Before we move on to duality, we shall first see some general facts about the location of the optima of a linear program.

Structure of LP solutions

Some intuition in two dimensions.

Consider a linear program -

\(\mbox{Maximize } y^Tb\\ \mbox{subject to } y^TA\leq c\)

The feasible region of this LP is, in general, a convex polyhedron. Visualize it as a polygon in 2 dimensions, for simplicity. Now, maximizing \(y^Tb\) is the same as maximizing the projection of the vector \(y\) in the direction represented by vector \(b\). For whichever direction \(b\) we choose, the point \(y\) that maximizes \(y^Tb\) cannot lie strictly in the interior of the feasible region. The reason is that, from an interior point, we can move further in any direction and still be feasible. In particular, by moving along \(b\), we can get to a point with a larger projection along \(b\). This intuition suggests that the optimal solution of an LP will never lie in the interior of the feasible region, but only on the boundary. In fact, we can say more. We can show that for any LP, the optimal solutions are always at the "corners" of the feasible region polyhedron. This notion is formalized in the next subsection.

Some definitions

Vertex of a Polyhedron A point in the polyhedron which is uniquely optimal for some linear objective is called a vertex of the polyhedron.

Extreme Point of a Polyhedron A point in the polyhedron which is not a convex combination of two other points in the polyhedron is called an extreme point of the polyhedron.

Tightness A constraint of the form \(a^Tx \leq b\), \(a^Tx=b\) or \(a^Tx \geq b\) in a linear program is said to be tight for a certain point \(y\), if \(a^Ty=b\).

Basic Solution For an n-dimensional linear program, a point is called a basic solution, if \(n\) linearly independent constraints are tight for that point.

Basic Feasible Solution A point is a basic feasible solution, iff it is a basic solution that is also feasible.

Note: If \(x\) is a basic feasible solution, then it is in fact the unique point that is tight for all its tight constraints. This is because there can be only one solution for a set of \(n\) linearly independent equalities in \(n\)-dimensional space.

For a polyhedron \(P\) and a point \(x\in P\), the following are equivalent:

\(x\) is a basic feasible solution

\(x\) is a vertex of \(P\)

\(x\) is an extreme point of \(P\)

Assume the LP is in the canonical form.

Vertex\(\Rightarrow\) Extreme Point

Let \(v\) be a vertex. Then for some objective function \(c\), \(c^Tx\) is uniquely minimized at \(v\). Assume \(v\) is not an extreme point. Then, \(v\) can be written as \(v=\lambda y+(1-\lambda)z\) for some \(y\), \(z\) neither of which is \(v\), and some \(\lambda\) satisfying \(0< \lambda< 1\).

Now, \(c^Tv=c^T[\lambda y+(1-\lambda)z]=\lambda c^Ty+(1-\lambda)c^Tz\)

But note that since \(c^{T}x\) is uniquely minimized at v, we must have \(c^{T}y > c^{T}v\) and \(c^{T}z > c^{T}v\). Thus, \(c^{T}v = \lambda c^Ty+(1-\lambda)c^Tz > c^{T}v (\lambda + 1 - \lambda) = c^{T}v\). This is clearly a contradiction.

Extreme Point \(\Rightarrow\) Basic Feasible Solution

Let \(x\) be an extreme point. By definition, it lies in the polyhedron and is therefore feasible. Assume \(x\) is not a basic solution. Let \(T\) be the set of rows of the constraint matrix \(A\) for which the constraints are tight at \(x\). Let \(a_i\) (a \(1 \times n\) vector) denote the \(i^{th}\) row of \(A\). For \(a_i \notin T\), \(a_i.x > b_i\). Since \(x\) is not a basic solution, \(T\) does not span \({\mathcal R}^n\). So, there is a vector \(d \neq 0\) such that \(a_i.d = 0\ \forall a_i \in T\).

Consider \(y=x+\epsilon d\) and \(z=x-\epsilon d\). If \(a_i\in T\), then \(a_i.y=a_i.z=b_i\). If \(a_i\notin T\), then, by choosing a sufficiently small \(\epsilon\): \(0 < \epsilon \leq min_{i\notin T}\frac{a_i.x - b_i}{|a_i.d|}\), we can ensure that \(a_i.y \geq b_i\) and \(a_i.z \geq b_i\). Thus \(y\) and \(z\) are feasible. Since \(x=y/2+z/2\), \(x\) cannot be an extreme point – a contradiction.

Basic Feasible Solution \(\Rightarrow\) Vertex

Let \(x\) be a basic feasible solution. Let \(T = \{i\ |\ a_i.x = b_i\}\). Consider the objective as minimizing \(c.y\) for \(c = \sum_{i \in T} a_i\). Then, \(c.x = \sum_{i \in T}(a_i.x) = \sum_{i \in T}b_i\). For any \(x^{\prime} \in {\cal P}\), \(c.x^{\prime} = \sum_{i \in T} (a_i.x^{\prime}) \geq \sum_{i \in T} b_i\) with equality only if \(a_i.x^{\prime} = b_i\ \forall i \in T\). This implies that \(x^{\prime} = x\) and that \(x\) uniquely minimizes the objective \(c.y\).

This proves that vertex, extreme point and basic feasible solution are equivalent terms.

Any bounded LP in standard form has an optimum at a basic feasible solution.

Consider an optimal \(x\) which is not a basic feasible solution. Being optimal, it is feasible, hence it is not basic. As in the previous proof, let \(T\) be the set of rows of the constraint matrix \(A\) for which the constraints are tight at \(x\). Since \(x\) is not a basic solution, \(T\) does not span \({\mathcal R}^n\). So, there is a vector \(d \neq 0\) such that \(a_i.d = 0\ \forall a_i \in T\). For a scalar \(\epsilon\) with sufficiently small absolute value, \(y=x+\epsilon d\) is feasible, and represents a line containing \(x\) in the direction \(d\). The objective function at \(y\) is \(c^Tx+\epsilon c^Td\). Since \(x\) is optimal, \(c^Td=0\), as otherwise, an \(\epsilon\) of the opposite sign can reduce the objective. This means, all feasible points on this line are optimal. One of the directions of motion on this line will reduce some \(x_i\). Keep going till some \(x_i\) reduces to 0. This results in one more tight constraint than before.

This technique can be repeated, till the solution becomes basic.

Thus, we can convert any feasible solution to a basic feasible solution of no worse value. In fact, this proof gives an algorithm for solving a linear program: evaluate the objective at all basic feasible solutions, and take the best one. Suppose there are \(m\) constraints and \(n\) variables. Since a basic solution is defined by a set of \(n\) tight constraints, there can be up to \(m \choose n\) basic feasible solutions. For each set of \(n\) constraints, a linear system of equalities has to be solved, which by Gaussian elimination takes \(O(n^3)\) time. In general, this algorithm takes time exponential in \(n\). Note that the output size is polynomial in \(n\), since the optimal solution is just the solution of a system of linear equalities.
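
The brute-force procedure above can be sketched directly. Below is a minimal 2-D instance (all numbers made up for illustration; NumPy assumed available), in canonical form \(Ax \geq b\) with the nonnegativity constraints folded into the rows of \(A\):

```python
import itertools
import numpy as np

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0],   # x1 + x2 >= 2
              [1.0, 0.0],   # x1 >= 0
              [0.0, 1.0],   # x2 >= 0
              [3.0, 1.0]])  # 3x1 + x2 >= 3
b = np.array([2.0, 0.0, 0.0, 3.0])

best_x, best_val = None, np.inf
n = A.shape[1]
# Try every set of n constraints: each linearly independent set is tight
# at exactly one point, a candidate basic solution.
for rows in itertools.combinations(range(A.shape[0]), n):
    sub_A, sub_b = A[list(rows)], b[list(rows)]
    if np.linalg.matrix_rank(sub_A) < n:
        continue                       # not linearly independent: no unique point
    x = np.linalg.solve(sub_A, sub_b)  # the unique point where these rows are tight
    if np.all(A @ x >= b - 1e-9):      # basic *feasible* solution?
        if c @ x < best_val:
            best_x, best_val = x, c @ x
```

On this instance the minimum is attained at the basic feasible solution \((2,0)\), matching the text's claim that it suffices to enumerate the vertices.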

The dual of a linear program

Given an LP in the standard form:

Minimize \(c.x\) subject to: \(Ax=b; x \geq 0\)

We call the above LP the primal LP. The decision version of the problem is: is the optimum \(c.x \leq \delta\)? This problem is in \(NP\), because if we are given a feasible solution with objective value \(\leq \delta\), we can verify in polynomial time that it satisfies these requirements. A more interesting question is whether this problem is in co-NP. We need to find an easily verifiable proof of the fact that there is no \(x\) which satisfies \(c.x \leq \delta\). To do this, we require the concept of duality.

Weak Duality

We seek a lower bound on the optimum. Consider a vector \(y\) (treat it as a row vector here). For any feasible \(x\), \(yAx=yb\) holds. If we require that \(yA\leq c\), then \(yb=yAx\leq cx\). Thus, \(yb\) is a lower bound on \(cx\), and in particular on the optimum \(cx\). To get the best lower bound, we need to maximize \(yb\). This new linear program:

Maximize \(yb\) subject to: \(yA\leq c\)

is called the dual linear program. (Note: the dual of the dual is the primal.) Thus the primal optimum is lower bounded by the dual optimum. This is called weak duality.

Weak Duality Consider the LP \(z = \min\{c.x\ |\ Ax = b, x \geq 0 \}\) and its dual \(w = \max\{y.b\ |\ yA \leq c\}\). Then \(z \geq w\).

If the primal is feasible and unbounded, then the dual is infeasible.
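
Weak duality is easy to check numerically. The sketch below uses a small made-up instance (all numbers hypothetical; NumPy assumed available): any dual-feasible \(y\) certifies a lower bound \(y.b\) on the cost \(c.x\) of every primal-feasible \(x\).

```python
import numpy as np

# Primal: min c.x s.t. Ax = b, x >= 0.  Dual: max y.b s.t. yA <= c.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
b = np.array([4.0, 5.0])
c = np.array([2.0, 3.0, 1.0])

x_feas = np.array([1.0, 2.0, 1.0])   # Ax = (4, 5) = b and x >= 0: primal-feasible
y_feas = np.array([1.0, 0.5])        # yA = (1.5, 2.0, 1.0) <= c: dual-feasible

assert np.allclose(A @ x_feas, b) and np.all(x_feas >= 0)
assert np.all(y_feas @ A <= c + 1e-12)

lower_bound = y_feas @ b   # 6.5
primal_cost = c @ x_feas   # 9.0; weak duality guarantees 6.5 <= 9.0
```

Here \(y.b = 6.5\) certifies that no primal-feasible point costs less than 6.5, without examining the primal's feasible region at all.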

Strong Duality

In fact, if both the primal and the dual are feasible, then the two optima are equal to each other. This is known as strong duality. In this section, we first present an intuitive explanation of the theorem, using a gravitational model. The formal proof follows that.

A gravitational model

Consider the LP min\(\{y.b\ |\ yA\geq c\}\). We represent the feasible region as a hollow polytope: for each constraint \(yA_i \geq c_i\), we physically build, say, a wooden floor. The vector \(b\) represents the direction "upwards". If a ball is dropped into the polytope, it will settle down at the lowest point, which is the optimum of the above LP. Note that any local minimum is a global minimum, since the feasible region of an LP is a convex polyhedron. At the equilibrium point, there is a balance of forces – the gravitational force and the normal reactions of the floors (constraints). Let \(x_i\) represent the magnitude of the force exerted by the \(i^{th}\) constraint. The direction of this force is given by the \(i^{th}\) column of \(A\). Then the total force exerted by all the constraints, \(Ax\), balances gravity \(b\): \(Ax=b\).

The physical world also gives the constraint \(x\geq 0\), since the floors' force always points outwards. Recall that each constraint in the LP represents a floor, built so that the ball cannot "fall through". If a constraint is not tight, i.e. \(yA_i >c_i\), then the ball is strictly above that floor and does not touch it. So, only those floors which the ball touches exert a force. This means that for the constraints which are not tight, the corresponding \(x_i\)'s are zero: \(x_i=0\) if \(yA_i>c_i\). This can be summarized as \[(c_i-yA_i)x_i=0\] This means \(x\) and \(y\) satisfy: \[y.b=\sum yA_ix_i=\sum c_ix_i=c.x\] But weak duality says that \(c.x \leq y.b\) for every feasible \(x\) and \(y\). Hence this \(x\) and \(y\) are the optimal solutions of their respective LPs. This implies strong duality – the optima of the primal and dual are equal.
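
The force-balance conditions \((c_i-yA_i)x_i=0\) are known as complementary slackness, and they can be checked numerically at the optimum. Below is a sketch on a small hypothetical instance, assuming SciPy's `linprog` is available:

```python
import numpy as np
from scipy.optimize import linprog

# The model's LP: min y.b s.t. yA >= c  (y is a row vector in R^2).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 1.0, 3.0])

# linprog uses <= rows, so yA >= c becomes -A.T y <= -c; y is unrestricted.
res_y = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(None, None)] * 2, method="highs")
# Its dual: max c.x s.t. Ax = b, x >= 0  ->  minimize -c.x for linprog.
res_x = linprog(-c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3, method="highs")

y, x = res_y.x, res_x.x
comp = (y @ A - c) * x   # (yA_i - c_i) * x_i, zero at every i at optimality
```

On this instance the loose constraint (the first one) carries zero force \(x_1 = 0\), the objectives agree, and every product \((yA_i-c_i)x_i\) vanishes.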

A formal proof

Strong Duality Consider \(w = min \{y.b\ |\ yA \geq c\}\) and \(z = max \{c.x\ |\ Ax = b, x \geq 0 \}\). Then \(z = w\).

Consider the LP min{\(y.b|yA\geq c\)}. Consider the optimal solution \(y^*\). Without loss of generality, ignore all the constraints that are loose for \(y^*\). If there are any redundant constraints, drop them. Clearly, these changes cannot alter the optimal solution. Dropping these constraints leads to a new \(A\) with fewer columns and a new shorter \(c\). We will prove that the dual of the new LP has an optimum equal in value to the primal. This dual optimal solution can be extended to an optimal solution of the dual of the original LP, by filling in zeros at places corresponding to the dropped constraints. The point is that we do not need those constraints to come up with the dual optimal solution.

After dropping those constraints, at most \(n\) tight constraints remain (where \(n\) is the length of the vector \(y\)). Since we have removed all redundancy, these constraints are linearly independent. In terms of the new \(A\) and \(c\), we have new constraints \(yA=c\). \(y^*\) is still the optimum.

Claim: There exists an \(x\), such that \(Ax=b\). Proof: Assume such an \(x\) does not exist, i.e. \(Ax=b\) is infeasible. Then "duality" for linear equalities implies that there exists a \(z\) such that \(zA=0\), but \(zb\neq 0\). Without loss of generality, assume \(z.b<0\) (otherwise, just negate \(z\)). Now consider \(y^*+z\). \((y^*+z)A=y^*A+zA=y^*A\). Hence, it is feasible. \((y^*+z).b=y^*.b+z.b < y^*.b\), which is better than the assumed optimum – a contradiction. So, there is an \(x\) such that \(Ax=b\). Call it \(x^*\).

Claim: \(y^*.b=c.x^*\). Proof: \(y^*.b=y^*.(Ax^*)=(y^*A).x^*=c.x^*\) (since \(Ax^*=b\) and \(y^*A=c\))

Claim: \(x^*\geq 0\) Proof: Assume the contrary. Then, for some \(i\), \(x_i^*<0\). Let \(c'=c+e_i\), where \(e_i\) is all 0's except for a 1 at the \(i^{th}\) position. Since \(A\) has full rank, \(yA = c'\) has a solution, say \(y'\). Besides, since \(c'\geq c\), \(y'\) is feasible for the original constraints \(yA\geq c\). But, \(y'.b = y'Ax^*=c'x^* < cx^* = y^*b\) (since \(c'_i\) is one larger than \(c_i\) and \(x_i^* < 0\)). This means \(y'\) gives a better objective value than the optimal solution – a contradiction. Hence, \(x^*\geq 0\).

Thus, there is an \(x^*\) which is feasible in the dual, and whose objective is equal to the primal optimum. Hence, \(x^*\) must be the dual optimal solution, using weak duality. Thus, the optima of primal and dual are equal.
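
Strong duality can be observed numerically. A sketch with SciPy's `linprog` on a small hypothetical standard-form LP (the data below is made up):

```python
import numpy as np
from scipy.optimize import linprog

# Primal: min c.x s.t. Ax = b, x >= 0 (standard form).
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
b = np.array([4.0, 5.0])
c = np.array([2.0, 3.0, 2.0])

primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3, method="highs")

# Dual: max y.b s.t. yA <= c, y unrestricted; linprog minimizes, so negate b.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 2, method="highs")

z, w = primal.fun, -dual.fun   # strong duality: z == w (both 9 on this data)
```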

Checking for feasibility of a linear system of inequalities and optimizing an LP are equally hard.

Optimizer \(\rightarrow\) Feasibility checker

Use the optimizer to optimize any arbitrary function with the linear system of inequalities as the constraints. This will automatically check for feasibility, since every optimal solution is feasible.

Feasibility checker \(\rightarrow\) Optimizer

We construct a reduction from the problem of finding an optimal solution of \(LP_1\) to the problem of finding a feasible solution of \(LP_2\). \(LP_1\) is \(\min \{c.x\ |\ Ax = b, x \geq 0\}\). Consider \(LP_2 = \min\{0.x\ |\ Ax = b, x \geq 0, yA \leq c, c.x = b.y\}\). For any feasible solution \((x, y)\) of \(LP_2\), the strong duality theorem implies that \(x\) is an optimal solution of \(LP_1\). Finding an optimal solution is thus no harder than finding a feasible solution.
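
The reduction can be sketched with SciPy's `linprog`, used here purely as a feasibility checker by giving it a zero objective (the instance data is hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
b = np.array([4.0, 5.0])
c = np.array([2.0, 3.0, 2.0])
m, n = A.shape

# LP2's variables are stacked as v = (x, y).
A_eq = np.vstack([
    np.hstack([A, np.zeros((m, m))]),   # Ax = b
    np.hstack([c, -b])[None, :],        # c.x = b.y
])
b_eq = np.concatenate([b, [0.0]])
A_ub = np.hstack([np.zeros((n, n)), A.T])       # yA <= c
b_ub = c
bounds = [(0, None)] * n + [(None, None)] * m   # x >= 0, y free

# Zero objective: any feasible point of LP2 will do.
res = linprog(np.zeros(n + m), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
x_opt = res.x[:n]   # the x-part of any feasible point is optimal for LP1
```

Every feasible \((x,y)\) of \(LP_2\) pins \(c.x\) to the common optimum, so whichever feasible point the checker returns, its \(x\)-part solves \(LP_1\).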

Rules for duals

Usually the primal is constructed as a minimization problem and hence the dual becomes a maximization problem. For the standard form, the primal is given by:

\[\mbox{min } c.x \mbox{ subject to: } Ax=b,\ x\geq 0\]

while the dual is given by:

\[\mbox{max } y.b \mbox{ subject to: } yA\leq c,\ y \mbox{ UIS}\]

(UIS = unrestricted in sign)

For a mixed form of the primal, the following table describes the dual. Each primal constraint corresponds to a dual variable, and each primal variable corresponds to a dual constraint:

\[\begin{array}{ll} \mbox{Primal (min)} & \mbox{Dual (max)}\\ a_ix\geq b_i & y_i\geq 0\\ a_ix\leq b_i & y_i\leq 0\\ a_ix=b_i & y_i \mbox{ UIS}\\ x_j\geq 0 & yA_j\leq c_j\\ x_j\leq 0 & yA_j\geq c_j\\ x_j \mbox{ UIS} & yA_j=c_j \end{array}\]

Each variable in the primal corresponds to a constraint in the dual, and vice versa. For a maximization, an upper bound constraint is a "natural" constraint, while for a minimization, a lower bound constraint is natural. If the constraint is in the natural direction, then the corresponding dual variable is non-negative.

An interesting observation is that the tighter the primal gets, the looser the dual gets. For instance, an equality constraint in the primal leads to an unrestricted variable in the dual. Adding more constraints in the primal leads to more variables in the dual, hence more flexibility. This makes sense because the dual bounds the primal: since adding more constraints leads to a worse optimum in the primal, this is reflected by a better optimum in the dual.
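
These rules can be encoded as a small lookup. The helper below is a hypothetical sketch (not from the lecture), for a primal written as a minimization: each primal constraint sense fixes the sign of the matching dual variable, and each primal variable sign fixes the sense of the matching dual constraint.

```python
# Dualization "cookbook" for a minimization primal (hypothetical helper).
ROW_TO_DUAL_VAR = {">=": ">= 0", "<=": "<= 0", "=": "UIS"}
VAR_TO_DUAL_ROW = {">= 0": "<=", "<= 0": ">=", "UIS": "="}

def dualize(constraint_senses, variable_signs):
    """Return (dual variable signs, dual constraint senses) for a min primal."""
    dual_var_signs = [ROW_TO_DUAL_VAR[s] for s in constraint_senses]
    dual_row_senses = [VAR_TO_DUAL_ROW[s] for s in variable_signs]
    return dual_var_signs, dual_row_senses

# Standard form min{c.x | Ax = b, x >= 0}: the dual is max{y.b | yA <= c}, y UIS.
dual_vars, dual_rows = dualize(["=", "="], [">= 0", ">= 0", ">= 0"])
```

Running it on the standard form reproduces the familiar dual: unrestricted dual variables and \(yA \leq c\) constraints.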

Shortest Path - an example

Consider the problem of finding the shortest path in a graph. Given a graph \(G\) with edge costs, a specified source node \(s\) and a sink node \(t\), the following linear program finds the shortest distance from \(s\) to \(t\):

\[w=\mbox{max }(d_t-d_s)\] s.t. \(d_j-d_i\leq c_{ij}, \hspace{0.3in} \forall \mbox{ edges } (i,j)\)

In this formulation, \(d_i\) represents the distance of node \(i\) from the source node \(s\). The constraints are essentially the triangle inequalities – the distance from the source to a node \(j\) should not be more than the distance to node \(i\) plus the cost \(c_{ij}\) of the edge from \(i\) to \(j\). Intuitively, one can imagine stretching the network physically, to increase the source-destination distance. When we cannot pull any further without breaking an edge, we have found a shortest path.

More formally, let \(p\) be a shortest path from \(s\) to \(t\), and let \(\delta(v)\) denote the shortest-path distance from \(s\) to a vertex \(v\). We must have \(d(t) \geq \delta(t)\), because we are maximizing \(d(t)\), and setting \(d(i) = \delta(i)\) for every vertex \(i\) is a feasible solution. Now, we show that we cannot have \(d(t) > \delta(t)\). For suppose it were so; let \(v\) be the first vertex on \(p\) such that \(d(v) > \delta(v)\), and let \(u\) be its predecessor on \(p\). Since \(p\) is a shortest path to \(v\), \(\delta(v) = \delta(u) + c_{u,v} \geq d(u) + c_{u,v} \geq d(v)\), contradicting our assumption.

The dual to this program is found thus. The constraint matrix in the primal has a row for every edge \((i,j)\), and a column for every node. The row corresponding to \((i,j)\) has a +1 in the \(j^{th}\) column and a -1 in the \(i^{th}\) column, and zeros elsewhere.

Using this, we conclude that the dual has a variable for each pair \((i,j)\), say \(y_{ij}\).

It has a constraint for each node \(i\). The constraint has a coefficient of +1 for each edge entering node \(i\) and a -1 for each edge leaving \(i\). The right side for the constraints are -1 for the node \(s\) constraint, 1 for the node \(t\) constraint, and 0 for others, based on the objective function in the primal. Moreover, all the constraints are equality constraints, since the \(d_i\) variables were unrestricted in sign in the primal.

The dual variables will have to have a non-negativity constraint as well, since the constraints in the primal were "natural" (upper bounds for a maximization).

The objective is to minimize \(\sum_{i,j}c_{ij}y_{ij}\), since the right side of the primal constraints are \(c_{ij}\).

Thus the dual is:

\[z=\mbox{min }\sum_{i,j}c_{ij}y_{ij}\] s.t. \(\sum_{j}y_{ji}-\sum_{j}y_{ij}=\left\{\begin{array}{rl}-1 & i=s\\ 1 & i=t\\ 0 & \mbox{otherwise}\end{array}\right. \hspace{0.3in}\forall i, \hspace{0.5in} y_{ij}\geq 0\ \ \forall (i,j)\)

This is precisely the linear program to solve the minimum cost unit flow, in a gross flow formulation. The constraints correspond to the flow conservation at all nodes except at the source and sink. The value of the flow is forced to be 1. This makes sense intuitively because if we treat edge weights as cost, then the shortest path is precisely the cheapest way to send a unit of flow.
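
The primal LP above can be handed directly to an off-the-shelf solver. A sketch on a small hypothetical graph, assuming SciPy is available (node names, edges and costs are made up):

```python
import numpy as np
from scipy.optimize import linprog

# Edge list (i, j, cost); source s = 0, sink t = 3.
edges = [(0, 1, 1.0), (1, 3, 2.0), (0, 2, 4.0), (2, 3, 1.0), (0, 3, 5.0)]
n = 4
s, t = 0, 3

# One row d_j - d_i <= c_ij per edge.
A_ub = np.zeros((len(edges), n))
b_ub = np.zeros(len(edges))
for r, (i, j, cost) in enumerate(edges):
    A_ub[r, j], A_ub[r, i], b_ub[r] = 1.0, -1.0, cost

obj = np.zeros(n)          # maximize d_t - d_s  ->  minimize d_s - d_t
obj[s], obj[t] = 1.0, -1.0
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n,
              method="highs")
shortest = -res.fun        # the s-t shortest-path length
```

On this graph the LP optimum is 3, the length of the path \(0 \to 1 \to 3\); the distance labels themselves are only determined up to an additive shift, but the objective value is unique.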

Max Flow - Another Example

Consider the problem of finding a maximum flow from a source \(s\) to a sink \(t\) in a graph \(G\). In order to formulate this as an LP, we will add an infinite capacity edge from \(t\) to \(s\). Our objective is then to maximize the flow on this edge while preserving the conservation constraints, the capacity constraints, and the non-negative flow constraints. Think of non-existing edges as having capacity 0. Let \(x_{v,w}\) be the flow value on edge \((v,w)\). The linear program is then:

Primal: maximize \(x_{t,s}\) subject to:
\(x_{v,w} \geq 0 \ \ \forall (v,w)\) (a non-negative flow value for each edge)
\(x_{v,w} \leq u_{v,w} \ \ \forall (v,w)\) (capacity constraints)
\(\sum_{w} x_{v,w} - x_{w,v} = 0 \ \ \forall v\) (conservation constraints)

What is the dual of this linear program? We have two types of constraints. There is a conservation constraint for each vertex \(v\), which will become a variable \(z_v\) in the dual, and a capacity constraint for each edge \((v,w)\), which will become a variable \(y_{v,w}\). Note that the conservation constraints are equalities, so by the cookbook, \(z_v\) is unrestricted in sign (\(\forall v\)). The capacity constraints are in the "natural direction", so we must have \(y_{v,w} \geq 0 \ \ \forall (v,w)\).

What are the constraints of the dual? The LHS (left-hand side) of the constraints is derived from the primal variables, so there will be a constraint for each edge. To find the constraints, we must examine the constraint matrix in the primal. In the primal, the rows correspond to constraints, while the columns correspond to variables. In the dual, we will transpose this matrix, so the columns will now correspond to constraints. There are \(m+n\) rows in the constraint matrix: \(m\) for the capacity constraints (\(y_{v,w}\) in the dual), and \(n\) for the conservation constraints (\(z_v\) in the dual). There are \(m\) columns: one for each variable \(x_{v,w}\).

The capacity constraint corresponding to an edge \((v,w)\) is simple: there is a 1 in the \(x_{v,w}\) column, and a zero everywhere else. The conservation constraint for a node \(v\) is \(\sum_{w} x_{v,w} - x_{w,v} = 0\), so we will have a 1 in every column \(x_{v,w}\), a -1 in every column \(x_{w,v}\), and a zero everywhere else. If we transpose this matrix, we get that a column \(x_{v,w}\) has a 1 in the row \(z_v\), a -1 in the row \(z_w\), a 1 in the row \(y_{v,w}\), and a zero everywhere else. Thus, the LHS of the dual constraint corresponding to \(x_{v,w}\) is just \(z_v - z_w + y_{v,w}\). The direction of the inequality is \(\geq\), because we had \(x_{v,w} \geq 0\) (see cookbook). Now, we must find the value on the RHS. To do this, we look at the primal objective function. For \((v,w) \neq (t,s)\), \(x_{v,w}\) does not appear in the objective, so we have a 0. For \((v,w) = (t,s)\), we have a 1. Finally, the dual objective is \(y.b\), where \(b\) collects the right-hand sides of the primal constraints; here the capacity constraints are the only ones with non-zero right-hand sides, so the objective is \(\sum y_{v,w}u_{v,w}\). Thus, we have:

Dual: minimize \(\sum u_{v,w}y_{v,w}\) subject to:
\(y_{v,w} \geq 0\)
\(z_v - z_w + y_{v,w} \geq 0 \ \ \forall (v,w) \neq (t,s)\)
\(z_t - z_s + y_{t,s} \geq 1\)

Note that \(u_{t,s} = \infty\) (since we asserted that we are putting an infinite capacity arc from \(t\) to \(s\)), so an optimal solution to the dual will have \(y_{t,s} = 0\). By using this, and rearranging, we get the constraints:
\(y_{v,w} \geq 0\)
\(z_w \leq z_v + y_{v,w} \ \ \forall (v,w) \neq (t,s)\)
\(z_t - z_s \geq 1\)

We can interpret the dual in the following way. View \(u_{v,w}\) as the cross-sectional area of \((v,w)\), and \(y_{v,w}\) as the length of \((v,w)\). Our goal is to minimize the total volume, while maintaining a distance of at least 1 between \(s\) and \(t\). Duality is a very useful concept, especially because it helps to view the optimization problem on hand from a different perspective, which might be easier to handle.
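
The max-flow formulation above can be sketched with SciPy's `linprog` on a small hypothetical graph. For brevity, the capacity constraints are folded into the variable bounds rather than written as explicit \(\leq\) rows; the return arc \((t,s)\) gets no upper bound, standing in for its infinite capacity:

```python
import numpy as np
from scipy.optimize import linprog

# Capacitated edges; s = 0, t = 3, plus the uncapacitated return arc (t, s).
cap = {(0, 1): 2.0, (0, 2): 2.0, (1, 3): 1.0, (2, 3): 2.0, (1, 2): 1.0}
arcs = list(cap) + [(3, 0)]
n_nodes, m = 4, len(arcs)

# Conservation at every node: outgoing minus incoming flow equals 0.
A_eq = np.zeros((n_nodes, m))
for k, (v, w) in enumerate(arcs):
    A_eq[v, k] += 1.0
    A_eq[w, k] -= 1.0
b_eq = np.zeros(n_nodes)

bounds = [(0.0, cap.get(a)) for a in arcs]   # cap.get((3, 0)) is None: no upper bound
obj = np.zeros(m)
obj[arcs.index((3, 0))] = -1.0               # maximize x_ts -> minimize -x_ts

res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
max_flow = -res.fun
```

On this graph the LP value is 3, matching the capacity of the cut formed by the edges into \(t\), exactly the min-cut interpretation of the dual.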


Duality in Linear Programming

(Operations Research; post last modified 23 July 2022)

Soon after Dantzig presented the linear programming problem, John von Neumann presented the duality theorem for linear optimisation.

In mathematical optimisation theory, the principle of duality states that every optimisation problem may be viewed from either of two perspectives: the primal problem or the dual problem.


The solution to the dual problem provides a lower bound to the solution of the primal (minimisation) problem (Ekeocha, 2018). However, the optimal values of the primal and dual problems need not always be equal. The difference between the two is called the duality gap. The duality gap is zero for convex optimisation problems that satisfy a constraint qualification condition.

The duality theory in linear programming is concerned with the study of the relationship between two related linear programming problems, where if the primal is a maximisation problem, then the dual is a minimisation problem and vice versa.

Duality Theorem in Linear Programming Problems

The duality theorem in linear programming states that for every linear programming problem, there exists another, related linear programming problem that can be derived from it.

Duality is useful for the following reasons:

Sometimes, initial feasible solution to the dual is easier


Sometimes, the dual problem is easier to solve for an initial feasible solution than the primal problem. For example, if the primal is a minimisation problem, the constraints are often of the form Ax ≥ b, x ≥ 0, for b ≥ 0. The dual constraints would then likely be of the form ATy ≤ c, y ≥ 0, for c ≥ 0. For the dual problem, the origin is feasible but not for the primal.

Sometimes, solving the dual is easier

Solving the dual is easier when the number of decision variables is significantly lower than the number of slack or surplus variables. A primal problem with many constraints and few variables can be converted into a dual problem with few constraints and many variables. If the dual problem is solved, it is still possible to obtain the solution of the primal problem from the simplex tableau.

Sensitivity analysis

Sensitivity analysis is a very important part of almost every linear programming study. Because most of the parameter values used in the original model are just estimates of future conditions, it is worthwhile to study the effect on the optimal solution if conditions change. Duality allows such sensitivity analysis of a linear programming study.

Shadow prices

Shadow prices are provided by the optimal solution of the dual problem. Shadow prices indicate the resulting change in contribution margin (in a maximisation problem) or the change in cost (in a minimisation problem) that occurs when a constraint is relaxed. Shadow prices depict the sensitivity of profit to resource quantities.

Interpretation of shadow price is quite valuable for two reasons:

  • Shadow prices directly indicate the marginal value of an additional unit of any of the resources.
  • Shadow prices are useful in determining the opportunity cost of allocating resources to an activity, i.e., how activities are ‘‘priced out’’ relative to other activities.
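
The points above can be made concrete with a small hypothetical product-mix LP (SciPy assumed available): the shadow price of a resource is the change in the optimal contribution margin when one more unit of that resource becomes available.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical product-mix LP: max 3x1 + 5x2 subject to three resource limits.
A_ub = np.array([[1.0, 0.0],
                 [0.0, 2.0],
                 [3.0, 2.0]])
b_ub = np.array([4.0, 12.0, 18.0])
c = np.array([3.0, 5.0])

def optimum(b):
    """Optimal contribution margin for resource capacities b."""
    res = linprog(-c, A_ub=A_ub, b_ub=b, bounds=[(0, None)] * 2, method="highs")
    return -res.fun

base = optimum(b_ub)
relaxed = b_ub.copy()
relaxed[2] += 1.0                    # one more unit of the third resource
shadow_3 = optimum(relaxed) - base   # the third resource's shadow price
```

Here the base optimum is 36 and relaxing the third constraint by one unit raises it to 37, so that resource's shadow price is 1, exactly the value the corresponding dual variable would take.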

An economic interpretation of duality

Interpreting the dual variables from the economic (loss) point of view is important for future decisions about the activities being programmed.

Primal and Dual Problems

Below are some key features of the relationship between primal and dual problems:

  • The dual of the dual is the primal.
  • If either one of the problems has an optimal solution, then so does the other, and the two optimum values are equal.
  • If one of the problems is infeasible, the other is either unbounded or infeasible.
  • If one of the problems has an unbounded solution, the other problem is infeasible.
  • If a feasible solution exists for the primal problem but not for the dual, the primal will not have a finite optimum solution, and vice versa.
  • The value of the objective function for any feasible solution of the maximisation problem is no greater than the value of the objective function for any feasible solution of the minimisation problem.

Converting Primal Into Dual Problem and Vice Versa

Steps for converting primal into a dual problem and vice versa are summarised below:

Step 1: Before formulating the dual problem, write the original linear programming problem in its standard form.

Step 2: Identify the coefficients of the variables of the primal problem; transposed, these become the coefficients of the constraint equations of the dual problem.

Step 3: Identify the right-hand-side constants of the primal constraints; these become the coefficients of the dual variables in the objective function of the dual problem.

Step 4: If the primal problem is of maximisation type, then the dual is of minimisation type and vice versa. The constraints are also reversed: if the inequality sign is ≤ in the primal problem, it becomes ≥ in the dual problem and vice versa.

Converting a primal into dual problem can be understood through the example given below:

Maximise Z = 40 (x1) + 75 (x2)

Subject to:

4 (x1) + 3 (x2) ≤ 180
3 (x1) + 5 (x2) ≤ 220
x1, x2 ≥ 0

The original linear programming problem can be changed to its dual as:

Minimise C = 180 (y1) + 220 (y2)

Subject to:

4 (y1) + 3 (y2) ≥ 40
3 (y1) + 5 (y2) ≥ 75
y1, y2 ≥ 0
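
The conversion can be checked numerically; a sketch with SciPy's `linprog` (assumed available), solving both the example's primal and its dual. Strong duality makes the two optima coincide.

```python
import numpy as np
from scipy.optimize import linprog

# Primal: max 40x1 + 75x2 s.t. 4x1 + 3x2 <= 180, 3x1 + 5x2 <= 220, x >= 0.
A = np.array([[4.0, 3.0],
              [3.0, 5.0]])
b = np.array([180.0, 220.0])
c = np.array([40.0, 75.0])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
# Dual: min 180y1 + 220y2 s.t. 4y1 + 3y2 >= 40, 3y1 + 5y2 >= 75, y >= 0,
# written with <= rows for linprog as -A.T y <= -c.
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2, method="highs")

primal_opt, dual_opt = -primal.fun, dual.fun   # equal by strong duality
```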

Economic Interpretation of Duality

Let us look at the economic interpretation of dual variables.

You read in the previous sections that for any pair of feasible primal and dual solutions:

“Objective value in the maximisation problem ≤ Objective value in the minimisation problem”

From this relationship, you can infer that if the total returns from all the activities is less than the resources cost, the corresponding primal and dual solutions are not optimal. Optimality is only met when the resources have been maximally exploited, which is only possible when the input equals the output.

For economic stability, the two quantities must be equal.

An LP problem can be considered as a resource allocation model whose aim is to maximise profit or minimise cost with limited resources. When we look at the problem with this perspective, the associated dual problem also provides rather interesting economic interpretations of the LP resource allocation model.

You saw in the last sections, that the primal and the dual are related in a mathematical sense. In this section, you will see how they are related in the economic sense.

If there are n variables and m resource constraints in the primal problem, there will be m variables and n constraints in the dual problem. Dual variables Ui directly correspond to the primal constraints, which means that dual variable (U1) corresponds with the first primal constraint, dual variable (U2) with the second primal constraint, etc.

Dual variables can therefore be interpreted as the marginal value of each constraint’s resources. These dual variables are known as shadow prices and specify the marginal value of each resource. Likewise, primal variables Xi also directly correspond to the dual constraints, i.e., X1 corresponds to the first dual constraint, X2 corresponds to the second dual constraint, and so on.

Let us use the following example to understand the economic importance of the dual. Consider the following primal problem:

Maximise Z = 200 (X1) + 180 (X2)

subject to:

X1 + X2 ≤ 120
25X1 + 20X2 ≤ 280
X1, X2 ≥ 0

The associated dual problem is:

Minimise C = 120U1 + 280U2

Subject to:

U1 + 25U2 ≥ 200
U1 + 20U2 ≥ 180
U1, U2 ≥ 0

The dual variable U1 gives the marginal value of the first resource and the dual variable U2 gives the marginal value of the second resource. The first dual constraint requires the value of the resources used in producing one unit of X1 to be greater than or equal to the marginal revenue contribution of X1.

In the primal problem, X1 uses one unit of first resource and 25 units of the second resource, which gives a return of 200 and X2 uses one unit of first resource and 20 units of second resource, which gives a return of 180. In the dual problem, the use of first resource times its marginal value (1U1) plus use of second resource times its marginal value (25U2) must be greater than or equal to the returns when one unit of X1 is produced (which is 200).

For the second constraint, the marginal value of the first resource plus 20 times the marginal value of the second resource has to be greater than or equal to the returns when one unit of X2 is produced (which is 180).

The values of the dual variable are thus so constrained that the marginal value of the resources utilised by each primal variable is at least as much as the marginal profit contribution of that variable.

The objective function of the dual problem minimises the total marginal value of the available resources. In the above problem, this equals the capacity of the first resource times the marginal value of the first resource (120U1) plus the capacity of the second resource times the marginal value of the second resource (280U2).

Thus, minimising the total marginal value of the resource capacities, subject to constraints requiring that the marginal value of the resources utilised for producing each product be no less than the marginal value of that product, gives rise to the dual variables. Slack in a dual constraint indicates the amount by which cost exceeds returns.

This can be viewed as the decision problem faced by a resource procurer in a competitive market: the procurer must pay at least as much for the resources as the value of the goods produced using them, but will try to minimise the total cost of the resources procured.
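The example above can be checked numerically. The sketch below assumes the primal objective Maximize Z = 200X1 + 180X2 (implied by the stated returns of 200 and 180 per unit) with constraints X1 + X2 ≤ 120 and 25X1 + 20X2 ≤ 280; the candidate solutions X = (0, 14) and U = (0, 9) are illustrative values of our own, not quoted from the text.

```python
# Primal (objective inferred from the stated per-unit returns of 200 and 180):
#   max Z = 200*X1 + 180*X2
#   s.t.   X1 +    X2 <= 120   (first resource)
#       25*X1 + 20*X2 <= 280   (second resource),  X1, X2 >= 0
# Dual:  min C = 120*U1 + 280*U2
#   s.t.  U1 + 25*U2 >= 200,  U1 + 20*U2 >= 180,  U1, U2 >= 0

x1, x2 = 0, 14        # candidate primal solution (hypothetical, for illustration)
u1, u2 = 0, 9         # candidate dual solution (shadow prices)

assert x1 + x2 <= 120 and 25*x1 + 20*x2 <= 280   # primal feasible
assert u1 + 25*u2 >= 200 and u1 + 20*u2 >= 180   # dual feasible

Z = 200*x1 + 180*x2
C = 120*u1 + 280*u2
assert Z <= C           # weak duality holds for any feasible pair
assert Z == C == 2520   # equal objectives certify both points are optimal
```

Because the two objective values coincide, weak duality alone certifies that both candidate points are optimal.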

Primal Dual Relationships

Any change made in the original LP model changes the elements of the current optimal tableau, which may affect the optimality and/or the feasibility of the current solution.

An understanding of these relationships is important, as they form the basis for the economic interpretation of the LP model and for post-optimality analysis. Let us examine the main primal-dual relationships:

Relationship 1: The dual of the dual problem is the primal problem; each of the primal and dual problems is the dual of the other. This symmetry means that any fact that holds for one of the two problems has a corresponding fact that holds for the other.

Relationship 2: The feasible and optimal solutions of the primal and dual problems are closely related. If feasible solutions to both problems exist, then any feasible value of the (maximisation) primal is a lower bound on any feasible value of the dual, while any feasible value of the (minimisation) dual is an upper bound on any feasible value of the primal.

Relationship 3: If either of the primal and dual problems is unbounded, there is no feasible solution to the other. This is a consequence of Relationship 2: if a feasible solution to the dual problem exists, then the primal problem is bounded from above.

Similarly, if a feasible solution to the primal problem exists, the dual problem is bounded from below. It follows that if either problem is unbounded, the other can have no feasible solution.

Relationship 4: The strong duality theorem states that if an optimal solution to either of the primal and dual problems exists, then an optimal solution to the other also exists and the two optimal values are equal. That is, if x is an optimal solution for the primal problem and y is an optimal solution for the dual problem, then cx = yb.

Relationship 5: The weak duality property relates any pair of feasible solutions to the primal and dual problems. If x is a feasible solution for the primal problem and y is a feasible solution for the dual problem, then cx ≤ yb.

Relationship 6: If either of the primal and dual problems is infeasible, then the other problem is infeasible or unbounded.
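Relationships 2 and 5 can be verified mechanically for any pair of feasible solutions. The helper below is a minimal sketch (the function name and the sample points are our own, not from the text): it checks feasibility of a primal point x and a dual point y for max{cx : Ax ≤ b, x ≥ 0} and returns the two objective values, which weak duality guarantees are ordered.

```python
def check_weak_duality(c, A, b, x, y):
    """For a primal max{c.x : A x <= b, x >= 0} and its dual
    min{y.b : yA >= c, y >= 0}, assert feasibility of x and y and
    return (primal value, dual value); weak duality says the first
    never exceeds the second."""
    m, n = len(A), len(c)
    # primal feasibility: A x <= b and x >= 0
    assert all(sum(A[i][j] * x[j] for j in range(n)) <= b[i] for i in range(m))
    assert all(x[j] >= 0 for j in range(n))
    # dual feasibility: yA >= c and y >= 0
    assert all(sum(y[i] * A[i][j] for i in range(m)) >= c[j] for j in range(n))
    assert all(y[i] >= 0 for i in range(m))
    return (sum(c[j] * x[j] for j in range(n)),
            sum(y[i] * b[i] for i in range(m)))

# Any feasible pair brackets the common optimal value (data from the
# resource example above; the sample points are arbitrary feasible choices):
p, d = check_weak_duality(c=[200, 180],
                          A=[[1, 1], [25, 20]], b=[120, 280],
                          x=[0, 10], y=[0, 10])
assert p <= d   # 1800 <= 2800
```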

Dual Problem: Linear Programming

Solving the dual problem.

Example: Duality

Maximize z = 40w1 + 50w2

subject to

2w1 + 3w2 ≤ 3
4w1 + 2w2 ≤ 3
w1, w2 ≥ 0

After adding slack variables, we have

Maximize z = 40w1 + 50w2 + 0x3 + 0x4

subject to

2w1 + 3w2 + x3 = 3
4w1 + 2w2 + x4 = 3
w1, w2, x3, x4 ≥ 0

where x3 and x4 are slack variables.

Initial basic feasible solution:

w1 = 0, w2 = 0, x3 = 3, x4 = 3, z = 0

Table 1: Simplex Method


The optimal solution is: w1 = 3/8, w2 = 3/4, with z = 40 × 3/8 + 50 × 3/4 = 105/2.

In the primal problem, note that the values of zj − cj under the surplus variables x3 and x4 were 3/8 and 3/4. In the dual problem, these values are the optimal values of the dual variables w1 and w2.

The optimal values of the dual variables are often called shadow prices.

Further, the values of the objective functions in both problems are the same (i.e., 105/2).
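These optimal values can be verified with exact rational arithmetic. The sketch below checks that w = (3/8, 3/4) is feasible with z = 105/2, and that the associated primal (obtained by transposing the problem above: minimise 3y1 + 3y2 subject to 2y1 + 4y2 ≥ 40 and 3y1 + 2y2 ≥ 50) attains the same value at y = (15, 5/2); the point y is derived here for illustration, not quoted from the text.

```python
from fractions import Fraction as F

# Problem from the text: max z = 40*w1 + 50*w2
#   s.t. 2*w1 + 3*w2 <= 3,  4*w1 + 2*w2 <= 3,  w1, w2 >= 0
w1, w2 = F(3, 8), F(3, 4)              # optimal solution from the simplex table
assert 2*w1 + 3*w2 <= 3 and 4*w1 + 2*w2 <= 3   # feasible (both constraints tight)
z = 40*w1 + 50*w2
assert z == F(105, 2)                  # z = 105/2

# Associated primal (transpose of the problem above):
#   min 3*y1 + 3*y2  s.t.  2*y1 + 4*y2 >= 40,  3*y1 + 2*y2 >= 50,  y1, y2 >= 0
y1, y2 = F(15), F(5, 2)                # derived by solving both constraints as equalities
assert 2*y1 + 4*y2 >= 40 and 3*y1 + 2*y2 >= 50  # feasible
assert 3*y1 + 3*y2 == z                # strong duality: equal optimal values
```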

Important Primal-Dual Results

  • If one problem has an unbounded solution, then the other problem has no feasible solution.
  • The optimal solution of either problem gives complete information about the optimal solution of the other.
  • If either the primal or dual problem has a finite optimal solution, then the other also has a finite optimal solution.
  • The dual of the dual is the primal.



Linear programming is a mathematical technique used to find the optimal solution of a linear function. It uses simple assumptions for optimizing the given function. Linear programming has huge real-world applications and is used to solve various types of problems.

Linear programming is used in various industries such as shipping industries, manufacturing industries, transportation industries, telecommunications, and others.

The term “linear programming” consists of two words, linear and programming: the word linear describes the relationship between the variables of a problem, each of degree one, and the word programming refers to the step-by-step procedure used to solve such problems.

In this article, we will learn about linear programming, its examples, formulas, and other concepts in detail.


What is Linear Programming? 


Linear programming or Linear optimization is a technique that helps us to find the optimum solution for a given problem, an optimum solution is a solution that is the best possible outcome of a given particular problem.

In simple terms, it is the method to find out how to do something in the best possible way. With limited resources, you need to do the optimum utilization of resources and achieve the best possible result in a particular objective such as least cost, highest margin, or least time. 

Situations that require a search for the best values of the variables subject to certain constraints are where we use linear programming. These problems typically cannot be handled by the usual calculus or simple numerical techniques.

Linear Programming Definition

Linear programming is the technique used for optimizing a particular scenario. Using linear programming provides us with the best possible outcome in a given situation. It uses all the available resources in a manner such that they produce the optimum result.

Components of Linear Programming

The basic components of a linear programming (LP) problem are:

  • Decision Variables: The variables you want to determine in order to achieve the optimal solution.
  • Objective Function: A mathematical expression that represents the goal you want to achieve.
  • Constraints: Limitations or restrictions that the decision variables must satisfy.
  • Non-Negativity Restrictions: In most real-world scenarios, decision variables cannot be negative.

Additional Characteristics of Linear Programming

  • Finiteness: The number of decision variables and constraints in an LP problem is finite.
  • Linearity: The objective function and all constraints must be linear functions of the decision variables, i.e., every variable appears with degree one.

Linear Programming Examples

We can understand the situations in which linear programming is applied with the help of the example discussed below.

Suppose a delivery man has to deliver 8 packets in a day to the different locations of a city. He has to pick all the packets from A and has to deliver them to points P, Q, R, S, T, U, V, and W. The distance between them is indicated using the lines as shown in the image below. The shortest path followed by the delivery man is calculated using the concept of Linear Programming.

Linear Programming Problems

Linear Programming Problems (LPP) involve optimizing a linear function to find the optimal value solution for the function. The optimal value can be either the maximum value or the minimum value.

In LPP, the linear functions are called objective functions. An objective function can have multiple variables, which are subjected to conditions and have to satisfy the linear constraints .

Types of Linear Programming Problems

There are many different types of linear programming problems (LPP), but we will deal with three major ones in this article.

Manufacturing Problems

Manufacturing problems deal with the number of units that should be produced or sold to maximize profits when each product requires fixed amounts of manpower, machine hours, and raw materials.

Diet Problems

Diet problems calculate the quantities of different kinds of constituents to be included in the diet so as to minimise cost, subject to the availability of foods and their prices.

Transportation Problems

Transportation problems determine the transportation schedule that finds the cheapest way of moving a product from plants/factories situated at different locations to different markets.

Linear Programming Formula

A linear programming problem consists of:

  • Decision variables
  • Objective function
  • Constraints
  • Non-Negative restrictions

Decision variables are the variables x, and y, which decide the output of the linear programming problem and represent the final solution. 

The objective function , generally represented by Z, is the linear function that needs to be optimized according to the given condition to get the final solution. 

The restrictions imposed on decision variables that limit their values are called constraints.

Now, the general formula of a linear programming problem is,

Objective Function : Z = ax + by

Constraints: cx + dy ≥ e, px + qy ≤ r

Non-Negative restrictions: x ≥ 0, y ≥ 0

In the above formulation, x and y are the decision variables.

How to Solve Linear Programming Problems

Before solving a linear programming problem, we first have to formulate it according to the standard parameters. The steps for solving linear programming problems are:

Step 1: Identify the decision variables in the problem.
Step 2: Build the objective function and check whether it is to be minimized or maximized.
Step 3: Write down all the constraints of the problem.
Step 4: Add the non-negativity restrictions on the decision variables.
Step 5: Solve the linear programming problem, generally using either the simplex or the graphical method.

Linear Programming Methods

We use various methods for solving linear programming problems. The two most common methods are:

  • Simplex Method
  • Graphical Method

Let us learn about these two methods in detail.

Linear Programming Simplex Method

One of the most common methods to solve a linear programming problem is the simplex method. In this method, we repeat a specific pivoting procedure until an optimum solution is achieved.

The steps required to solve linear programming problems using the simplex method are,

Step 1: Formulate the linear programming problem from the given constraints.
Step 2: Convert all the inequalities into equations by adding a slack variable to each inequality wherever required.
Step 3: Construct the initial simplex table, representing each constraint equation in a row and writing the objective function in the bottom row.
Step 4: Identify the greatest negative entry in the bottom row; the column containing it is called the pivot column.
Step 5: Divide the entries of the right-most column by the corresponding entries of the pivot column, excluding the bottom row. The row with the least non-negative ratio is the pivot row, and the pivot element lies at the intersection of the pivot row and the pivot column.
Step 6: Using row operations with the pivot element, make all other entries in the pivot column zero.
Step 7: If there are no negative entries left in the bottom row, stop; otherwise repeat from Step 4.
Step 8: The final simplex table gives the solution to the problem.
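The steps above can be sketched as a compact tableau implementation. This is an illustrative sketch for maximisation problems with ≤ constraints and non-negative right-hand sides (so the slack basis is an initial basic feasible solution), not a production solver; it uses exact fractions to avoid rounding.

```python
from fractions import Fraction

def simplex_max(c, A, b):
    """Tableau simplex for max c.x subject to A x <= b, x >= 0, with b >= 0
    (so the slack variables form an initial basic feasible solution)."""
    m, n = len(A), len(c)
    c = [Fraction(v) for v in c]
    A = [[Fraction(v) for v in row] for row in A]
    b = [Fraction(v) for v in b]
    # Step 3: initial table -- one row per constraint (with slack columns),
    # objective row (stored as -c) at the bottom.
    T = [A[i] + [Fraction(int(i == j)) for j in range(m)] + [b[i]] for i in range(m)]
    T.append([-v for v in c] + [Fraction(0)] * (m + 1))
    basis = list(range(n, n + m))          # slack variables start as the basis
    while True:
        # Step 4: pivot column = most negative entry in the bottom row.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= 0:
            break                          # Step 7: no negative entries -> optimal
        # Step 5: minimum-ratio test picks the pivot row.
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 0]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, row = min(ratios)
        # Step 6: pivot -- make the pivot column a unit vector.
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for r in range(m + 1):
            if r != row and T[r][col]:
                factor = T[r][col]
                T[r] = [v - factor * w for v, w in zip(T[r], T[row])]
        basis[row] = col
    x = [Fraction(0)] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return x, T[-1][-1]                    # optimal point and optimal value

# The small problem solved earlier on this page:
x, z = simplex_max([40, 50], [[2, 3], [4, 2]], [3, 3])
# x == [3/8, 3/4], z == 105/2
```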

Linear Programming Graphical Method

The graphical method is another method, besides the simplex method, used to solve linear programming problems. As the name suggests, it uses graphs to solve the given problem. For problems in two variables it is often the easiest method, requiring less effort than the simplex method.

While using this method, we plot all the inequalities imposed by the constraints of the given linear programming problem. Once all the inequalities of the given LPP are plotted on the XY graph, their common region gives the feasible region. The corner points of the feasible region are then found, the objective function is evaluated at each of them, and comparing these values gives the optimum solution of the LPP.

Example: Find the maximal and minimal value of Z = 6x + 9y when the constraint conditions are:

  • 2x + 3y ≤ 12
  • x + y ≤ 5
  • x, y ≥ 0
Step 1: Convert the inequations into equations: 2x + 3y = 12 and x + y = 5.
Step 2: Find the points at which 2x + 3y = 12 and x + y = 5 cut the x-axis and y-axis. To find the x-axis intersection, put y = 0 in the respective equation; similarly, for the y-axis intersection, put x = 0.
Step 3: Draw the two lines. They intersect each other at (3, 2).
Step 4: For x ≥ 0 and y ≥ 0, the feasible region is the area enclosed by the two axes and both lines, including the origin. Its corner points are (0, 0), (5, 0), (3, 2) and (0, 4).
Step 5: Evaluate Z at each corner point:
Coordinates | Z = 6x + 9y
(0, 0) | Z = 0
(5, 0) | Z = 30
(3, 2) | Z = 36
(0, 4) | Z = 36
Hence, Z = 6x + 9y attains its maximum value 36 at (3, 2) and (0, 4), and its minimum value 0 at (0, 0).
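The corner-point procedure can be automated for this two-variable example. The sketch below (our own illustration, taking the constraints to be 2x + 3y ≤ 12 and x + y ≤ 5, the second line appearing in the worked steps) intersects every pair of boundary lines, keeps only the feasible intersections, and evaluates Z = 6x + 9y at each; axis intercepts such as (0, 5) are discarded because they violate a constraint.

```python
from itertools import combinations

# Boundary lines a*x + b*y = c for: 2x + 3y <= 12, x + y <= 5, x >= 0, y >= 0
lines = [(2, 3, 12), (1, 1, 5), (1, 0, 0), (0, 1, 0)]

def intersect(l1, l2):
    """Intersection point of two lines, or None if they are parallel."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1*b2 - a2*b1
    if det == 0:
        return None
    return ((c1*b2 - c2*b1) / det, (a1*c2 - a2*c1) / det)

def feasible(p):
    x, y = p
    return 2*x + 3*y <= 12 and x + y <= 5 and x >= 0 and y >= 0

# Corner points = feasible intersections of pairs of boundary lines.
corners = {p for l1, l2 in combinations(lines, 2)
           if (p := intersect(l1, l2)) and feasible(p)}
Z = {p: 6*p[0] + 9*p[1] for p in corners}
# corners == {(0,0), (5,0), (3,2), (0,4)}; max Z = 36, min Z = 0
```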

Linear Programming Applications

Linear programming has applications in various fields. It is used to find the minimum cost of a process when all the constraints of the problem are given, to minimise transportation costs, and so on. Various applications of linear programming are:

Engineering Industries

Engineering Industries use linear programming to solve design and manufacturing problems and to get the maximum output from a given condition.

Manufacturing Industries

Manufacturing Industries use linear programming to maximize the profit of the companies and to reduce the manufacturing cost.

Energy Industries

Energy companies use linear programming to optimize their production output.

Transportation Industries

Linear programming is also used in transportation industries to find the path to minimize the cost of transportation.

Importance of Linear Programming

Linear programming has huge importance in various industries; it maximizes the output value while minimizing the input values according to various constraints.

LP is highly applicable when we have multiple conditions while solving a problem and we have to optimize the output of the problem i.e. either we have to find the minimum or the maximum value according to a given condition.


Problem 1: A company manufactures and sells two types of products, A and B. Each unit of A yields a profit of Rs. 40 and each unit of B a profit of Rs. 50. Each unit of A requires 3 units of the first resource and 1 unit of the second, while each unit of B requires 1 unit of the first resource and 2 units of the second; 9 units of the first resource and 8 units of the second are available per month. How many units of each product should the company manufacture to earn the maximum profit from its monthly sales?

Solution: Let x = number of units of type A and y = number of units of type B.

Maximize Z = 40x + 50y
subject to the constraints
3x + y ≤ 9
x + 2y ≤ 8
and x, y ≥ 0.

Consider the equations 3x + y = 9 (through (3, 0) and (0, 9)) and x + 2y = 8 (through (8, 0) and (0, 4)), which intersect at (2, 3). Now determine the maximum value of Z by evaluating it at the four vertices of the feasible region:

Vertices | Z = 40x + 50y
(0, 0) | Z = 40 × 0 + 50 × 0 = Rs. 0
(3, 0) | Z = 40 × 3 + 50 × 0 = Rs. 120
(0, 4) | Z = 40 × 0 + 50 × 4 = Rs. 200
(2, 3) | Z = 40 × 2 + 50 × 3 = Rs. 230

Maximum profit Z = Rs. 230.
∴ The number of units of type A is 2 and the number of units of type B is 3.

Problem 2: Maximize Z = 3x + 4y,

subject to the constraints x + y ≤ 450, 2x + y ≤ 600 and x, y ≥ 0.

Solution: From constraint (1), x + y = 450: putting x = 0 gives y = 450, and putting y = 0 gives x = 450. From constraint (2), 2x + y = 600: putting x = 0 gives y = 600, and putting y = 0 gives x = 300. The two lines intersect at (150, 300). Evaluating Z = 3x + 4y at the corner points of the feasible region:

Vertices | Z = 3x + 4y
(0, 0) | Z = 0
(300, 0) | Z = 900
(150, 300) | Z = 1650
(0, 450) | Z = 1800

Therefore, the optimal solution is the maximum Z = 1800 at the coordinates x = 0 and y = 450. The graph is given below.

LPP Graph for Z = 3x + 4y
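Problem 2 can be checked numerically; in the sketch below, the intersection point (150, 300) of the two constraint lines is derived for illustration (the text lists only the intercepts):

```python
# Feasible-region corner points for x + y <= 450, 2x + y <= 600, x, y >= 0
corners = [(0, 0), (300, 0), (150, 300), (0, 450)]
for x, y in corners:
    assert x + y <= 450 and 2*x + y <= 600 and x >= 0 and y >= 0  # each is feasible
Z = {p: 3*p[0] + 4*p[1] for p in corners}
assert max(Z.values()) == 1800   # attained at (0, 450)
```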

Up-to-Date Applications of Linear Programming

Linear programming, a powerful mathematical technique, is used to solve optimization problems in various industries. Here are some modern applications:

  • Supply Chain Optimization: Linear programming helps companies minimize costs and maximize efficiency in their supply chains. It is used to determine the most cost-effective transportation routes, warehouse operations, and inventory management strategies.
  • Energy Management: In the energy sector, linear programming is used to optimize the mix of energy production methods, balancing traditional energy sources with renewable ones to reduce costs and environmental impact while meeting demand.
  • Telecommunications Network Design: Linear programming aids in designing efficient telecommunications networks. It helps in allocating bandwidth, designing network layouts, and optimizing the flow of data to ensure high-speed communication at lower cost.
  • Financial Planning: Businesses and financial analysts use linear programming for portfolio optimization, risk management, and capital budgeting. It helps in making investment decisions that maximize returns while minimizing risk.
  • Healthcare Logistics: In healthcare, linear programming is applied to optimize the allocation of resources such as hospital beds, medical staff, and equipment. It is crucial for improving patient care, reducing wait times, and managing costs effectively.
  • Manufacturing Process Optimization: Linear programming is used to determine the optimal production levels for multiple products within a manufacturing facility, considering constraints like labor, materials, and machine availability.
  • Agricultural Planning: Farmers and agricultural planners use linear programming to decide on crop selection, land use, and resource allocation to maximize yields and profits while conserving resources.
  • Airline Crew Scheduling: Airlines employ linear programming to schedule crews efficiently, ensuring that flights are staffed in compliance with regulations while minimizing operational costs.

These applications demonstrate the versatility and power of linear programming in solving complex optimization problems across various sectors, showcasing its relevance in today’s data-driven world.

  • Core Tool: Linear programming is a foundational tool in operations research for optimizing resources.
  • Decision Making: It helps in making the best decisions about resource allocation, maximizing profits, or minimizing costs.
  • Wide Applications: It is used in fields such as logistics, manufacturing, finance, and healthcare to solve complex problems.
  • Modeling Real-World Problems: It transforms real-world problems into mathematical models to find the most efficient solutions.
  • Optimization Algorithm: The simplex method is a powerful algorithm used in linear programming to find the optimal solution subject to linear constraints.
  • Step-by-Step Approach: It iteratively moves toward the best solution by navigating the edges of the feasible region defined by the constraints.
  • Efficiency: It is known for its efficiency in solving large-scale linear programming problems.
  • Versatility: It is applicable in domains like diet planning, network flows, and production scheduling, showcasing its versatility.
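The simplex iteration described above can be sketched compactly. The following is a minimal tableau implementation, assuming the problem is in the standard form "maximize cᵀx subject to Ax ≤ b, x ≥ 0" with b ≥ 0 (so the slack basis is immediately feasible); it is an expository sketch, not a production solver (no anti-cycling rule, no degeneracy handling).

```python
def simplex(c, A, b):
    """Maximize c^T x subject to A x <= b, x >= 0, with b >= 0.
    Returns the optimal objective value."""
    m, n = len(A), len(c)
    # Build the tableau [A | I | b] plus the objective row [-c | 0 | 0].
    T = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] + [b[i]]
         for i, row in enumerate(A)]
    T.append([-x for x in c] + [0.0] * m + [0.0])
    while True:
        # Entering variable: most negative objective-row coefficient.
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= -1e-9:
            return T[-1][-1]  # no improving direction: optimal value
        # Leaving variable: minimum ratio test over positive entries.
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 1e-9]
        if not ratios:
            raise ValueError("unbounded LP")
        _, piv_row = min(ratios)
        # Pivot: normalize the row, then eliminate the column elsewhere.
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m + 1):
            if i != piv_row and abs(T[i][piv_col]) > 1e-12:
                f = T[i][piv_col]
                T[i] = [v - f * w for v, w in zip(T[i], T[piv_row])]

# Problem 1 from above: maximize 40x + 50y s.t. 3x + y <= 9, x + 2y <= 8
print(simplex([40.0, 50.0], [[3.0, 1.0], [1.0, 2.0]], [9.0, 8.0]))  # → 230.0
```

Each pivot moves from one vertex of the feasible polyhedron to an adjacent one with a better objective value, which is exactly the "navigating the edges" described above.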

Linear Programming – FAQs

What is linear programming?

Linear programming is a mathematical technique used to optimize a linear objective subject to a set of linear constraints. Using linear programming, we can find the optimum output of the given problem.

What are Linear Programming Problems?

Linear Programming Problems (LPP) are problems that seek the optimum (maximum or minimum) value of a linear objective function subject to the given conditions.

What is Linear Programming Formula?

The general linear programming formulas are:

Objective function: Z = ax + by
Constraints: px + qy ≤ r, sx + ty ≤ u
Non-negative restrictions: x ≥ 0, y ≥ 0

What are the different types of Linear Programming?

The different linear programming methods are:

  • Linear programming by the simplex method
  • Linear programming by the R method
  • Linear programming by the graphical method

What are requirements of Linear Programming?

The requirements of a linear programming problem are:

  • Linearity
  • An objective function
  • Constraints
  • Non-negativity

What are the advantages of Linear Programming?

The advantages of linear programming are:

  • It provides the optimal solution to a given linear problem.
  • It is easy to use and gives consistent results.
  • It helps to maximize profits and reduce input costs.

Please Login to comment...

Similar reads.

  • Math-Concepts
  • Maths-Class-12
  • Technical Scripter 2020
  • School Learning
  • Technical Scripter

Improve your Coding Skills with Practice


What kind of Experience do you want to share?


  1. V4-10. Linear Programming. The Duality Theorem. part 4

    linear programming duality solved examples

  2. Dual Programming Part 3: Writing the Dual Programming of a Linear Programming Problem

    linear programming duality solved examples

  3. Duality in Linear Programming

    linear programming duality solved examples

  4. The essence of duality

    linear programming duality solved examples

  5. Solved Problem 2: (Linear Programming Duality 30 points]

    linear programming duality solved examples

  6. Duality in linear programming

    linear programming duality solved examples


  1. #5 Duality

  2. Linear Programming

  3. 6

  4. Lecture 7 Linear Programming Duality

  5. 18

  6. V4-07. Linear Programming. The Duality Theorem


  1. PDF Lecture 6 1 The Dual of Linear Program

    In which we introduce the theory of duality in linear programming. 1 The Dual of Linear Program Suppose that we have the following linear program in maximization standard form: maximize x 1 + 2x 2 + x 3 + x 4 subject to x 1 + 2x 2 + x 3 2 x 2 + x 4 1 x 1 + 2x 3 1 x 1 0 x 2 0 x 3 0 (1) and that an LP-solver has found for us the solution x 1:= 1 ...

  2. PDF Duality in Linear Programming 4

    Duality in Linear Programming 4 In the preceding chapter on sensitivity analysis, we saw that the shadow-price interpretation of the optimal ... In Chapter 2, the example was solved in detail by the simplex method, resulting in the final tableau, repeated here as Tableau 2. 130. 4.1 A Preview of Duality 131 Tableau 1 Basic Current

  3. PDF LinearProgrammingII-Duality

    LinearProgrammingII-Duality In this lecture we discuss the general notion of Linear Programming Duality, a powerful tool that can allow us to solve some linear programs easier, gain theoretical insights into the proper-ties of a linear program, and has many more applications that we might see later in the course.

  4. PDF Linear Programming: Chapter 5 Duality

    Linear Programming: Chapter 5 Duality Robert J. Vanderbei October 17, 2007 ... Example of in nite gap: maximize 2x 1 x 2 subject to x 1 x 2 1 x 1 + x 2 2 x 1; x 2 0: ... By Strong Duality Theorem, the inequalities are equalities at optimality. Dual Simplex Method When: dual feasible, primal infeasible (i.e., pinks on the left, not on top). ...

  5. PDF Duality in Linear Programming

    Duality in Linear Programming Defn. Consider the linear programming problem (in standard form): maximize cT x subject to A x ≤ b and x ≥ 0, The dual of this LP problem is the LP minimization problem: minimize yT b subject to yTA ≥ cT and y ≥ 0. These two LP problems are said to be duals of each other.

  6. PDF 1Linear Programming Duality

    Lecture #19: LP Duality last changed: November 2, 2017 In this lecture we discuss the general notion of Linear Programming Duality, a powerful tool that you should de nitely master. 1Linear Programming Duality Consider the following LP P = max(2x 1 + 3x 2) s.t. 4x 1 + 8x 2 12 2x 1 + x 2 3 3x 1 + 2x 2 4 x 1;x 2 0 (1) 4x 1+ 8x 2 12 2x 1+x 2 3 3x ...

  7. PDF Linear Programming Duality and Algorithms

    Linear Programming Duality and Algorithms. Lecturer: Debmalya Panigrahi Scribe: Tianqi Song. 1 Overview. In this lecture, we will cover more examples of linear programming and introduce linear program- ming duality. We will also present several algorithms for solving linear programs.1. 2 Formulate Problems as Linear Programs. 2.1 Maximum Matching.

  8. PDF Lecture 15: Linear Programming

    Lecture 15: Linear Programming. Linear programming (LP) is a method to achieve the optimum outcome under some requirements represented by linear relationships. More precisely, LP can solve the problem of maximizing or minimizing a linear objective function subject to some linear constraints. In general, the standard form of LP consists of ...

  9. Lecture 15

    This lecture covers weak and strong duality, and also explains the rules for finding the dual of a linear program, with an example. Before we move on to duality, we shall first see some general facts about the location of the optima of a linear program. Structure of LP solutions Some intuition in two dimensions. Consider a linear program -

  10. PDF 28 Duality in Linear Programming

    a minimum cut, is an example of a non-trivial and useful duality statement. 3. Network Flow: There are a number of practical problems related to flow of commodities or fluids in networks, which can be phrased as linear programs. A network is another name for a graph. Suppose one has a graph and in it a source vertex and a sink vertex.

  11. PDF 12.1 Linear Programming Duality

    to actually solve an LP relaxation, leading to e cient algorithms that are purely combinatorial. We will apply this technique to the Vertex Cover and Steiner Forest problems. The primal-dual method in the context of approximation algorithms was rst used by Goemans and Williamson [1]. 12.1 Linear Programming Duality Consider the general linear ...

  12. PDF Lecture notes 5: Duality in applications

    Lecture notes 5: Duality in applications. Vincent Conitzer. We have already seen how to take the dual of a linear program in general form. However, when we are solving a problem using linear programming, it can be very enlightening to take the dual of the linear program for that particular problem. Typically, in the context of the problem under ...

  13. PDF Lecture 18: Linear Programming Relaxation, Duality and Applications

    1-4 Lecture 18: Linear Programming Relaxation, Duality and Applications 1.2 The LP Duality Before introducing the concept of the duality, a simple example is rstly shown as below. min 2x 1 + 3x 2 s.t., x 1 + 2x 2 1 3x 1 + 2x 2 2 x 1;x 2 0 (1.8) By multiplying y 1 and y 2 separately to the two constraints and sum them up, then we get (y 1 + 3y 2 ...

  14. PDF Duality in Linear Programming

    Linear Programming Duality: Motivation •Yields another linear program: the dual linear program: min 12w1 − 3w2 s.t. 3w1 − w2 ≥ 1 2w1 ≥ 2 w1 − w2 ≥ −1 w1, w2 ≥ 0 •To distinguish, the original linear program will be called the primal •Interestingly, the dual optimal solution is also 6 •In fact this is guaranteed to hold and, what is more, there ...

  15. PDF Linear Programming, Lagrange Multipliers, and Duality

    mentions one algorithm for linear programming, that algorithm is not new, ... of the log-barrier method for solving linear programs. lp.nb 2. Lagrange Multipliers ... linear subspace of —n is an example of a flat cone. The following picture shows another flat cone, along with its dual (which is not flat).

  16. Duality In Linear Programming

    The duality theorem in linear programming states that for every linear programming problem, there exists another linear programming problem related to it and therefore, can be derived from it. Following are the duality theorem: Sometimes, initial feasible solution to the dual is easier. Sometimes, solving the dual is easier.

  17. Dual problem: Linear Programming, Duality Examples, Primal

    0. 15. 5/2. The optimal solution is: w 1 = 3/8, w 2 = 3/4. z = 40 X 3/8 + 50 X 3/4= 105/2. In case of primal problem, you noted that the values of z j -c j under the surplus variables x 3 and x 4 were 3/8 and 3/4. In case of dual problem, these values are the optimal values of dual variables w 1 and w 2.

  18. Dual linear program

    The duality theorems. Below, suppose the primal LP is "maximize c T x subject to [constraints]" and the dual LP is "minimize b T y subject to [constraints]".. Weak duality. The weak duality theorem says that, for each feasible solution x of the primal and each feasible solution y of the dual: c T x ≤ b T y.In other words, the objective value in each feasible solution of the dual is an upper ...

  19. PDF Lecture 12 Linear programming : Duality in LPP

    Linear programming : Duality in LPP 1. AT W ≥ CT and W ≥ 0 12.2 Important characteristics of Duality 1. Dual of dual is primal ... minimum gains. However, if one problem is solved, the solution for other also can be obtained from the simplex tableau. 5. When a problem does not yield any solution in primal, it can be verified with dual ...

  20. Linear programming

    Duality theory tells us that if the primal is unbounded then the dual is infeasible by the weak duality theorem. Likewise, if the dual is unbounded, then the primal must be infeasible. However, it is possible for both the dual and the primal to be infeasible. ... Solve example Linear Programming (LP) problems through MATLAB, Python, or a web ...

  21. Linear Programming: Definition, Formula, Examples, Problems

    Step 1: Mark the decision variables in the problem. Step 2: Build the objective function of the problem and check if the function needs to be minimized or maximized. Step 3: Write down all the constraints of the linear problems. Step 4: Ensure non-negative restrictions of the decision variables.