In mathematics, a constraint is a condition of an optimization problem that the solution must satisfy. There are several types of constraints—primarily equality constraints, inequality constraints, and integer constraints. The set of candidate solutions that satisfy all constraints is called the feasible set.[1]
Example
The following is a simple optimization problem:
\( {\displaystyle \min f(\mathbf {x} )=x_{1}^{2}+x_{2}^{4}} \)
subject to
\( {\displaystyle x_{1}\geq 1} \)
and
\( {\displaystyle x_{2}=1,} \)
where \( \mathbf {x} \) denotes the vector \( (x_{1},x_{2}) \).
In this example, the first line defines the function to be minimized (called the objective function, loss function, or cost function). The second and third lines define two constraints, the first of which is an inequality constraint and the second of which is an equality constraint. These two constraints are hard constraints, meaning that it is required that they be satisfied; they define the feasible set of candidate solutions.
Without the constraints, the solution would be (0,0), where \( f({\mathbf x}) \) has the lowest value. But this solution does not satisfy the constraints. The solution of the constrained optimization problem stated above is \( {\displaystyle \mathbf {x} =(1,1)} \) , which is the point with the smallest value of \( f({\mathbf x}) \) that satisfies the two constraints.
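The worked example above can be checked numerically. The following sketch (assuming SciPy is available) solves the same problem with SciPy's `minimize`; note that SciPy's `"ineq"` convention means `fun(x) >= 0`, so \( x_{1}\geq 1 \) is written as \( x_{1}-1\geq 0 \).

```python
from scipy.optimize import minimize

# Objective function: f(x) = x1^2 + x2^4
def f(x):
    return x[0] ** 2 + x[1] ** 4

constraints = [
    {"type": "ineq", "fun": lambda x: x[0] - 1.0},  # x1 >= 1  ("ineq" means fun(x) >= 0)
    {"type": "eq",   "fun": lambda x: x[1] - 1.0},  # x2 = 1
]

# Start from the unconstrained minimizer (0, 0); SLSQP handles both
# equality and inequality constraints.
res = minimize(f, x0=[0.0, 0.0], method="SLSQP", constraints=constraints)
print(res.x, res.fun)  # converges to approximately (1, 1) with f = 2
```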
Terminology
If an inequality constraint holds with equality at the optimal point, the constraint is said to be binding, as the point cannot be varied in the direction of the constraint even though doing so would improve the value of the objective function.
If an inequality constraint holds as a strict inequality at the optimal point (that is, does not hold with equality), the constraint is said to be non-binding, as the point could be varied in the direction of the constraint, although it would not be optimal to do so. Under certain conditions, as for example in convex optimization, if a constraint is non-binding, the optimization problem would have the same solution even in the absence of that constraint.
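For an inequality constraint written in the form \( g(\mathbf{x})\geq 0 \), bindingness at a candidate point amounts to testing whether \( g \) vanishes there. A minimal sketch, using the example problem's optimum; the tolerance and the second, looser constraint \( x_{1}\leq 5 \) are illustrative assumptions, not part of the original problem:

```python
def is_binding(g, x, tol=1e-8):
    """An inequality constraint g(x) >= 0 is binding at x when g(x) = 0."""
    return abs(g(x)) <= tol

x_opt = (1.0, 1.0)  # optimum of the example problem above

g1 = lambda x: x[0] - 1.0   # x1 >= 1, from the example
g2 = lambda x: 5.0 - x[0]   # x1 <= 5, a hypothetical extra constraint

print(is_binding(g1, x_opt))  # True:  x1 = 1 holds with equality
print(is_binding(g2, x_opt))  # False: 5 - 1 = 4 > 0, non-binding
```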
If a constraint is not satisfied at a given point, the point is said to be infeasible.
Hard and soft constraints
If the problem mandates that the constraints be satisfied, as in the above discussion, the constraints are sometimes referred to as hard constraints. However, in some problems, called flexible constraint satisfaction problems, it is preferred but not required that certain constraints be satisfied; such non-mandatory constraints are known as soft constraints. Soft constraints arise in, for example, preference-based planning. In a MAX-CSP problem, a number of constraints are allowed to be violated, and the quality of a solution is measured by the number of satisfied constraints.
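In a MAX-CSP setting, each constraint is a predicate over an assignment, and solution quality is simply the count of satisfied predicates. A sketch under that reading (the particular constraints below are illustrative):

```python
def max_csp_score(assignment, constraints):
    """Number of constraints satisfied by the assignment (the MAX-CSP objective)."""
    return sum(1 for c in constraints if c(assignment))

# Three soft constraints over variables a and b (illustrative).
constraints = [
    lambda v: v["a"] != v["b"],
    lambda v: v["a"] + v["b"] <= 4,
    lambda v: v["b"] >= 3,
]

print(max_csp_score({"a": 1, "b": 3}, constraints))  # 3: satisfies all three
print(max_csp_score({"a": 2, "b": 2}, constraints))  # 1: only a + b <= 4 holds
```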
Global constraints
Global constraints[2] are constraints that express a specific relation over several variables taken together. Some of them, such as the alldifferent constraint, can be rewritten as a conjunction of atomic constraints in a simpler language: the alldifferent constraint holds on n variables \( {\displaystyle x_{1},\ldots ,x_{n}} \) and is satisfied if and only if the variables take pairwise distinct values. It is semantically equivalent to the conjunction of inequalities \( {\displaystyle x_{1}\neq x_{2},\;x_{1}\neq x_{3},\;\ldots ,\;x_{2}\neq x_{3},\;x_{2}\neq x_{4},\;\ldots ,\;x_{n-1}\neq x_{n}} \). Other global constraints extend the expressivity of the constraint framework; in this case, they usually capture a typical structure of combinatorial problems. For instance, the regular constraint expresses that a sequence of variables is accepted by a deterministic finite automaton.
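The decomposition of alldifferent into pairwise inequalities can be written directly. The sketch below checks the conjunction form alongside the equivalent set-based test:

```python
from itertools import combinations

def alldifferent(values):
    """alldifferent as the conjunction x_i != x_j over all pairs i < j."""
    return all(a != b for a, b in combinations(values, 2))

def alldifferent_set(values):
    """Semantically equivalent check: no value is repeated."""
    return len(set(values)) == len(values)

print(alldifferent([1, 4, 2, 3]))  # True:  all values pairwise distinct
print(alldifferent([1, 4, 2, 4]))  # False: x2 = x4 violates x2 != x4
```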
Global constraints are used[3] to simplify the modeling of constraint satisfaction problems, to extend the expressivity of constraint languages, and to improve constraint resolution: by considering the variables together, infeasible partial assignments can be detected earlier in the solving process. Many global constraints are collected in an online catalog.
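The regular constraint mentioned above can be sketched as ordinary DFA acceptance: a sequence of variable values satisfies the constraint exactly when running the automaton over the sequence ends in an accepting state. The particular automaton below, which accepts sequences containing an even number of 1s, is an illustrative assumption:

```python
def dfa_accepts(transitions, start, accepting, sequence):
    """Run a deterministic finite automaton over a sequence of values."""
    state = start
    for symbol in sequence:
        if (state, symbol) not in transitions:
            return False  # no transition defined for this symbol: reject
        state = transitions[(state, symbol)]
    return state in accepting

# DFA over {0, 1} that accepts sequences with an even number of 1s.
transitions = {
    ("even", 0): "even", ("even", 1): "odd",
    ("odd", 0): "odd",   ("odd", 1): "even",
}

print(dfa_accepts(transitions, "even", {"even"}, [1, 0, 1]))  # True:  two 1s
print(dfa_accepts(transitions, "even", {"even"}, [1, 0, 0]))  # False: one 1
```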
See also
Constraint algebra
Karush–Kuhn–Tucker conditions
Lagrange multipliers
Level set
Linear programming
Nonlinear programming
Restriction
Satisfiability modulo theories
References
Takayama, Akira (1985). Mathematical Economics (2nd ed.). New York: Cambridge University Press. p. 61. ISBN 0-521-31498-4.
Rossi, Francesca; Van Beek, Peter; Walsh, Toby (2006). "Chapter 7". Handbook of Constraint Programming (1st ed.). Amsterdam: Elsevier. ISBN 9780080463643. OCLC 162587579.
Rossi, Francesca (2003). Principles and Practice of Constraint Programming – CP 2003: 9th International Conference, CP 2003, Kinsale, Ireland, September 29 – October 3, 2003, Proceedings. Berlin: Springer-Verlag. ISBN 9783540451938. OCLC 771185146.
Further reading
Beveridge, Gordon S. G.; Schechter, Robert S. (1970). "Essential Features in Optimization". Optimization: Theory and Practice. New York: McGraw-Hill. pp. 5–8. ISBN 0-07-005128-3.