Definition¶
Problems have to be defined, and some information has to be provided. In contrast to other frameworks, we do not share the opinion that defining a problem by a single function is the most convenient approach. In pymoo,
the problem is defined by an object that contains some metadata, for instance the number of objectives, the number of constraints, and the lower and upper bounds in the design space. These attributes are supposed to be defined in the constructor, and thus by overriding the __init__
method.
Argument      Description

n_var         Integer value representing the number of design variables.
n_obj         Integer value representing the number of objectives.
n_constr      Integer value representing the number of constraints.
xl            Float or np.ndarray of length n_var representing the lower bounds of the design variables.
xu            Float or np.ndarray of length n_var representing the upper bounds of the design variables.
type_var      (optional) A type hint for the user what variable should be optimized.
Moreover, in pymoo there exist three different ways of defining a problem:

Overview

Problem: Object-oriented definition of Problem, which implements a method evaluating a set of solutions.
ElementwiseProblem: Object-oriented definition of ElementwiseProblem, which implements a function evaluating a single solution at a time.
FunctionalProblem: Define a problem by using a function for each objective and constraint.
Problem (vectorized)¶
The majority of optimization algorithms implemented in pymoo are population-based, which means that more than one solution is evaluated in each generation. This is ideal for parallelizing function evaluations. Thus, the default definition of a problem receives a set of solutions to be evaluated. The actual function evaluation takes place in the _evaluate
method, which aims to fill the out
dictionary with the corresponding data. The function values are supposed
to be written into out["F"]
and the constraints into out["G"]
if n_constr
is greater than zero.
Tip
How the objective and constraint values are calculated is irrelevant from pymoo's point of view. Whether it is a simple mathematical equation or a discrete-event simulation, you only have to ensure that for each input the corresponding values are set.
The example below shows a modified Sphere problem with a radial constraint located at the center. The problem consists of 10 design variables, one objective, and one constraint, and each variable is bounded between 0 and 1.
[1]:
import numpy as np
from pymoo.core.problem import Problem

class SphereWithConstraint(Problem):

    def __init__(self):
        super().__init__(n_var=10, n_obj=1, n_constr=1, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = np.sum((x - 0.5) ** 2, axis=1)
        out["G"] = 0.1 - out["F"]
Assuming the algorithm being used requests the evaluation of a set of 100 solutions, the input NumPy matrix x
will be of shape (100, 10)
. Please note that the two-dimensional matrix is summed along the second axis (axis=1), which results in a vector of length 100 for out["F"]
. Thus, NumPy performs a vectorized operation on a matrix to speed up the evaluation.
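The shapes involved can be checked with plain NumPy, independent of pymoo; the array x below simply stands in for the population matrix the algorithm would pass to _evaluate:

```python
import numpy as np

# A population of 100 solutions, each with 10 design variables,
# as it would arrive in _evaluate.
x = np.random.random((100, 10))

# Summing the squared deviations along axis=1 collapses each row
# to a single objective value: one entry per solution.
F = np.sum((x - 0.5) ** 2, axis=1)

# Constraint values derived from F, also one per solution.
G = 0.1 - F

print(F.shape, G.shape)  # (100,) (100,)
```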
ElementwiseProblem (loop)¶
[2]:
import numpy as np
from pymoo.core.problem import ElementwiseProblem

class ElementwiseSphereWithConstraint(ElementwiseProblem):

    def __init__(self):
        xl = np.zeros(10)
        xl[0] = -5.0

        xu = np.ones(10)
        xu[0] = 5.0

        super().__init__(n_var=10, n_obj=1, n_constr=2, xl=xl, xu=xu)

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = np.sum((x - 0.5) ** 2)
        out["G"] = np.column_stack([0.1 - out["F"], out["F"] - 0.5])
Regardless of how many solutions are asked to be evaluated, the _evaluate
function receives a vector of length 10, and it will be called once for each solution. When implementing an elementwise problem, the parallelization available in pymoo using processes or threads can be used directly. Moreover, note that the problem above uses a vector definition for the lower and upper bounds (xl
and xu
) because the first variable covers
a different range of values.
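Conceptually, the elementwise variant is equivalent to looping over the rows of the population matrix yourself. The NumPy-only sketch below mimics that behavior (it does not use the pymoo API; evaluate_one is a hypothetical helper standing in for _evaluate):

```python
import numpy as np

def evaluate_one(x):
    # x is a single solution: a vector of length 10.
    f = np.sum((x - 0.5) ** 2)
    g = np.array([0.1 - f, f - 0.5])
    return f, g

# pymoo calls _evaluate once per solution; an equivalent plain loop:
X = np.random.random((100, 10))
results = [evaluate_one(x) for x in X]

# Stack the per-solution results back into population-level arrays.
F = np.array([f for f, _ in results])
G = np.array([g for _, g in results])

print(F.shape, G.shape)  # (100,) (100, 2)
```

Because each call is independent, the loop body is exactly the unit of work that pymoo can dispatch to a process or thread pool.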
FunctionalProblem (loop)¶
Another way of defining a problem is through functions. On the one hand, many function calls need to be performed to evaluate a set of solutions, but on the other hand, it is a very intuitive way of defining a problem.
[3]:
import numpy as np
from pymoo.problems.functional import FunctionalProblem

objs = [
    lambda x: np.sum((x - 2) ** 2),
    lambda x: np.sum((x + 2) ** 2)
]

constr_ieq = [
    lambda x: np.sum((x - 1) ** 2)
]

n_var = 10

problem = FunctionalProblem(n_var,
                            objs,
                            constr_ieq=constr_ieq,
                            xl=np.array([-10, -5, -10]),
                            xu=np.array([10, 5, 10])
                            )
F, CV = problem.evaluate(np.random.rand(3, 10))
print(f"F: {F}\n")
print(f"CV: {CV}")
F: [[20.9946496 66.62683195]
[24.27320351 61.3111027 ]
[25.11442612 59.87052714]]
CV: [[2.40269519]
[3.53267831]
[3.80345138]]
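Internally, the functional definition amounts to applying each objective and constraint function to every solution in turn. A minimal NumPy-only sketch of that behavior (not the actual FunctionalProblem implementation):

```python
import numpy as np

objs = [
    lambda x: np.sum((x - 2) ** 2),
    lambda x: np.sum((x + 2) ** 2)
]
constr_ieq = [
    lambda x: np.sum((x - 1) ** 2)
]

X = np.random.rand(3, 10)

# One row per solution, one column per objective / constraint.
F = np.array([[f(x) for f in objs] for x in X])
G = np.array([[g(x) for g in constr_ieq] for x in X])

print(F.shape, G.shape)  # (3, 2) (3, 1)
```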
Add Known Optima¶
If the optimum for a problem is known, this can be directly defined in the Problem
class. Below, an example shows the test problem ZDT1
, where the Pareto-front has been analytically derived and discussed in the paper. Thus, the _calc_pareto_front
method returns the Pareto-front.
[4]:
import autograd.numpy as anp  # autograd's NumPy wrapper, used throughout pymoo's test problems

from pymoo.core.problem import Problem

class ZDT1(Problem):

    def __init__(self, n_var=30, **kwargs):
        super().__init__(n_var=n_var, n_obj=2, n_constr=0, xl=0, xu=1, type_var=anp.double, **kwargs)

    def _calc_pareto_front(self, n_pareto_points=100):
        x = anp.linspace(0, 1, n_pareto_points)
        return anp.array([x, 1 - anp.sqrt(x)]).T

    def _evaluate(self, x, out, *args, **kwargs):
        f1 = x[:, 0]
        g = 1 + 9.0 / (self.n_var - 1) * anp.sum(x[:, 1:], axis=1)
        f2 = g * (1 - anp.power((f1 / g), 0.5))
        out["F"] = anp.column_stack([f1, f2])
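The analytical Pareto-front of ZDT1 follows from setting x_2, ..., x_n to zero, which gives g = 1 and hence f2 = 1 - sqrt(f1). It can be reproduced with plain NumPy, mirroring what _calc_pareto_front returns:

```python
import numpy as np

# On the Pareto-front of ZDT1 all variables except the first are zero,
# so g = 1 and f2 reduces to 1 - sqrt(f1).
f1 = np.linspace(0, 1, 100)
f2 = 1 - np.sqrt(f1)
pf = np.column_stack([f1, f2])

print(pf.shape)  # (100, 2)
# The front runs from (0, 1) to (1, 0).
print(pf[0], pf[-1])
```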
Automatic Differentiation (Autograd)¶
Not available yet.