GradientsΒΆ

If the problem is implemented using autograd, the gradients obtained through automatic differentiation are available out of the box. Let us consider the following problem definition for a simple quadratic function without any constraints:

[1]:
import autograd.numpy as anp

from pymoo.core.problem import Problem
from pymoo.problems.autodiff import AutomaticDifferentiation

class MyProblem(Problem):

    def __init__(self):
        super().__init__(n_var=10, n_obj=1, n_constr=0, xl=-5, xu=5)

    def _evaluate(self, x, out, *args, **kwargs):
         out["F"] = anp.sum(anp.power(x, 2), axis=1)


problem = AutomaticDifferentiation(MyProblem())

The gradients can be retrieved by appending dF to the return_values_of parameter:

[2]:
X = anp.array([anp.arange(10)])
F, dF = problem.evaluate(X, return_values_of=["F", "dF"])

The resulting gradients are stored in dF, and the shape is (n_rows, n_objectives, n_vars):

[3]:
print(X, F)

print(dF.shape)
print(dF)
[[0 1 2 3 4 5 6 7 8 9]] [[285.]]
(1, 1, 10)
[[[ 0.  2.  4.  6.  8. 10. 12. 14. 16. 18.]]]
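
As a quick sanity check (not part of the original example), the analytic gradient of the quadratic objective, the sum of x_i^2, is 2*x, so dF should simply contain twice the input values:

[4]:
import numpy as np

# the analytic gradient of sum(x_i ** 2) is 2 * x
assert np.allclose(dF[0, 0], 2 * X[0])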

Analogously, the gradients of the constraints can be retrieved by appending dG to return_values_of.
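
The sketch below illustrates this for a problem with a single constraint; the problem class and the linear constraint are made up for demonstration, but the wrapping with AutomaticDifferentiation and the call to evaluate follow the same pattern as above. The resulting dG has the shape (n_rows, n_constraints, n_vars):

[5]:
import autograd.numpy as anp

from pymoo.core.problem import Problem
from pymoo.problems.autodiff import AutomaticDifferentiation


class MyConstrainedProblem(Problem):

    def __init__(self):
        super().__init__(n_var=10, n_obj=1, n_constr=1, xl=-5, xu=5)

    def _evaluate(self, x, out, *args, **kwargs):
        # quadratic objective as before
        out["F"] = anp.sum(anp.power(x, 2), axis=1)
        # illustrative linear constraint: the sum of the variables must not exceed 10
        out["G"] = anp.sum(x, axis=1) - 10


problem = AutomaticDifferentiation(MyConstrainedProblem())

X = anp.array([anp.arange(10)])
G, dG = problem.evaluate(X, return_values_of=["G", "dG"])

# for a linear constraint every partial derivative equals 1
print(dG.shape)
print(dG)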