Callback
A Callback class can be used to receive a notification of the algorithm object each generation. This can be useful to keep track of metrics, to perform additional calculations, or even to modify the algorithm object during the run. The latter is recommended for experienced users only.
For instance, to keep track of the best solution each generation:
[1]:
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.algorithms.so_genetic_algorithm import GA
from pymoo.factory import get_problem
from pymoo.model.callback import Callback
from pymoo.optimize import minimize
import numpy as np
import matplotlib.pyplot as plt
class MyCallback(Callback):

    def __init__(self) -> None:
        super().__init__()
        self.data["best"] = []

    def notify(self, algorithm):
        self.data["best"].append(algorithm.pop.get("F").min())


problem = get_problem("sphere")

algorithm = GA(pop_size=100, callback=MyCallback())

res = minimize(problem,
               algorithm,
               ('n_gen', 20),
               seed=1,
               save_history=True,
               verbose=True)

val = res.algorithm.callback.data["best"]
plt.plot(np.arange(len(val)), val)
plt.show()
===========================================
n_gen |  n_eval |        favg |        fopt
===========================================
    1 |     100 | 0.831497479 | 0.387099336
    2 |     200 | 0.578035582 | 0.302189349
    3 |     300 | 0.443801185 | 0.267733594
    4 |     400 | 0.347200983 | 0.188215259
    5 |     500 | 0.272644726 | 0.083479177
    6 |     600 | 0.212567874 | 0.083479177
    7 |     700 | 0.173574163 | 0.072492126
    8 |     800 | 0.140740462 | 0.051256476
    9 |     900 | 0.110370322 | 0.041778020
   10 |    1000 | 0.089125798 | 0.041778020
   11 |    1100 | 0.071339910 | 0.031644566
   12 |    1200 | 0.057941249 | 0.030055810
   13 |    1300 | 0.047786695 | 0.021855327
   14 |    1400 | 0.040676540 | 0.017620999
   15 |    1500 | 0.034902705 | 0.014756395
   16 |    1600 | 0.029778240 | 0.014756395
   17 |    1700 | 0.026185115 | 0.012976416
   18 |    1800 | 0.022664820 | 0.008637920
   19 |    1900 | 0.019648350 | 0.006399439
   20 |    2000 | 0.016725603 | 0.006399439
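The callback above only reads data from the algorithm. As mentioned before, a callback can also modify the algorithm object while it is running, which is recommended for experienced users only. The following is a minimal sketch, assuming the GA exposes its mutation operator as algorithm.mutation with a prob attribute (attribute names may differ between pymoo versions); MutationDecay is a hypothetical name used only for illustration:

from pymoo.model.callback import Callback


class MutationDecay(Callback):

    def __init__(self, decay=0.95) -> None:
        super().__init__()
        self.decay = decay

    def notify(self, algorithm):
        # shrink the mutation probability a little each generation
        # (assumes `algorithm.mutation.prob` exists and has already been set)
        mutation = getattr(algorithm, "mutation", None)
        if mutation is not None and getattr(mutation, "prob", None) is not None:
            mutation.prob *= self.decay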

If the analysis of the run is supposed to be done during post-processing, the save_history option can be used instead; if a callback is used, the history does not need to be saved. With the history object, the same result as above can be obtained from the information stored during the run:
[2]:
val = [e.pop.get("F").min() for e in res.history]
plt.plot(np.arange(len(val)), val)
plt.show()

If save_history is set to True, a deep copy of the algorithm object is made each generation. Please note that this can be quite expensive and might not be desired for every run. However, it offers great post-processing options because all data from each generation remain accessible afterwards.
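For example, since a full copy of the algorithm is stored per generation, more than just the best objective value can be retrieved afterwards. A small sketch, assuming each history entry exposes an evaluator with an n_eval counter (the source of the n_eval column in the verbose output above):

# number of function evaluations and best objective value per generation,
# extracted from the deep copies stored in res.history
n_evals = [e.evaluator.n_eval for e in res.history]
f_best = [e.pop.get("F").min() for e in res.history]

plt.plot(n_evals, f_best)
plt.xlabel("Function Evaluations")
plt.ylabel("Best F")
plt.show()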