main module

main.run(device, opt_type, learning_rate, train_batch_size, eval_batch_size, train_epochs, momentum, w_decay, betas, save_model, _seed, _run)

Main function of the continual learning framework. It uses the Avalanche library to create continual learning scenarios and Sacred to store and track the experiments.

Note

The Avalanche library is built around a few core concepts that enable continual learning with PyTorch:

  • Strategies : Strategies model the PyTorch training loop. One can thus create strategies for special loop cycles and algorithms.

  • Scenarios : A particular setting that a continual learning algorithm will face, i.e. the specifics of the continual stream of data. For example, we can have class-incremental or task-incremental scenarios.

  • Plugins : A module designed to augment a regular continual strategy with custom behavior, for example adding evaluators or enabling replay learning.

For more detailed information about this library, check out the Avalanche website and its API documentation.
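The three concepts above can be sketched in plain Python. This is a conceptual illustration of how a scenario, a strategy, and plugins fit together, not the actual Avalanche API (all class and function names here are invented for illustration):

```python
# Conceptual sketch (NOT the real Avalanche API): a Scenario yields a stream
# of experiences, a Strategy runs the training loop on each one, and Plugins
# hook into the loop with custom behavior.

class Plugin:
    """Augments a strategy with custom behavior via callbacks."""
    def before_experience(self, strategy, experience): pass
    def after_experience(self, strategy, experience): pass

class LoggingPlugin(Plugin):
    """Example plugin: records which experiences were processed."""
    def __init__(self):
        self.seen = []
    def after_experience(self, strategy, experience):
        self.seen.append(experience["id"])

class Strategy:
    """Models the training loop; plugins are invoked around each experience."""
    def __init__(self, plugins=None):
        self.plugins = plugins or []
    def train(self, experience):
        for p in self.plugins:
            p.before_experience(self, experience)
        # ... the actual PyTorch training loop over experience["data"] goes here ...
        for p in self.plugins:
            p.after_experience(self, experience)

def class_incremental_scenario(n_experiences):
    """A scenario is a stream of experiences, e.g. one introducing new classes each time."""
    for i in range(n_experiences):
        yield {"id": i, "data": []}  # placeholder data

logger = LoggingPlugin()
strategy = Strategy(plugins=[logger])
for experience in class_incremental_scenario(3):
    strategy.train(experience)

print(logger.seen)  # → [0, 1, 2]
```

In Avalanche itself the same pattern appears as benchmark streams of experiences, training strategies such as `Naive`, and plugins such as `EvaluationPlugin` or `ReplayPlugin`.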

Parameters:
  • device – Device on which to run the experiment (e.g. CPU or GPU)

  • opt_type (str) – Optimizer type

  • learning_rate (float) – Learning rate

  • train_batch_size (int) – Train mini batch size

  • eval_batch_size (int) – Eval mini batch size

  • train_epochs (int) – Number of training epochs on each experience

  • momentum (float) – Momentum value in the optimizer

  • w_decay (float) – Weight decay (L2 regularization) coefficient in the optimizer

  • betas – Beta coefficients for Adam-type optimizers

  • PolynomialHoldDecayAnnealing_schedule (bool) – Whether to enable the learning rate scheduler

  • save_model (bool) – Whether to save the model as an artifact

  • _seed (int) – Random seed generated by the Sacred experiment. This seed is shared by all the libraries capable of randomness

  • _run – Sacred runtime environment

Returns:

Top-1 average accuracy on the eval stream.

Return type:

float
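The PolynomialHoldDecayAnnealing_schedule flag above suggests a hold-then-decay learning rate schedule: hold the base learning rate for some steps, then decay it polynomially toward a minimum. A minimal sketch of such a schedule follows; the function name, parameters, and exact semantics are assumptions for illustration, not this framework's implementation:

```python
# Hedged sketch of a polynomial hold-decay annealing LR schedule (assumed
# semantics): constant LR during the hold phase, then polynomial decay from
# base_lr down to min_lr over the remaining steps.

def poly_hold_decay_lr(step, base_lr, max_steps, hold_steps=0, power=2.0, min_lr=0.0):
    if step < hold_steps:   # hold phase: keep the base learning rate
        return base_lr
    if step >= max_steps:   # past the schedule: stay at the floor
        return min_lr
    # polynomial decay phase
    frac = (max_steps - step) / (max_steps - hold_steps)
    return (base_lr - min_lr) * frac ** power + min_lr

# LR held at 0.1 for 2 steps, then decayed quadratically to 0 by step 10
lrs = [poly_hold_decay_lr(s, base_lr=0.1, max_steps=10, hold_steps=2) for s in range(11)]
```

Frameworks such as NVIDIA NeMo ship a scheduler of this name; in practice the held LR and decay power are tuned alongside `learning_rate` and `train_epochs`.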