Project author: AvivNavon

Project description:
Official implementation of Auxiliary Learning by Implicit Differentiation [ICLR 2021]
Primary language: Python
Repository: git://github.com/AvivNavon/AuxiLearn.git
Created: 2020-06-07T19:05:59Z
Project community: https://github.com/AvivNavon/AuxiLearn

License: MIT License



AuxiLearn - Auxiliary Learning by Implicit Differentiation

This repository contains the source code to support the paper Auxiliary Learning by Implicit Differentiation, by Aviv Navon, Idan Achituve, Haggai Maron, Gal Chechik and Ethan Fetaya, ICLR 2021.




  1. Paper
  2. Project page

Installation

Please note: we encountered some issues and drops in performance when working with different PyTorch versions, so please install AuxiLearn in a clean virtual environment:

    python3 -m venv <venv>
    source <venv>/bin/activate

With the clean virtual environment activated, clone the repo and install:

    git clone https://github.com/AvivNavon/AuxiLearn.git
    cd AuxiLearn
    pip install .
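
As an optional sanity check (an assumption of this write-up, not an official install step), the optimizer wrapper used later in this README should now import cleanly:

    # should run without error in the fresh environment
    from auxilearn.optim import MetaOptimizer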

Usage

Given a bi-level optimization problem in which the upper-level parameters (i.e., the auxiliary parameters) only
implicitly affect the upper-level objective, you can use auxilearn to compute the upper-level gradients through implicit differentiation.
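
Schematically, and with illustrative notation (not taken verbatim from the paper), the setup is the bi-level problem

    \min_{\phi} \; \mathcal{L}_{\text{aux}}\bigl(\theta^{*}(\phi)\bigr)
    \quad \text{s.t.} \quad
    \theta^{*}(\phi) = \arg\min_{\theta} \; \mathcal{L}_{\text{train}}(\theta, \phi),

where \theta are the primary (lower-level) parameters, \phi are the auxiliary (upper-level) parameters, and the gradient of \mathcal{L}_{\text{aux}} with respect to \phi is obtained through implicit differentiation of the lower-level problem.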

The main code component you will need is auxilearn.optim.MetaOptimizer. It is a wrapper around a
PyTorch optimizer that updates the auxiliary parameters through implicit differentiation.

Code example

We assume two models, primary_model and auxiliary_model, and two dataloaders.
The primary_model is optimized using the train data in the train_loader, and the auxiliary_model is optimized using the auxiliary set in the aux_loader.
We assume a loss function, loss_func, that returns the training loss if train=True, or the auxiliary set loss if train=False.
Also, we assume the training loss is a function of both the primary parameters and the auxiliary parameters,
and that the loss on the auxiliary set (or validation set) is a function of the primary parameters only.
In Auxiliary Learning, the auxiliary set loss is the loss on the main task (see paper for more details).
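
For concreteness, here is one hypothetical shape such a loss_func could take, assuming the primary model produces predictions for a main task and one auxiliary task, and the auxiliary model learns how to combine the per-task losses into a single training loss. All names and the loss-combination scheme below are illustrative assumptions, not code from this repository:

    import torch
    import torch.nn.functional as F

    def loss_func(train=True):
        """Illustrative sketch: compute the loss on a single batch."""
        if train:
            # training loss: depends on both the primary and the auxiliary parameters
            x, y_main, y_aux = next(iter(train_loader))
            main_logits, aux_logits = primary_model(x)
            task_losses = torch.stack([
                F.cross_entropy(main_logits, y_main),
                F.cross_entropy(aux_logits, y_aux),
            ])
            # the auxiliary model maps the per-task losses to a single scalar
            return auxiliary_model(task_losses)
        else:
            # auxiliary set loss: the main-task loss, a function of the primary parameters only
            x, y_main, _ = next(iter(aux_loader))
            main_logits, _ = primary_model(x)
            return F.cross_entropy(main_logits, y_main)

The full training loop then looks as follows: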

    import torch

    from auxilearn.optim import MetaOptimizer

    primary_model = MyModel()
    auxiliary_model = MyAuxiliaryModel()

    # optimizers
    primary_optimizer = torch.optim.Adam(primary_model.parameters())

    aux_lr = 1e-4
    aux_base_optimizer = torch.optim.Adam(auxiliary_model.parameters(), lr=aux_lr)
    aux_optimizer = MetaOptimizer(aux_base_optimizer, hpo_lr=aux_lr)

    # training loop
    step = 0
    for epoch in range(epochs):
        for batch in train_loader:
            step += 1

            # calculate batch loss using 'primary_model' and 'auxiliary_model'
            primary_optimizer.zero_grad()
            loss = loss_func(train=True)

            # update primary parameters
            loss.backward()
            primary_optimizer.step()

            # condition for updating auxiliary parameters
            if step % aux_params_update_every == 0:
                # calc current train loss
                train_set_loss = loss_func(train=True)
                # calc current auxiliary set loss - this is the loss over the main task
                auxiliary_set_loss = loss_func(train=False)

                # update auxiliary parameters - no need to call loss.backward() or aux_optimizer.zero_grad()
                aux_optimizer.step(
                    val_loss=auxiliary_set_loss,
                    train_loss=train_set_loss,
                    aux_params=auxiliary_model.parameters(),
                    parameters=primary_model.parameters(),
                )

Citation

If you find auxilearn to be useful in your own research, please consider citing the following paper:

    @inproceedings{
        navon2021auxiliary,
        title={Auxiliary Learning by Implicit Differentiation},
        author={Aviv Navon and Idan Achituve and Haggai Maron and Gal Chechik and Ethan Fetaya},
        booktitle={International Conference on Learning Representations},
        year={2021},
        url={https://openreview.net/forum?id=n7wIfYPdVet}
    }