Project Overview

Discrete neural networks with binary or ternary weights are computationally and memory efficient. However, unlike for their continuous counterparts, scalable uncertainty-aware training techniques for these networks remain largely unexplored.

The aims of this project are:

  1. deriving new, efficient algorithms for training discrete neural networks, based on variational inference with continuous relaxations or Langevin-like sampling methods (an illustrative sketch of the relaxation idea follows this list); and
  2. investigating the performance of large-scale networks trained with these algorithms on predictive uncertainty quantification, active learning, and continual learning.
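
Aim 1 hinges on making discrete weights amenable to gradient-based variational training. As a purely illustrative sketch, and not the project's actual algorithm, the PyTorch snippet below relaxes a mean-field categorical posterior over ternary weights {-1, 0, +1} using the Gumbel-softmax (concrete) trick; the class name, the uniform prior, and the temperature value are assumptions made for this example.

```python
# Illustrative sketch only: a mean-field variational posterior over
# ternary weights {-1, 0, +1}, trained via the Gumbel-softmax (concrete)
# continuous relaxation. Names and hyperparameters are assumptions.
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class TernaryVariationalLinear(nn.Module):
    """Linear layer with ternary weights. The variational posterior is an
    independent 3-way categorical per weight entry, parameterised by logits
    and relaxed with Gumbel-softmax for reparameterised gradients."""

    VALUES = torch.tensor([-1.0, 0.0, 1.0])

    def __init__(self, in_features, out_features, temperature=0.5):
        super().__init__()
        # One 3-way logit vector per weight entry.
        self.logits = nn.Parameter(torch.zeros(out_features, in_features, 3))
        self.temperature = temperature

    def forward(self, x):
        # Relaxed one-hot sample per weight; the soft weight is a convex
        # combination of {-1, 0, +1} that hardens as temperature -> 0.
        probs = F.gumbel_softmax(self.logits, tau=self.temperature, hard=False)
        weight = probs @ self.VALUES.to(probs.device)
        return F.linear(x, weight)

    def kl_to_uniform_prior(self):
        # KL(q || uniform) term of the negative ELBO, summed over weights:
        # sum_q q * (log q - log(1/3)).
        log_q = F.log_softmax(self.logits, dim=-1)
        return (log_q.exp() * (log_q + math.log(3.0))).sum()


# Usage: minimise NLL + KL (the negative ELBO) with a standard optimiser;
# the squared-output loss here is a stand-in for a real likelihood term.
layer = TernaryVariationalLinear(16, 4)
x = torch.randn(8, 16)
loss = layer(x).pow(2).mean() + 1e-3 * layer.kl_to_uniform_prior()
loss.backward()
```

In practice the temperature would typically be annealed towards zero over training, so that the relaxed weights approach genuinely ternary values while gradients remain well-defined early on.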

Requirements

  • Some familiarity with machine learning.
  • Experience in Bayesian methods and/or deep learning is preferred.
