Discrete neural networks with binary or ternary weights are computationally and memory efficient. However, unlike for their continuous counterparts, scalable uncertainty-aware training techniques for these networks remain largely unexplored.
The aims of this project are:
- deriving new, efficient algorithms for training discrete neural networks, based on variational inference with continuous relaxation or Langevin-like sampling methods;
- investigating the performance of large-scale networks trained with these algorithms on predictive uncertainty quantification, active learning, and continual learning.
Prerequisites:
- Some familiarity with machine learning.
- Experience with Bayesian methods and/or deep learning is preferred.
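To give a flavour of the first aim, the sketch below trains a binary-weight linear classifier with a straight-through estimator, a common continuous-relaxation trick: the forward pass uses binarized weights sign(w), while gradients are applied to the underlying continuous weights. This toy NumPy example (data, learning rate, and step count are all illustrative choices, not part of the project description) is a minimal stand-in for the large-scale variational or Langevin-based methods the project targets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data whose labels come from a +/-1 weight vector,
# so a binary-weight model can fit it perfectly.
X = rng.normal(size=(200, 4))
true_w = np.array([1.0, -1.0, 1.0, -1.0])
y = (X @ true_w > 0).astype(float)

# Continuous latent weights; the forward pass sees only their signs.
w = rng.normal(scale=0.1, size=4)
lr = 0.1

for step in range(500):
    wb = np.sign(w)                       # binarize for the forward pass
    logits = X @ wb
    p = 1.0 / (1.0 + np.exp(-logits))     # sigmoid
    grad_logits = (p - y) / len(y)        # gradient of mean BCE w.r.t. logits
    grad_wb = X.T @ grad_logits
    # Straight-through estimator: pretend d(sign(w))/dw = 1 and
    # apply the gradient of the binarized weights to the latent ones.
    w -= lr * grad_wb
    w = np.clip(w, -1.0, 1.0)             # keep latents in the binarization range

accuracy = ((X @ np.sign(w) > 0) == (y > 0.5)).mean()
print(f"training accuracy with binary weights: {accuracy:.2f}")
```

At deployment only the binarized weights sign(w) are stored, which is where the memory savings of discrete networks come from; the continuous latents exist only during training.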