Multi-modal Learning for Next Generation Quantum Sensors

Collaborate on a real-world project using contrastive deep learning and eXplainable AI to develop next generation quantum sensors.

Amanda Barnard AM

5 Nov 2025

Quantum sensing is set to revolutionise how we detect and monitor the world around us, from defence and national security to environmental monitoring and advanced communications. These sensors rely on defects in materials known as photon emitters, which offer ultra-sensitive and reliable performance far beyond classical technologies. However, our limited understanding of the atomic structures that enable this performance is severely limiting progress. This project addresses the challenge by developing a powerful new method that combines advanced microscopy with machine learning to directly observe and understand the atomic-scale mechanisms controlling photon emission. By unlocking this fundamental knowledge, the project will pave the way for designing next-generation materials that power more sensitive, reliable, and scalable quantum sensors. With downstream applications ranging from secure communications to remote surveillance and environmental sensing, this work is funded by the Australian Research Council, and will commence in January 2026.

As part of a research team located at ANU and UNSW, the PhD candidate will contribute to the development of a new correlative Transmission Electron Microscopy and PhotoLuminescence (TEMPL) methodology. The multimodal datasets used in TEMPL are notoriously difficult to integrate without introducing bias or losing critical correlations. To address this, the PhD project will design and implement a new multimodal imaging platform that overcomes the intrinsic spatial resolution mismatch between the two physical techniques, using contrastive learning and learned embeddings to map the structural and optical features into a shared latent space that automatically captures features correlating the electronic, optical and structural information. Unlike traditional supervised learning, which requires large labelled datasets, contrastive learning excels at extracting robust representations from limited data. By maximising similarity between true structural-optical (positive) pairs and distinguishing them from unrelated (negative) pairs in a shared latent space, this approach captures intrinsic structure–property relationships with high fidelity. This will enable the project to pinpoint the atomic origins of photon emission, as well as the local “environmental factors”, such as strain fields or local chemistry, that impact emission characteristics and determine the efficiency as quantum sensors.
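The core idea of dual-encoder contrastive learning can be illustrated with a minimal sketch. In this toy example, the "encoders" are simple linear projections and the inputs are random stand-ins for structural and optical features; the real project would use deep encoders trained on TEM images and PL spectra. The function and variable names, feature dimensions, and temperature value are all illustrative assumptions, not part of the project specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy 'encoder': a linear projection followed by L2 normalisation,
    so embeddings lie on the unit sphere and dot products are cosine similarities."""
    z = x @ w
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(z_a, z_b, tau=0.1):
    """InfoNCE-style contrastive loss: matched (positive) pairs sit on the
    diagonal of the similarity matrix; every off-diagonal entry acts as a
    negative. Minimising this loss pulls positives together and pushes
    negatives apart in the shared latent space."""
    logits = (z_a @ z_b.T) / tau                   # pairwise similarities / temperature
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # -log p(positive | anchor), averaged

# Hypothetical dimensions: 8 paired samples, structural maps reduced to 32
# features, optical spectra reduced to 16 features, shared latent space of 4.
x_struct = rng.normal(size=(8, 32))   # stand-in for TEM-derived structural features
x_opt = rng.normal(size=(8, 16))      # stand-in for PL spectral features
w_struct = rng.normal(size=(32, 4))   # structural-encoder weights
w_opt = rng.normal(size=(16, 4))      # optical-encoder weights

z_struct = encode(x_struct, w_struct)
z_opt = encode(x_opt, w_opt)
loss = info_nce(z_struct, z_opt)
```

Because both modalities are mapped into the same low-dimensional space, a structural patch and an optical spectrum become directly comparable, which is what allows correlations between them to be learned without labels.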

Research Questions and Tasks

This project will focus on the following tasks:

  1. Use self-supervised contrastive learning to fuse structural and optical data, enabling accurate identification of photon-emitting defects critical for quantum sensing applications.
  2. Train the model with dual encoders to simultaneously maximise agreement between related (positive) pairs, and build upon this method to enable image-map/spectral multimodal combinations.
  3. Use explainable AI to extract interpretable insights from models to guide the rational design of defect photon emitters tailored for quantum sensing technologies.
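One simple way to extract interpretable insights of the kind described in task 3 is permutation importance: shuffle one input feature at a time and measure how much the matched-pair similarity degrades. This is only one of many possible explainable-AI approaches, and everything here (the linear "encoders", the feature dimensions, the synthetic link between modalities) is a hypothetical construction for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(x, w):
    """Toy 'encoder': linear projection plus L2 normalisation."""
    z = x @ w
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def positive_similarity(x_a, x_b, w_a, w_b):
    """Mean cosine similarity of matched structural/optical pairs."""
    z_a, z_b = encode(x_a, w_a), encode(x_b, w_b)
    return float(np.mean(np.sum(z_a * z_b, axis=1)))

def permutation_importance(x_a, x_b, w_a, w_b, rng):
    """Score each structural feature by the drop in matched-pair similarity
    when that feature is shuffled across samples, breaking its correlation
    with the optical modality."""
    base = positive_similarity(x_a, x_b, w_a, w_b)
    scores = np.zeros(x_a.shape[1])
    for j in range(x_a.shape[1]):
        x_perm = x_a.copy()
        x_perm[:, j] = rng.permutation(x_perm[:, j])
        scores[j] = base - positive_similarity(x_perm, x_b, w_a, w_b)
    return scores

# Synthetic data: 16 samples, 6 structural features, 4 optical features.
x_struct = rng.normal(size=(16, 6))
w_struct = rng.normal(size=(6, 3))
w_opt = rng.normal(size=(4, 3))
# Hypothetical link: the first optical feature copies structural feature 0,
# so shuffling that feature should disrupt the cross-modal correlation.
x_opt = np.hstack([x_struct[:, :1], rng.normal(size=(16, 3))])

scores = permutation_importance(x_struct, x_opt, w_struct, w_opt, rng)
```

In the real project, an analogous attribution over learned encoder inputs would highlight which structural signatures, such as strain fields or local chemistry, drive the observed photon emission.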

Supervision

Hybrid, including periodic project meetings with UNSW collaborators.

Requirements

Background and experience in deep learning and machine vision are essential. Experience with Python is strongly desirable.
