[Full PhD Scholarship] Transfer Learning with Diffusion Models (DARPA)

Full PhD stipend, apply by 10 November

This project is available only to prospective PhD students and is part of the DARPA Transfer from Imprecise and Abstract Models to Autonomous Technologies (TIAMAT) program.

Goal

To develop techniques for transferring knowledge

  1. from low-fidelity to high-fidelity simulation environments; and
  2. from simulation environments to real environments

by leveraging diffusion models to bridge the gap between domains.

Summary

Learning in abstract, low-fidelity simulation environments can be more efficient than learning in the real world. However, attempts to transfer learning directly between these environments have been brittle, owing to significant domain gaps in the appearance, geometry, and physics of the world and of the agent (e.g., a robot). This project aims to address some of these gaps by considering how generative models, such as diffusion models, may be used to bridge the domains. Concretely, previous work [1] has shown that pre-trained diffusion models can smoothly connect abstract geometries (e.g., a schematic of a pentagon) with real images (e.g., a satellite image of The Pentagon). In this project, we propose to develop techniques for transporting trajectories in one domain, with their associated visual imagery, into trajectories in another domain, forming a continuum of observations. We further propose to extend this to a continuum of simulators, in which the actions are transported as well as the observations. Recent work [2] suggests that this diffusion-model-as-simulator paradigm is a plausible one, whose generalisation capabilities we can extend.
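
As a concrete illustration of how a pre-trained diffusion model can form a continuum of observations between domains, the sketch below runs an SDEdit-style image-to-image sweep over the noising strength, moving from an abstract schematic towards a realistic rendering. It is a minimal sketch rather than the IMPUS method of [1]; the checkpoint name, prompt, file paths, and strength schedule are illustrative assumptions.

    # Minimal sketch: a continuum of images between an abstract schematic and a
    # realistic target domain, via an SDEdit-style img2img sweep with a
    # pre-trained diffusion model. Checkpoint, prompt, and paths are placeholders.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    device = "cuda" if torch.cuda.is_available() else "cpu"
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed pre-trained checkpoint
        torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    ).to(device)

    # Abstract source image (assumed path) and a target-domain description.
    schematic = Image.open("pentagon_schematic.png").convert("RGB")
    prompt = "aerial photograph of a large pentagonal building"

    # Sweep the noising strength: low strength stays close to the schematic,
    # high strength hands more of the generation over to the realistic prior.
    continuum = []
    for strength in torch.linspace(0.2, 0.9, steps=8):
        out = pipe(
            prompt=prompt,
            image=schematic,
            strength=float(strength),
            guidance_scale=7.5,
            num_inference_steps=50,
        ).images[0]
        continuum.append(out)

    for i, frame in enumerate(continuum):
        frame.save(f"continuum_{i:02d}.png")

Sweeping the strength in this way gives only a coarse, observation-level continuum; the project would go further, transporting whole trajectories (and, ultimately, actions) rather than single images.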

Desired Applicant Profile

  • Coding Proficiency: Applicants should be comfortable coding in Python and familiar with deep learning and the PyTorch framework.
  • Mathematical Proficiency: Applicants should be familiar with linear algebra, optimisation, probability, and geometry.
  • Experience with simulation environments, such as Habitat, NVIDIA Isaac Sim, or MuJoCo, is desirable.
  • Experience with robots is desirable.

Contact

Please send

  • your CV,
  • your academic transcripts,
  • a sample of your academic writing, and
  • a short (~2 paragraph) proposal outlining how you would approach this project and how your skills and interests align with it

to the TIAMAT Project Mailbox by Sunday 10 November. Any questions about the project can also be directed to that email address.

References

[1] Yang, Zhaoyuan, Zhengyang Yu, Zhiwei Xu, Jaskirat Singh, Jing Zhang, Dylan Campbell, Peter Tu, and Richard Hartley, “IMPUS: Image Morphing with Perceptually-Uniform Sampling Using Diffusion Models”, ICLR, 2024.

[2] Valevski, Dani, Yaniv Leviathan, Moab Arar, and Shlomi Fruchter, “Diffusion models are real-time game engines”, arXiv:2408.14837, 2024.
