Multimodal Analytics to Record History and Recover from Pain: Story Sequencing for Reconciling Traumatised Individuals

Motivation: Can we facilitate refugees’ recovery, record their possibly fragmented memories, and form an overall understanding of what has happened in a crisis area by piecing together the puzzle of individuals’ stories?

Approach: First, the project will analyse biosignals (e.g., heart rate or oxygen levels), spoken audio (e.g., whispering or yelling), transcribed speech-to-text language (e.g., tense, active/passive voice, topic, and sentiment), and visual features in video (e.g., vertical wrinkles and other stress indicators) to assess the patient’s pain status and personalise its question posing and other interaction accordingly. Second, it will study topic modelling, question answering, and logic techniques to analyse the recorded content for what has and has not been said, and to identify topics that would benefit from elaboration. Finally, it will use visual text summarisation and storytelling techniques to formulate the individualised topical questions and to illustrate the recorded corpus, both for individual patients and for their populations.


The project methodology consists of studying either a small set of methods in depth or a larger set in combination. Research on state-of-the-art deep, transfer, and active learning methods is called for to minimise the amount of annotated data needed for method setup whilst maximising processing correctness and system adaptability. Multimodal aspects are considered to evaluate the gain and complexity of the additional data modalities. The project is truly interdisciplinary and tightly connected to authentic data, real-life applications, and business/provider internships. Both experimental and theoretical work, in other words applied and fundamental research, go hand in hand, with their emphasis depending on the student’s individual interests and expertise.
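The active-learning idea mentioned above, spending annotation effort only where it helps most, can be sketched with uncertainty sampling: a classifier is trained on a tiny labelled seed set and repeatedly queries the unlabelled example it is least sure about. The data, model, and query budget below are synthetic placeholders, not the project’s setup.

```python
# Uncertainty-sampling sketch: a logistic-regression classifier queries the
# pool example whose predicted probability is closest to the 0.5 decision
# boundary. All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic pool: two Gaussian clusters standing in for sentence features.
X_pool = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y_pool = np.array([0] * 100 + [1] * 100)

labelled = [0, 50, 100, 150]                      # tiny annotated seed set
unlabelled = [i for i in range(200) if i not in labelled]

clf = LogisticRegression()
for _ in range(10):                               # ten annotation rounds
    clf.fit(X_pool[labelled], y_pool[labelled])
    # Query the most uncertain example (probability nearest 0.5).
    proba = clf.predict_proba(X_pool[unlabelled])[:, 1]
    query = unlabelled[int(np.argmin(np.abs(proba - 0.5)))]
    labelled.append(query)
    unlabelled.remove(query)

accuracy = clf.score(X_pool, y_pool)
print(f"pool accuracy after 10 queries: {accuracy:.2f}")
```

The same loop applies with deep or transfer-learned models in place of the linear classifier; the design point is that each round asks an annotator for exactly one label, so a useful model can emerge from well under a hundred annotated examples.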


This project will appeal to students with excellent skills in experimentation, programming, and teamwork. Preference is given to students who have completed, or are taking, the units Artificial Intelligence, Document Analysis, and/or Machine Learning at The ANU or similar.

Background Literature

See, for example, the following paper:  Suominen H, Lundgrén-Laine H, Salanterä S, Salakoski T. Evaluating pain in intensive care. In Saranto K, Flatley Brennan P, Park H-A, Tallberg M, Ensio A (eds.): Proceedings of the 10th International Congress of Nursing Informatics (NI 2009), Studies in Health Technology and Informatics 2009;146:191–196.


This student project is a part of the activities of the NLP Team within the ML Group at The Australian National University (ANU) and Data61 in Canberra, the capital of Australia. The OECD Regional Well-Being Report 2014 evaluated Canberra as the most liveable city in the world.

The ML Group was recently (in 2014) ranked among the top five in the world in ML, the others being Microsoft Research, the Max Planck Institute Tübingen, the University of California, Berkeley, and the University of Cambridge. According to the QS World University Rankings for 2015–16, The ANU ranks within the top 20 universities globally, with an overall score of 91.0 out of 100.0 (19th), whilst the next best Australian university scored 83.1 (42nd). For the field of research (FOR) code of AI and Image Processing, applicable to ML and NLP, under Information and Computing Sciences, The ANU obtained the top score of 5 out of 5 in the Excellence in Research for Australia (ERA) evaluations in both 2010 and 2012.


The NLP Team is experienced in developing powerful, low-cost techniques that convert free-form text into structured representations. Our deep and transfer ML methods can use fewer than a hundred expert-annotated sentences to achieve performance comparable to state-of-the-art systems initialised with ten times more data. Similarly, our language processing methods have been among the top performers in the ALTA, CLEF, and TREC shared tasks on automated understanding, use, summarisation, and translation in difficult genres such as the “Doctors’ Latin” of electronic health records and the “Lawyers’ French” of patents.


Artificial Intelligence (AI), Digital Story Telling, Automated Speech Recognition (ASR), Natural Language Processing (NLP), and Visual-Interactive Text Search and Exploration with Logic, Computer Vision (CV), and Machine Learning (ML) 

Updated:  1 June 2019/Responsible Officer:  Dean, CECS/Page Contact:  CECS Marketing