AI, ML and Friends is a weekly seminar series within the School of Computing on Artificial Intelligence, Machine Learning, and related topics. We welcome attendees and presenters from outside the school. Please sign up to the mailing list to receive weekly announcements, including Zoom details, and email the seminar organiser to schedule a talk.

Upcoming Seminars #

06 February 2025, 11:00 #

Counter-Example Based Planning #

Speaker: Xiaodi Zhang

Abstract: I will introduce advancements in solving conformant planning (CP) and probabilistic conformant planning (PCP) through counter-example-based approaches. For CP, I enhance the CPCES algorithm by introducing certain-facts to optimise classical planning reductions, implementing a warm-start strategy to improve initialisation, and integrating an incremental SAS+ representation for better compatibility with the FD planner. For PCP, I propose p-CPCES, which uses counter-tags, a probabilistic abstraction of counter-examples, to reduce search complexity, and employs d-DNNF representations for efficient probability computation. To address the limitations of single-threaded computation in p-CPCES, I propose parallel-CPCES, a parallelised system utilising multi-core CPUs. By introducing specialised modules to manage counter-tags, hitting sets, and candidate plans, parallel-CPCES enables concurrent computations at each depth, significantly reducing planning time.

Bio: Xiaodi Zhang earned a Master of Computing degree from the Australian National University in 2019. Following that, he worked as a software engineer in Beijing for one and a half years. Since August 2021, Xiaodi has been pursuing his PhD at the Australian National University, focusing on counter-example-based planning. His primary supervisors are Alban Grastien and Charles Gretton. Xiaodi has published related research papers at the AAAI-20, SoCS-23 and ICAPS-24 conferences, receiving Best Student Paper (Honorable Mention) at ICAPS-24.

Where: Building 145, room 3.41

13 March 2025, 11:00 #

Human Autonomy in the Age of A.I. #

Speaker: Joshua Krook

Abstract: Recommender systems form the backbone of modern e-commerce, suggesting items to users based on algorithmically collected data about a user’s preferences. Companies that use recommender systems claim that they can give users what they want, or more precisely, what they desire. Netflix, for example, recommends movies based on the user’s behaviour on the platform, thereby surfacing new movies that the user may want to watch. This article explores whether there is a difference between what engages us, on the one hand, and what we truly want to want, on the other. It builds on the hierarchical structure of desires, as proposed by Harry Frankfurt and Gerald Dworkin. Recommender systems, to use Frankfurt’s terminology, may not allow for the formation of second-order desires, or for users to consider what they want to want. Indeed, recommender systems may rely on a narrow form of human engagement, a voyeuristic mode, rather than an active wanting. In bypassing second-order desires, there is a risk that recommender systems start to control the user, rather than the user controlling the algorithm. This raises important questions concerning human autonomy, trustworthiness, and Byung-Chul Han’s conception of an information regime, in which the owners of the data make decisions about what users consume online, and ultimately, how they live their lives.

Bio: Joshua Krook is a Visiting Academic at the School of Electronics and Computer Science at the University of Southampton, UK, as part of the Responsible AI UK research network. For the past few years, he has worked in the European context, developing strategic research papers on AI and platform regulation for universities and public bodies. He has organised workshops with stakeholders across Europe to develop policy solutions to problems as diverse as the use of drones on construction sites, AI transparency, and AI skills education. More recently, he co-drafted the Munich Convention on AI, Data and Human Rights, and is working in the Transparency Working Group to co-draft the Code of Practice for the European AI Act. Previously, he worked on technology policy for the Australian Federal Government's Department of Industry.

Where: Building 145, room 1.33
