2021 Program

Fundamentals of Metalogic
(John Slaney)

Keywords Completeness theorems, model theory, proof theory.

Abstract This course provides an introduction to the metatheory of elementary logic. Following a “refresher” on the basics of notation and the use of classical logic as a representation language, we concentrate on the twin notions of model and proof. An axiomatic system of first-order logic is introduced and proved complete for the standard semantics, and then we give a very brief overview of the basic concepts of proof theory and of formal set theory. The material in this course is presupposed by other courses in the Summer School, which is why it is presented first.
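The central result can be stated crisply: writing \vdash for derivability in the axiomatic system and \models for truth in all models, soundness and completeness together say that for any set of sentences \Gamma and sentence \varphi,

    \Gamma \vdash \varphi \quad\Longleftrightarrow\quad \Gamma \models \varphi

with soundness the left-to-right direction and completeness the converse.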

Pre-requisites It is assumed that students are at least familiar with logical notation for connectives and quantifiers, and can manipulate truth tables and some kind of proof system, semantic tableaux or the like. Normally they will have done a first logic course at tertiary level. Some basic mathematical competence is also presupposed: students should be comfortable with the notion of a formal language, be able to follow proofs by induction, and know basic set-theoretic notation.

Lecturer John Slaney is the founder and convenor of the Logic Summer School. He originated in England, ever so long ago, and discovered Australia in 1977. Undeterred by the fact that it had apparently been discovered before, he gradually moved here, joining the Automated Reasoning Project in 1988 on a three-year contract because he could not think of anywhere better to be a logician. He is still here and still can’t. His research interests pretty much cover the waterfront, including non-classical logics, automated deduction and all sorts of logic-based AI. He was formerly the leader of the Logic and Computation group at the ANU, and of the NICTA program of the same name.

Slides


Computability and Incompleteness
(Michael Norrish)

Keywords Computability; recursive functions and Turing machines; diagonalisation; Peano arithmetic and Gödel numbering; undecidability of first-order logic; incompleteness of Peano arithmetic.

Abstract We begin with two formal accounts of the intuitive notion of computability: recursive functions and Turing machines. These turn out to be equivalent, hence Church’s Thesis: the functions that can be computed by any means are precisely the partial recursive functions. Then we revisit the old Cretan who says that all Cretans always lie, and other forms of diagonalisation argument, such as the Halting Problem. Next we look at an axiomatic theory of arithmetic, known as Peano Arithmetic (PA), and show how we can represent all recursive functions in PA.
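As a foretaste of diagonalisation, here is a minimal sketch (in Python; my own illustration, not course material) of the idea behind the Halting Problem: any candidate halting decider can be turned into a program that defeats it.

    def make_diagonal(halts):
        """Given any claimed halting decider, build a counterexample to it."""
        def diagonal():
            if halts(diagonal):   # halt exactly when the decider says we don't
                while True:
                    pass
        return diagonal

    naive_halts = lambda f: True          # a (hopeless) candidate decider
    d = make_diagonal(naive_halts)
    # naive_halts(d) answers True, yet running d() would loop forever, so the
    # decider is wrong about d -- and the same construction defeats any decider.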

The representation of recursive functions in PA leads to Gödel numbering: a neat trick enabling us to effectively encode notions like “theorem”, “proof” and “provability in PA” within PA itself. We spend a while discussing the Diagonalisation Lemma and the Derivability Conditions. Finally, in one fell swoop we prove the undecidability of first-order logic (Church’s Theorem), the undefinability of truth (Tarski’s Theorem), the incompleteness of PA given the consistency of PA (Gödel’s First Incompleteness Theorem) and the unprovability of the consistency of PA given the consistency of PA (Gödel’s Second Incompleteness Theorem).
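One classical way to realise such an encoding (a sketch under assumed conventions; PA’s official coding differs in detail) maps a sequence of positive symbol codes to a single number via prime factor exponents, so that unique factorisation makes the code decodable:

    def primes(k):
        """The first k primes, by trial division (fine for tiny examples)."""
        found = []
        n = 2
        while len(found) < k:
            if all(n % p for p in found):
                found.append(n)
            n += 1
        return found

    def goedel_number(codes):
        """Encode positive symbol codes (c1, ..., ck) as 2**c1 * 3**c2 * ..."""
        g = 1
        for p, c in zip(primes(len(codes)), codes):
            g *= p ** c
        return g

    assert goedel_number([1, 3, 2]) == 2**1 * 3**3 * 5**2   # = 1350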

Pre-requisites Foundations of first-order logic

Background reading G. Boolos and R. Jeffrey, Computability and Logic.

Lecturer Michael Norrish is an Associate Professor at the Australian National University. Before joining the ANU, he worked for Data61, CSIRO. He is originally from Wellington, New Zealand, and is very happy to be back in the southern hemisphere, after an extended stint in Cambridge, England. It was there that he did his PhD, and then spent three years as a research fellow at St. Catharine’s College. His research interests include interactive theorem-proving, and the application of this technology to areas of theoretical computer science, particularly the semantics of programming languages.

Slides


Introduction to CTL Model Checking
(Peter)

Keywords Computation Tree Logic (CTL)

Abstract It is virtually impossible to guarantee the correctness of a system, and in turn the absence of bugs, by standard software engineering practice such as code review, systematic testing and good software design alone. The complexity of systems is typically too high to be manageable by informal reasoning or reasoning that is not tool-supported. The formal methods community has developed various rigorous, mathematically sound techniques and tools that allow the automatic analysis of systems and software. The application of these fully automatic techniques is typically called algorithmic verification. The course will describe one famous automatic verification technique, the algorithms it is based on, and the tools that support it.

The lectures will cover the foundations of computation tree logic (CTL), CTL model checking techniques, and model checking tools.
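To give a flavour of these techniques, here is a minimal sketch (in Python; my own illustration, not course material) of the fixpoint labelling at the heart of explicit-state CTL model checking, for the formula EF p (“p is reachable”), computed as the least fixpoint of Z = p ∨ EX Z:

    def check_EF(states, transitions, sat_p):
        """Return the set of states satisfying EF p."""
        sat = set(sat_p)                  # states already known to satisfy EF p
        changed = True
        while changed:
            changed = False
            for s in states:
                if s not in sat and any(t in sat for t in transitions[s]):
                    sat.add(s)            # s has a successor satisfying EF p
                    changed = True
        return sat

    # Tiny Kripke structure: s0 -> s1 -> s2 -> s2, with p holding only at s2.
    states = {"s0", "s1", "s2"}
    transitions = {"s0": {"s1"}, "s1": {"s2"}, "s2": {"s2"}}
    print(check_EF(states, transitions, {"s2"}))   # all three states qualify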

Pre-requisites Familiarity with predicate logic.

Background reading

  • Christel Baier and Joost-Pieter Katoen. Principles of Model Checking. MIT Press, 2008. ISBN 978-0-262-02649-9.
  • Edmund Clarke, Orna Grumberg and Doron Peled. Model Checking. MIT Press, 2000.
  • Michael Huth and Mark Ryan. Logic in Computer Science (2nd edition). Cambridge University Press, 2004.
  • Beatrice Berard et al. Systems and Software Verification: Model-Checking Techniques and Tools. Springer, 2001.

Lecturer Peter is an Associate Professor at The Australian National University (ANU), Canberra, Australia. Before joining ANU, he was a principal research scientist at Data61, CSIRO, Australia’s leading data innovation group, and also a conjoint Associate Professor at the University of New South Wales (UNSW), Australia.

His research focuses on applications of formal methods in computer science, including protocol verification, software engineering and hybrid system analysis. Currently the main focus of his research is on algebraic calculi for these topics.

Slides


Modelling Concurrent Systems
(Robert J. van Glabbeek)

Abstract This course introduces students to state-of-the-art techniques in modelling concurrent systems. The focus will be on the rationale behind the design decisions underlying the more successful models of concurrency found in the literature, viewed from philosophical, mathematical and computational perspectives.

The course covers labelled transition systems, process algebra and Petri nets; operational and denotational semantics; semantic equivalences and refinement relations; modal and temporal logic for concurrent systems; fairness assumptions and proving liveness properties (stating that something good will eventually happen); and impossibility results in modelling mutual exclusion.
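As a small illustration of the first of these models (my own sketch, not course material), a labelled transition system can be represented as a set of (source, action, target) triples, from which behavioural notions such as traces are derived:

    def traces(transitions, state, depth):
        """All action sequences of length <= depth enabled from `state`."""
        if depth == 0:
            return {()}
        result = {()}
        for (source, action, target) in transitions:
            if source == state:
                result |= {(action,) + rest
                           for rest in traces(transitions, target, depth - 1)}
        return result

    # A vending machine: insert a coin, then receive coffee or tea.
    lts = {("idle", "coin", "paid"),
           ("paid", "coffee", "idle"),
           ("paid", "tea", "idle")}
    print(traces(lts, "idle", 2))   # includes ('coin', 'coffee') and ('coin', 'tea')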

Lecturer Professor Rob van Glabbeek has been active as a research scientist in the field of Formal Methods since 1984, of which five years were spent at CWI in Amsterdam and twelve years at Stanford University. In addition he has had visiting appointments at the Technical University of Munich, GMD in Bonn, INRIA in Sophia Antipolis, the University of Edinburgh, the University of Cambridge, and l’Université de la Méditerranée in Marseilles. He has also been active as a consultant for Ricoh Innovations, California, in the area of workflow modelling.

From 2004 he worked for NICTA, Sydney, Australia, until that research institute merged into Data61, CSIRO, on 1 July 2016. Since then he has been a Chief Research Scientist at Data61, CSIRO in Sydney. Additionally, he is a Conjoint Professor at the School of Computer Science and Engineering at the University of New South Wales, and a Research Affiliate at the Concurrency Group in the Computer Science Department of Stanford University.

Slides


Software Verification with Whiley: the Complete Guide
(David J. Pearce)

Abstract The goal of a verifying compiler is to blend software verification with programming to the point they are indistinguishable. Researchers have pursued this idea for decades but, finally, we are starting to see real progress. Systems such as Dafny, Why3, Frama-C and Whiley are pushing software verification beyond the boundaries of academic research.

But what is a verifying compiler? How does it work? What is it like to verify software with one? In this lecture series, you’ll get an insider’s look at the Whiley verifying compiler. Whiley uses Boogie/Z3 under the hood to offer powerful verification, whilst providing a surface language resembling modern programming languages. We’ll use Whiley to verify some realistic software, whilst exploring challenges and pitfalls encountered along the way. We’ll also take a deep dive into how a verifying compiler, such as Whiley, works, and study practical aspects of Hoare logic and separation logic. Finally, we’ll reach the limits of tools like Whiley and consider what the future holds.
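The specification style involved looks roughly like this (a minimal sketch in Python rather than Whiley’s actual syntax, with the contracts written as run-time assertions; a verifying compiler instead discharges them statically, at compile time):

    def max_element(items: list[int]) -> int:
        assert len(items) > 0                    # requires: non-empty input
        result = items[0]
        for x in items[1:]:
            if x > result:
                result = x
        assert all(result >= x for x in items)   # ensures: an upper bound
        assert result in items                   # ensures: occurs in the list
        return result

    print(max_element([3, 1, 4, 1, 5]))   # 5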

Lecturer David Pearce (@whileydave) is an Associate Professor at Victoria University of Wellington, New Zealand.

He graduated with a PhD from Imperial College London in 2005, and took up a lecturer position at Victoria University of Wellington, NZ. David’s PhD thesis was on efficient algorithms for pointer analysis of C. During his time as a PhD student he undertook internships at Bell Labs, New Jersey, where he worked on compilers for FPGAs, and at IBM Hursley, UK, where he worked with the AspectJ development team on profiling systems.

Over the years, David has developed a number of algorithms which have seen practical uptake. His algorithm for field-sensitive pointer analysis was subsequently incorporated into GCC. Likewise, he developed a space-efficient variant of Tarjan’s algorithm for finding Strongly Connected Components (now included in SciPy). His algorithm for dynamic topological sort is used in the Abseil C++ library, Google’s TensorFlow, the Monosat SAT solver and JGraphT. Finally, he co-designed the most efficient algorithm to date for computing Tutte polynomials, developing a C++ implementation from scratch. This has since been incorporated into Mathematica.

Since 2009, David has been developing the Whiley Programming Language (whiley.org), which is designed specifically to simplify program verification. The language employs Boogie/Z3 for verification, and supports several backends including JavaScript and Java (with prototype backends for TypeScript and C++ underway). You can find out more about David’s work on his personal homepage (http://whileydave.com) and even try out Whiley for yourself (whileylabs.com).

Slides


Information flow security for concurrent code
(Kirsten Winter)

Abstract Secure software development relies on detecting the vulnerabilities and information leaks present in a program, ideally supported by a rigorous analysis. This course presents an approach to such an analysis suitable for concurrent code that uses shared-variable communication between its threads. The analysis proceeds by backwards-directed formal reasoning in a thread-local fashion, enabled by pairing standard rely/guarantee reasoning with the backwards analysis.

To complicate matters further, the analysis is designed to also handle shared-variable concurrency where mutual exclusion of accesses to shared variables is not enforced through a locking mechanism. This situation can arise through oversight during program development, or in the encoding of non-blocking algorithms, which are used for efficiency reasons. Such code exhibits data races between threads and as a result can be affected by the weak memory behaviour of the underlying hardware. Our approach additionally investigates whether weak memory behaviour can influence the secure information flow within the code.
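For a first intuition of what the analysis must detect, consider this minimal sketch (in Python; my own illustration, not from the course): the secret is never copied directly, yet branching on it leaks its value through the control flow, a so-called implicit flow.

    def leaky(secret_bit: int) -> int:
        public = 0
        if secret_bit == 1:   # control flow depends on the secret...
            public = 1        # ...so `public` now reveals it
        return public

    assert leaky(0) == 0 and leaky(1) == 1   # observers learn the secret exactly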

The course will be structured as follows:

  • Lecture 1: Introduction to information flow security
  • Lecture 2: Weakest precondition reasoning adapted to information flow security
  • Lecture 3: Weakest precondition reasoning enhanced with rely/guarantee reasoning for information flow security
  • Lecture 4: Weak memory behaviour on modern multi-core hardware
  • Lecture 5: An analysis of reordering-interference freedom for concurrent code under the effects of hardware weak memory

Lecturer Kirsten Winter received a PhD degree in Computer Science from the Technical University Berlin, Germany, in 2001, after which she held a position as a Research Fellow at the University of Queensland, Australia, for 18 years. During this time her research interests were centred around the analysis of concurrent software and systems, spanning a variety of approaches and techniques, including model checking, static program analysis, and proof systems based on a rely/guarantee refinement calculus. With those techniques she has targeted the analysis of formal requirements models, concurrent programs, and concurrent objects suitable for deployment on multi-core architectures. This background became useful when she joined Defence Science and Technology, the research branch of the Department of Defence, as a researcher in 2019 to tackle the security analysis of programs. Unsurprisingly, this research takes a formal approach to the problem.

Slides


Constructive Logic and Realisability
(Dirk Pattinson)

Keywords Constructive logic; program extraction; Heyting arithmetic; realisability interpretation.

Abstract The main difference from classical logic is that existential quantifiers (resp. disjunctions) are not dual to universal quantifiers (resp. conjunctions): to establish the truth of an existentially quantified formula, one needs to produce a witness of existence, and the (classically valid) principle of reductio ad absurdum is not available. As a consequence, every proof of a formula of the form ‘forall x. exists y. A(x, y)’ engenders an algorithm (a recursive function) that computes y from x.

The realisability interpretation makes this precise: from a proof of a formula it allows us to compute a realiser that represents the formula’s computational content.

We begin with an informal introduction to constructive reasoning, and the Brouwer-Heyting-Kolmogorov interpretation of constructive logic. We then introduce a natural deduction system to make this precise, and study the relationship between classical and constructive logic, in particular the double negation translation.
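For reference, the Gödel-Gentzen translation A ↦ A^N can be given by the clauses below; classical logic proves A exactly when constructive logic proves A^N:

    P^N = \neg\neg P \quad (P \text{ atomic})
    (A \land B)^N = A^N \land B^N
    (A \lor B)^N = \neg(\neg A^N \land \neg B^N)
    (A \to B)^N = A^N \to B^N
    (\forall x\, A)^N = \forall x\, A^N
    (\exists x\, A)^N = \neg\forall x\, \neg A^N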

We then introduce Heyting Arithmetic, and the notion of number realisability. Our first main result is the soundness theorem: every provable formula is realisable. Here, the realiser is a natural number that we think of as a Gödel number of a partial recursive function that represents the computational content of the formula. We then vary the notion of realisability to obtain the disjunction and existence properties for constructive logic, and, time permitting, introduce Heyting Arithmetic in Higher Types, together with function realisability. Here, realisers are terms of Gödel’s System T, i.e. they can be thought of as functional programs.
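In one standard formulation (Kleene’s number realisability, assuming a primitive recursive pairing with projections p_1, p_2, writing \{n\}(m) for the n-th partial recursive function applied to m, and \overline{m} for the numeral of m), the key clauses read:

    n \Vdash (s = t) \iff s = t \text{ is true}
    n \Vdash A \land B \iff p_1(n) \Vdash A \text{ and } p_2(n) \Vdash B
    n \Vdash A \lor B \iff (p_1(n) = 0 \text{ and } p_2(n) \Vdash A) \text{ or } (p_1(n) \neq 0 \text{ and } p_2(n) \Vdash B)
    n \Vdash A \to B \iff \text{for all } m \Vdash A,\ \{n\}(m)\downarrow \text{ and } \{n\}(m) \Vdash B
    n \Vdash \exists x\, A(x) \iff p_2(n) \Vdash A(\overline{p_1(n)})
    n \Vdash \forall x\, A(x) \iff \text{for all } m,\ \{n\}(m)\downarrow \text{ and } \{n\}(m) \Vdash A(\overline{m})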

Pre-requisites Familiarity with predicate logic and basic recursion theory, within the scope of the ‘Fundamentals of Metalogic’ and ‘Computability and Incompleteness’ courses.

Background reading

  • Thomas Streicher, Introduction to Constructive Logic and Mathematics, lecture notes. Available from the author’s home page; an excellent resource that covers almost all of the material presented.
  • Helmut Schwichtenberg and Stan Wainer, Proofs and Computations. Cambridge University Press, 2012. Advanced material that in particular studies different logical systems and program extraction from classical proofs; it also develops a theory of computable functions that allows one to discuss program extraction in the presence of general recursion.
  • Anne Troelstra, Metamathematical Investigation of Intuitionistic Arithmetic and Analysis, Springer, 1973. A classical reference for intuitionistic mathematics.

Lecturer Dirk Pattinson is a mathematician turned computer scientist. Prior to joining ANU he held a (senior) lectureship at Imperial College London, a lectureship at the University of Leicester and was a Research Associate at LMU Munich.


Interactive Theorem Provers and Cryptography
(Thomas Haines)

Keywords Cryptography; interactive theorem provers; EasyCrypt; encryption; commitments.

Abstract This course will introduce several fundamental cryptographic primitives and examples of situations in which they are useful. We will rigorously define what the primitives are, and what it means for them to be secure, in the interactive theorem prover EasyCrypt. We will then code various instantiations of these primitives and prove to the interactive theorem prover that they are secure. The main primitives analysed will be commitment schemes and encryption schemes. The aim of this course is to give a basic understanding of how cryptography can provide formal security guarantees and how those guarantees can be checked by interactive theorem provers.
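Security definitions of this kind are phrased as games between a challenger and an adversary. Below is a minimal sketch (plain Python, not EasyCrypt, and my own illustration) of an indistinguishability game for an encryption scheme, instantiated with a one-time pad; security means no adversary wins with probability noticeably above 1/2.

    import secrets

    def otp_encrypt(key: bytes, message: bytes) -> bytes:
        """One-time pad: XOR the message with a fresh, equal-length key."""
        return bytes(k ^ m for k, m in zip(key, message))

    def ind_game(adversary) -> bool:
        m0, m1 = adversary.choose()           # two equal-length messages
        b = secrets.randbits(1)               # hidden challenge bit
        key = secrets.token_bytes(len(m0))    # fresh random key
        guess = adversary.guess(otp_encrypt(key, (m0, m1)[b]))
        return guess == b                     # adversary wins iff guess is right

    class CoinFlipAdversary:
        def choose(self):
            return b"attack", b"defend"       # equal-length messages
        def guess(self, ciphertext):
            return secrets.randbits(1)        # no better is possible against OTP

    print(ind_game(CoinFlipAdversary()))      # True about half the time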

Pre-requisites Basic abstract algebra

Background reading D. Boneh and V. Shoup, A Graduate Course in Applied Cryptography

Lecturer Thomas Haines is an applied cryptographer specialising in the security of distributed systems. After completing his PhD in 2017, he worked as a research and development manager at Polyas GmbH in Berlin. While at Polyas he was responsible for the development of electronic voting solutions, including design, implementation, documentation and certification. Thomas was a research fellow at the Norwegian University of Science and Technology before joining ANU in 2021. He has been involved in auditing over a dozen different electronic voting systems.

Slides


Introduction to Homotopy Type Theory and Univalent Foundations of Mathematics
(Taichi Uemura)

Abstract This course introduces participants to homotopy type theory and univalent foundations of mathematics. Homotopy type theory is a branch of mathematics, logic, and computer science that combines homotopy theory and type theory. Univalent foundations are a style of doing mathematics in which mathematics is formalized in a type theory with homotopy-theoretic constructions and axioms.

Voevodsky’s univalence axiom asserts that equivalent types are identical. This conflicts with the set-theoretic interpretation and forces the type theory to be higher-dimensional. In this sense, the type theory with univalence works best as an internal language for higher categories, but it also has advantages in formalizing lower-dimensional or set-level mathematics. An example is Aczel’s structure identity principle: anything in the type theory is invariant under structure isomorphism.
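Formally, for types A and B in a universe \mathcal{U} there is a canonical map sending identities to equivalences, and univalence asserts that this map is itself an equivalence:

    \mathsf{idtoeqv} : (A =_{\mathcal{U}} B) \to (A \simeq B)
    \text{univalence:}\quad (A =_{\mathcal{U}} B) \simeq (A \simeq B)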

The course covers some fundamental concepts in homotopy type theory and univalent foundations: the univalence axiom and h-levels. Higher inductive types are also important, but we will probably not have much time to discuss them. We then discuss the structure identity principle.

Lecturer Taichi Uemura has been working on the semantics of type theories, especially those related to homotopy type theory. He received a PhD degree from the University of Amsterdam in 2021 and is currently working at Stockholm University as a postdoc.
