2015 Complogic Workshop - July 16th, 2015

McConnell 320, McGill University

10:00am

Between modal and first-order: hybrid logic
Agata Murawska

The basic concepts of hybrid logic were introduced by Arthur N. Prior in his work on tense logics. The novel idea was to treat time instants as propositions. Hybrid logic has since been adapted for numerous variants of modal, temporal and description systems. Not only can it increase the expressive power of the underlying logic, it may also make the resulting system better behaved in terms of proof-theoretic properties. In principle, any kind of logic can be "hybridised" by adding a new kind of proposition: nominals, which are true at exactly one point of a model.
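As a rough illustration (standard hybrid-logic notation, not material specific to the talk): a nominal i names a single world, and the satisfaction operator @_i evaluates a formula at that world.

```latex
% A nominal i is true at exactly one world, the one it names;
% @_i \varphi jumps to that world.
\mathcal{M}, w \Vdash i \iff w = V(i)
  \qquad \text{(where } V(i) \text{ is the unique world named by } i\text{)}
\mathcal{M}, w \Vdash @_i\,\varphi \iff \mathcal{M}, V(i) \Vdash \varphi
```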

In this talk I'm going to give an overview of hybrid logic, the new operators it introduces, and its applications. In particular, I will present the Hybrid Logical Framework (HLF), a framework for linear logic developed by Jason Reed. HLF, as the name suggests, uses a variant of hybrid logic as its foundation. Finally, I will also discuss a generalisation of HLF that we are currently working on at ITU.

10:30am

Contextual Linear LF
Shawn Otis

While the logical framework LF is well understood and has been implemented in many programming and reasoning frameworks, there are systems for which it is not well suited. Examples include resources, session types and stateful systems, all of which are growing in popularity. One way to model such systems is to use a linear logical framework, which allows us to limit the usage of assumptions, an intrinsic property of these systems.

We are interested in the meta-theoretic properties of such systems. However, a framework for this does not, to our knowledge, exist. We would thus like to incorporate this linear framework as an extension to Beluga, taking advantage of its two-level approach. Although Beluga's additional constructs, such as context variables, pose design challenges due to their behaviour in a linear setting, they also allow us to prove theorems involving contexts, which are an important aspect of a linear system.
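For intuition (a textbook linear-logic rule, not the framework's actual syntax), "limiting the usage of assumptions" means a linear hypothesis is consumed exactly once:

```latex
% Linear implication introduction: the hypothesis x:A must be used
% exactly once in M, and disappears from the context afterwards.
\frac{\Delta, x{:}A \vdash M : B}{\Delta \vdash \hat{\lambda}x.\, M : A \multimap B}
```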

11:00am

The marriage of effects, monads and Beluga
Francisco Ferreira

In this talk, I will discuss Philip Wadler's paper "The marriage of effects and monads" from the point of view of a Beluga user. The talk will mostly discuss how one goes about trying to understand a paper using Beluga. It will be a talk with category theory, programming languages and friendly cetacea; what's not to like?
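As a warm-up for the monads half of the title (ordinary Haskell, not Beluga, and not code from the paper), here is the usual way a state effect is packaged as a monad; an effect system would annotate tick with {read, write}, whereas the monadic type merely records that it is stateful.

```haskell
-- A minimal sketch: modelling a state effect as a monad.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State (\s -> let (a, s') = g s in (f a, s'))

instance Applicative (State s) where
  pure a = State (\s -> (a, s))
  State f <*> State g = State (\s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s''))

instance Monad (State s) where
  State g >>= k = State (\s -> let (a, s') = g s in runState (k a) s')

get :: State s s
get = State (\s -> (s, s))

put :: s -> State s ()
put s' = State (\_ -> ((), s'))

-- Reads and writes the state; its type says so.
tick :: State Int Int
tick = do n <- get; put (n + 1); return n
```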

11:30am

Lunch

1:00pm

Overview of Krivine's classical realisability
Nicolas Jeannerod

Before 1990, there was a strong limitation to the Curry-Howard correspondence: it was restricted to constructive logic, where proofs explicitly build the objects they introduce.

In 1990, Timothy Griffin removed this restriction and extended the correspondence to classical logic by adding the **callcc** operator. Although some control operators had been introduced earlier, Griffin was the first to type a control operator, showing a deep connection between **callcc** and Peirce's law.

Following this result, various techniques have been introduced to bring classical logic into computation. Jean-Louis Krivine's classical realizability stands among them, and will be the topic of this talk.
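As a hedged illustration in Haskell (not Krivine's realizability machinery itself): in the continuation monad, the type of callCC mirrors Peirce's law ((A -> B) -> A) -> A, and the captured continuation gives the familiar "early exit" behaviour. The findFirst example below is ours, purely for illustration.

```haskell
import Control.Monad (when)
import Control.Monad.Cont (Cont, callCC, runCont)

-- callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
-- Erasing the monad, this is Peirce's law: ((A -> B) -> A) -> A.

-- Abort a search early by invoking the captured continuation.
findFirst :: (a -> Bool) -> [a] -> Maybe a
findFirst p xs = runCont (callCC search) id
  where
    search exit = do
      mapM_ (\x -> when (p x) (exit (Just x))) xs
      return Nothing
```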

1:30pm

A Case Study on Logical Relations using Contextual Types
Andrew Cave

Proofs by logical relations play a key role in establishing rich properties such as normalization or contextual equivalence. They are also challenging to mechanize. In this paper, we describe the completeness proof of algorithmic equality for simply typed lambda-terms by Crary, where we reason about logically equivalent terms in the proof environment Beluga. There are three key aspects we rely upon: 1) we encode lambda-terms together with their operational semantics and algorithmic equality using higher-order abstract syntax; 2) we directly encode the corresponding logical equivalence of well-typed lambda-terms using recursive types and higher-order functions; 3) we exploit Beluga's support for contexts and the equational theory of simultaneous substitutions. This leads to a direct and compact mechanization, demonstrating Beluga's strength at formalizing logical relations proofs.
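For orientation (the standard Kripke-style logical relation for simply typed terms, paraphrased rather than quoted from the paper), logical equivalence is defined by induction on the type:

```latex
% At base type, logical equivalence coincides with algorithmic equality;
% at function type, related arguments in any extended context
% must yield related results.
\Gamma \vdash M \approx N : b \;\text{ iff }\; M \text{ and } N
  \text{ are algorithmically equal at base type } b
\Gamma \vdash M \approx N : A_1 \rightarrow A_2 \;\text{ iff }\;
  \text{for all } \Gamma' \geq \Gamma \text{ and }
  \Gamma' \vdash M' \approx N' : A_1,\;
  \Gamma' \vdash M\,M' \approx N\,N' : A_2
```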

2:00pm

Well-founded Recursion over Contextual Objects
Brigitte Pientka

We present a core programming language that supports writing well-founded structurally recursive functions using simultaneous pattern matching on contextual LF objects and contexts. The main technical tool is a coverage checking algorithm that also generates valid recursive calls. To establish consistency, we define a call-by-value small-step semantics and prove that every well-typed program terminates using a reducibility semantics. Based on the presented methodology we have implemented a totality checker as part of the programming and proof environment Beluga where it can be used to establish that a total Beluga program corresponds to a proof.

Winter 2009 Seminar

18 June 2009

4:00pm (McConnell 103, McGill)

Shifting the Stage: Staging with Delimited Control
Yukiyoshi Kameyama (Tsukuba), Oleg Kiselyov (FNMOC), Chung-chieh Shan (Rutgers)

It is often hard to write programs that are efficient yet reusable. For example, an efficient implementation of Gaussian elimination should be specialized to the structure and known static properties of the input matrix. The most profitable optimizations, such as choosing the best pivoting or memoization, cannot be expected of even an advanced compiler because they are specific to the domain, but expressing these optimizations directly makes for ungainly source code. Instead, a promising and popular way to reconcile efficiency with reusability is for a domain expert to write code generators.

Two pillars of this approach are types and effects. Typed multilevel languages such as MetaOCaml ensure safety: a well-typed code generator neither goes wrong nor generates code that goes wrong. Side effects such as state and control ease correctness: an effectful generator can resemble the textbook presentation of an algorithm, as is familiar to domain experts, yet insert "let" for memoization and "if" for bounds-checking, as is necessary for efficiency. However, adding effects blindly renders multilevel types unsound.

We introduce the first two-level calculus with control effects and a sound type system. We give small-step operational semantics as well as a continuation-passing style (CPS) translation. For soundness, our calculus restricts the code generator's effects to the scope of generated binders. Even with this restriction, we can finally write efficient, realistic code generators for dynamic programming, Gaussian elimination, and other numerical methods in direct style—like in algorithm textbooks—rather than in CPS or monadic style.
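A tiny, hedged sketch of the code-generation idea (plain Haskell over a toy expression type, not MetaOCaml, and without the paper's control effects): specializing the power function to a statically known exponent produces residual code with the recursion unrolled.

```haskell
-- Toy "code" type: generated expressions, not values.
data Expr = Var String | Lit Int | Mul Expr Expr
  deriving Show

-- Staged power: the exponent is static, the base is dynamic code.
power :: Int -> Expr -> Expr
power 0 _ = Lit 1
power n x = Mul x (power (n - 1) x)

-- power 3 (Var "x")
--   ==> Mul (Var "x") (Mul (Var "x") (Mul (Var "x") (Lit 1)))
```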

7 April 2009

3:00pm (McConnell 320, McGill)

Joshua Dunfield: "Bidirectional polymorphism through greed and unions"

Bidirectional typechecking has become popular in advanced type systems because it works in many situations where inference is undecidable. In this paper, I show how to cleanly handle parametric polymorphism in a bidirectional setting, even in the presence of subtyping. The first contribution is a bidirectional type system that supports first-class (higher-rank and impredicative) polymorphism but is complete for predicative polymorphism (including ML-style polymorphism and higher-rank polymorphism). This power comes from bidirectionality combined with a "greedy" method of finding polymorphic instances inspired by Cardelli's early work on System F<:. The second contribution adds subtyping; combining bidirectional typechecking with intersection and union types fortuitously yields a simple but rather powerful system. Finally, I present a more powerful algorithm that forms intersections and unions automatically. This paper demonstrates that bidirectionality is a strong foundation for traditionally vexing features like first-class polymorphism and subtyping.
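As background (the generic bidirectional judgments, not the paper's full system with polymorphism), typechecking is split into a synthesis mode and a checking mode that hand off to each other via subtyping and annotations:

```latex
% e \Rightarrow A : the type A is synthesized from e
% e \Leftarrow A : e is checked against a known type A
\frac{\Gamma \vdash e \Rightarrow A \qquad A \le B}
     {\Gamma \vdash e \Leftarrow B}
\qquad
\frac{\Gamma \vdash e \Leftarrow A}
     {\Gamma \vdash (e : A) \Rightarrow A}
```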

26 February 2009

11:30am (McConnell 320, McGill)

Harry Mairson: "Linear Logic and the Complexity of Control Flow Analysis"

Control flow analysis (CFA) is a kind of static program analysis performed by compilers, where the answers to questions like "can call site X ever call procedure P?" or "can procedure P ever be called with argument A?" can be used to optimize code output by the compiler. Such questions can be answered exactly by Girard's geometry of interaction (GoI), but in the interest of efficiency and time-bounded computation, control flow analysis computes a crude approximation to GoI, possibly including false positives.

Different versions of CFA are parameterized by their sensitivity to calling contexts, as represented by a contour—a sequence of k labels representing these contexts, analogous to Lévy's labelled lambda calculus. CFA with larger contours is designed to make the approximation more precise. A naive calculation shows that 0CFA (i.e., with k=0) can be solved in polynomial time, and kCFA (k>0, a constant) can be solved in exponential time. We show that these bounds are tight, namely, that the decision problem for 0CFA is PTIME-hard, and for kCFA is EXPTIME-hard. Each proof depends on fundamental insights about the linearity of programs. In both the linear and nonlinear case, contrasting simulations of the linear logic exponential are used in the analysis. The key idea is to take the approximation mechanism inherent in CFA, and exploit its crudeness to do real computation.

This is joint work with David Van Horn (Brandeis University), presented at the 2008 ACM International Conference on Functional Programming.

Fall 2008 Weekly Seminar

For Fall 2008, the usual meeting time is Thursdays 10am–11am (McGill: McConnell 103).

11 Dec. 2008

10am (McGill)

Giulia Alberini: "A Constructive Proof of Gödel's Incompleteness Theorems"

We introduce the class of recursive functions and consider one of its generalizations in order to define the notion of partial recursive functions. We then see how a logical system can represent, by formulas, the self-referential recursive predicates that we define at the meta-level. We introduce the notion of representability and prove a fundamental theorem that correlates the notion of recursive functions with the notion of representable functions. We then introduce a coding that enables us to define the predicate "Thm(x)", which denotes what the system itself can prove. Each step will use only intuitionistically valid logical principles. We are then in a position to give a complete constructive treatment of the Incompleteness Theorems.
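For reference (the standard textbook definition, not necessarily the exact formulation used in the talk), a function f is representable in a theory T when some formula provably tracks its graph:

```latex
% f : \mathbb{N}^k \to \mathbb{N} is representable in T if there is a
% formula \varphi(x_1,\dots,x_k,y) such that, whenever f(n_1,\dots,n_k) = m,
T \vdash \varphi(\bar{n}_1,\dots,\bar{n}_k,\bar{m})
\quad\text{and}\quad
T \vdash \forall y\,\bigl(\varphi(\bar{n}_1,\dots,\bar{n}_k,y) \rightarrow y = \bar{m}\bigr)
```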

13 Nov. 2008

10am (McGill)

Samuel Gélineau: "Commutative types: a conservative approach to aspect composition"

I argue that commutativity is a strong but important property to demand of aspect-oriented weaving strategies. I show how the lack of commutativity can explain the inter-aspect interferences which plague aspect-oriented programming. I am working on a statically-typed, functional programming language featuring "commutative types", that is, functions for which composition is commutative; but in this talk I will focus on the problem, not the solution.

23 Oct. 2008

10am (McGill)

David Haguenauer: "Proof assistants and meta-programming"

I will talk about how different programming languages and proof assistants handle the issue of algorithmically constructing expressions at compile-time, and how that question relates to my prototype compiler named CCint.

2 Oct. 2008

10am (McGill)

Darin Morrison: "Assurance of the GHC RTS"

The Glasgow Haskell Compiler (GHC) is a production-quality Haskell compiler with a large, complex runtime system (RTS) of around 60 KLoC. Having assurances of the correctness of GHC's RTS is desirable when executing mission-critical Haskell applications, but providing these assurances requires a daunting engineering effort beyond the resources of most parties. In this talk, we describe a test-driven approach developed at Galois which makes progress toward a less rigorous form of assurance for the GHC RTS. To make the task easier, our approach utilizes several Haskell libraries such as QuickCheck and the GHC API, and several idiomatic features of advanced Haskell programming, such as monads and monad transformers, monadic knot-tying, type classes and overloading, functional dependencies, and applicative functors.
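To give a flavour of the test-driven style (a generic QuickCheck property, not the actual Galois test suite): a property is a Boolean-valued function, and QuickCheck searches for a counterexample over random inputs.

```haskell
import Test.QuickCheck (quickCheck)

-- Reversing a list twice should give back the original list.
prop_reverseInvolutive :: [Int] -> Bool
prop_reverseInvolutive xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseInvolutive
```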

18 Sept. 2008

10am (McGill)

Louis-Julien Guillemette: "A Type-Preserving Closure Conversion in Haskell"

The talk presents our compiler from System F to typed assembly language, whose main property is that the GHC type checker verifies mechanically that each phase of the compiler properly preserves types. Our particular focus is on "types rather than proofs": reasonably few annotations that do not overwhelm the actual code.

We believe it should be possible to write such a type-preserving compiler with an amount of extra code comparable to what is necessary for typical typed intermediate languages, and with the advantage of static checking. This goal is already a reality for a simply typed language, and we show what extra work is still needed to handle polymorphism.

This is joint work with Stefan Monnier.
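A hedged miniature of the "GHC checks the compiler" idea (our own toy example, not the compiler's actual representation): when object-language expressions are encoded with a GADT indexed by their object-level types, a phase of type Exp t -> Exp t is type-preserving by construction, and GHC verifies that claim when the compiler itself is compiled.

```haskell
{-# LANGUAGE GADTs #-}

-- Object-language expressions indexed by their object-level type.
data Exp t where
  IntL  :: Int  -> Exp Int
  BoolL :: Bool -> Exp Bool
  Add   :: Exp Int -> Exp Int -> Exp Int
  If    :: Exp Bool -> Exp a -> Exp a -> Exp a

-- A (trivial) transformation phase: constant-folding additions.
-- Its type, Exp t -> Exp t, is the type-preservation statement.
cfold :: Exp t -> Exp t
cfold (Add a b) =
  case (cfold a, cfold b) of
    (IntL x, IntL y) -> IntL (x + y)
    (a', b')         -> Add a' b'
cfold (If c t e) = If (cfold c) (cfold t) (cfold e)
cfold e          = e
```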

11 Sept. 2008

10am (McGill)

Tom Schrijvers: "Type Checking with Open Type Functions"

This talk presents an extension of Haskell with open type-level functions and equality constraints that unifies earlier work on GADTs, functional dependencies, and associated types. It identifies and characterizes the key technical challenge of entailment checking, and summarizes a novel, decidable, sound, and complete algorithm to solve it. This system is implemented in GHC and is already in active use.
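A small sketch of an open type-level function in GHC syntax (illustrative only, not an example taken from the paper): the family may gain new instances in other modules, and class methods can mention it.

```haskell
{-# LANGUAGE TypeFamilies #-}

-- An open type family: a type-level function whose instances
-- may be extended elsewhere.
type family Elem c
type instance Elem [e] = e

class Collection c where
  empty  :: c
  insert :: Elem c -> c -> c

instance Collection [e] where
  empty  = []
  insert = (:)
```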

Winter 2008 Weekly Seminar

15 April 2008

(McGill)

Derek Dreyer: "Mixin' Up the ML Module System"

ML modules provide hierarchical namespace management, as well as fine-grained control over the propagation of type information, but they do not allow modules to be broken up into separately compilable, mutually recursive components. Mixin modules facilitate recursive linking of separately compiled components, but they are not hierarchically composable and typically do not support type abstraction. We synthesize the complementary advantages of these two mechanisms in a novel module system design we call MixML. A MixML module is like an ML structure in which some of the components are specified but not defined. In other words, it unifies the ML structure and signature languages into one. MixML seamlessly integrates hierarchical composition, translucent ML-style data abstraction, and mixin-style recursive linking. Moreover, the design of MixML is clean and minimalist; it emphasizes how all the salient, semantically interesting features of the ML module system (as well as several proposed extensions to it) can be understood simply as stylized uses of a small set of orthogonal underlying constructs, with mixin composition playing a central role.

This is joint work with Andreas Rossberg.

29 Jan 2008

(McGill)

Brigitte Pientka

A summary of the main ideas in "Focusing and higher-order abstract syntax" (Noam Zeilberger, POPL 2008; paper and slides), plus an overview of work during the break.

Fall 2007 Weekly Seminar

10 Dec 2007

(McGill)

Seyed Heidar Pirzadeh Tabari: "Encoding the Program Correctness Proofs as Programs in PCC Technology"

One of the key issues with the practical applicability of Proof-Carrying Code and related methods is the difficulty of communicating and storing the proofs, which are inherently large. The approaches proposed to alleviate this suffer from drawbacks of their own, especially the enlargement of the Trusted Computing Base, in which any bug may cause an unsafe program to be accepted. We propose instead to transmit a proof generator for the program in question, in a generic extended PCC framework. This framework enables the proof generator to be executed securely on the consumer side.

3 Dec 2007

(McGill)

Renaud Germain: "A Prototype Implementation for Programming with Higher-Order Abstract Syntax"

26 Nov 2007

(McGill)

Samuli Heilala: "Some Proof Theory of Multimodal Logics"

19 Nov 2007

(McGill)

Maja Frydrychowicz: "Ongoing Work on Authorization Logics"

12 Nov 2007

(McGill)

Ian Clement: "Constructive Description Logic"

Description logic (DL) languages, successors of terminological representation languages or concept languages, are knowledge representation formalisms with logic-based semantics. Virtually all research to date considers DLs with a basis in classical logic. Expanding on existing work on constructive Description Logics, we present a natural deduction system for a basic DL and prove soundness and completeness with respect to known Kripke semantics. An associated sequent calculus is presented along with some proof-theoretic results. A few possible methods for showing decidability are considered and briefly sketched.

22 Oct 2007

(McGill)

Peter O'Hearn: "Automatic Program Verification with Separation Logic"

Separation logic is a program logic geared toward specifying and verifying properties of dynamically-allocated linked data structures, and shared variable concurrent programs. It has led to simpler specifications and proofs of low-level imperative programs than was previously possible, and this has been leveraged in automatic tools that do proofs of certain lightweight properties of programs. This will be a mixed theory/demo talk, where I introduce the ideas in the logic and exemplify them using the experimental program verification tools Smallfoot and Space Invader.
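The flavour of the logic can be seen in its frame rule (standard presentation, not specific to the tools demoed): reasoning about only the heap a command touches extends to any disjoint remainder.

```latex
% Frame rule: * is separating conjunction; R describes heap that the
% command C does not touch (and C must not modify R's free variables).
\frac{\{P\}\; C\; \{Q\}}
     {\{P * R\}\; C\; \{Q * R\}}
```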

15 Oct 2007

(UdM)

Joshua Dunfield: "Refined Typechecking with Stardust"

We present Stardust, an implementation of a type system for a subset of ML with type refinements, intersection types, and union types, enabling programmers to legibly specify certain classes of program invariants that are verified at compile time. This is the first implementation of unrestricted intersection and union types in a mainstream functional programming setting, as well as the first implementation of a system with both datasort and index refinements. The system—with the assistance of external constraint solvers—supports integer, Boolean and dimensional index refinements; we apply both value refinements (to check red-black tree invariants) and invaluable refinements (to check dimensional consistency). While typechecking with intersection and union types is intrinsically complex, our experience so far suggests that it can be practical in many instances.

1 Oct 2007

(McGill)

Samuli Heilala: "An Introduction to Matrix Characterizations of Validity and Connection-Based Theorem Proving"

Semantically, a formula of classical first-order logic is valid if it is true under every interpretation on every nonempty domain. To determine a formula's validity in the context of theorem proving, however, more syntactic characterizations of validity, such as the existence of a resolution or tableau proof, are usually more convenient and tractable. Building on some pioneering work by Dag Prawitz in the 1960's, Wolfgang Bibel and Peter Andrews in the 1980's independently developed the matrix characterization, an alternative characterization of validity for first-order formulas that attempts to eliminate much of the redundancy that encumbers the proof spaces of formalisms such as natural deduction, sequent calculus, tableau, and resolution systems.

I will introduce the matrix characterization of validity, first for propositional and then for first-order logic, and will discuss how it can be used for theorem proving. If time permits, I will also present matrix characterizations of validity for some modal logics.

24 Sep 2007

(UdM)

Stefan Monnier: "The Swiss Coercion"

17 Sep 2007

(McGill)

Louis-Julien Guillemette: "A Type-Preserving Closure Conversion in Haskell"

I will present a closure conversion implemented with GADTs, a step towards the construction of a complete type-preserving compiler in Haskell. It aims to provide the same guarantees as a conventional compiler with typed intermediate languages, but does the checking once and for all, when the compiler is compiled, rather than every time an object program is compiled.

After a review of closure conversion and the way existential types are used to make it type-safe, I will show how the type safety argument translates into Haskell types. I will argue that the greater precision of de Bruijn indices makes them a better choice than HOAS for this particular phase, and discuss how it affects the implementation.

This is joint work with Stefan Monnier.
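To illustrate the existential-type idea mentioned above in plain Haskell (a hedged sketch, not the GADT-based encoding used in the compiler): a closure packs its code with an environment whose type is hidden, so closures of the same function type are interchangeable regardless of their captured variables.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A converted function: code taking an explicit environment,
-- packed with that environment; the environment type is abstract.
data Closure a b = forall env. Closure (env -> a -> b) env

apply :: Closure a b -> a -> b
apply (Closure code env) x = code env x

-- Converting (\x -> x + n): the free variable n moves into the environment.
addN :: Int -> Closure Int Int
addN n = Closure (\env x -> x + env) n
```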

Summer 2007 Weekly Seminar

26 Jul 2007

Renaud Germain: "Implementing Functional Programming with Hoas and Explicit Substitution"

In this talk I will give an overview of what I have done so far in my implementation of the language proposed by Brigitte Pientka in her paper.

12 Jul 2007

Florent Pompigne: "An Overview of the Focusing Issues in the Intuitionistic Inverse Method"

In this talk I will present some theoretical issues raised in the implementation of a forward theorem proving strategy. I will start by giving a basic forward sequent calculus on hereditary Harrop formulas, and show what the advantages of the forward method are. Such a system can be improved by introducing focusing, which lets us avoid some non-determinism in the proof search. By statically considering focusing on a particular program clause, we can also compute big-step rules which are sound, complete and more efficient during proof search. An idea developed by Kaustuv Chaudhuri for linear logic, where there are two different foci, is to choose a focusing bias for each atomic proposition. A clever choice of bias lets us compute more efficient big-step rules. This choice, which comes from intuitive knowledge about the behaviour of the search, could perhaps be formalized through techniques like mode checking or magic sets.

5 Jul 2007

Samuli Heilala: "Bidirectional Decision Procedures for the Intuitionistic Propositional Modal Logic IS4"

I will discuss a multi-context focused sequent calculus whose derivations are in bijective correspondence with normal natural deductions in the propositional fragment of the intuitionistic modal logic IS4. This calculus, suitable for the enumeration of normal proofs, is the starting point for the development of a sequent calculus-based bidirectional decision procedure for propositional IS4. In this system, relevant derived inference rules are constructed in a forward direction prior to proof search, while derivations constructed using these derived rules are searched over in a backward direction. I will also present a variant which searches directly over normal natural deductions. Experimental results show that on most problems, the bidirectional prover is competitive with both conventional backward provers using loop-detection and inverse method provers, significantly outperforming them in a number of cases.

This is a shortened, high-level version of my talk from the 8 March 2007 seminar.

7 Jun 2007

Christian Skalka: "Type Safe Dynamic Linking for JVM Access Control"

The Java JDK security model provides an access control mechanism for the JVM based on dynamic stack inspection. A number of previous results have shown how stack inspection can be enforced at compile time via whole-program type analysis, but features of the JVM present significant remaining technical challenges. For instance, dynamic dispatch at the bytecode level requires special consideration to ensure flexibility in typing. Even more problematic are dynamic class loading and linking, which disallow a purely static analysis in principle, even though the intended applications of the JDK exploit these features. This presentation describes an extension to existing bytecode verification that enforces stack inspection at link time, without imposing new restrictions on the JVM class loading and linking mechanism; in particular, the analysis does not force earlier loading or linking of classes. Our solution is more flexible than existing type-based approaches, and establishes a formal type safety result for bytecode-level access control in the presence of dynamic class linking.

31 May 2007

Brigitte Pientka: "Functional programming with HOAS and Explicit Substitutions"

24 May 2007

Maja Frydrychowicz: "Applying Formal Logic to the Study of Authorization"

Authorization refers to restrictions on the actions of authenticated users in a secure system. This talk will provide an overview of the principles and challenges of effective authorization, as well as an introduction to methods for reasoning about authorization with formal logic. We will examine two logic-based approaches, one classical and one constructive, to illustrate several key concepts that arise in the study of this topic.

17 May 2007

Peter Thiemann: "Interface Types for Haskell"

Interface types are a very useful concept in object-oriented programming languages. They can enable a clean programming style which relies only on interfaces without revealing their implementation.

Haskell's type classes provide a closely related facility for stating an interface separately from its implementation. However, there is no simple mechanism to hide the identity of the implementation type of an interface as is often desired for constructing libraries. This work provides such a mechanism through the integration of lightweight existential datatypes into Haskell. A simple source-to-source transformation enables the use of interface types with subtyping in Haskell.
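A hedged sketch of the kind of encoding the paper's transformation targets (hand-written here with a standard GHC extension, not the paper's surface syntax): a type class plays the role of the interface, and an existential wrapper hides the implementation type.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- The type class is the interface.
class Shape a where
  area :: a -> Double

data Circle = Circle Double
instance Shape Circle where
  area (Circle r) = pi * r * r

-- An interface type: some shape, implementation type hidden.
data AnyShape = forall a. Shape a => AnyShape a

totalArea :: [AnyShape] -> Double
totalArea xs = sum [ area s | AnyShape s <- xs ]
```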

Winter 2007 Weekly Seminar

26 Apr 2007

Ian Clement: "An Introduction to Description Logics"

A Description Logic is a formalism for knowledge representation used to represent ontologies. The purpose of the talk is to introduce a basic Description Logic language, ALC, and to outline the different types of reasoning done in DL-based systems. We will also give the relationship to first-order logic and the multimodal logic Km.
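For orientation (the usual grammar of ALC concepts, as found in standard references): concepts are built from atomic concepts A and roles R by

```latex
C, D ::= A \mid \top \mid \bot \mid \lnot C \mid C \sqcap D \mid C \sqcup D
         \mid \forall R.C \mid \exists R.C
```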

20 Apr 2007

Amy Felty: "Higher-Order Abstract Syntax, de Bruijn Indices, and Two-Level Reasoning in Coq and Isabelle"

We present a mechanism for reasoning using higher-order syntax in theorem proving systems such as Coq and Isabelle. Our approach is inspired by the Hybrid tool, originally implemented in Isabelle, and by two-level reasoning systems such as McDowell and Miller's FOλΔN logic.

Higher-order syntax of the object logics we wish to reason about often cannot be defined using inductive types of systems such as Isabelle and Coq. Using the Hybrid approach, we solve this problem by defining a de Bruijn representation of terms at the base level, and then defining higher-order syntax encodings on top of the base level so that terms encoded this way expand to de Bruijn terms. We provide a library of operations and lemmas to reason on the higher-order syntax, hiding the details of the de Bruijn representation.
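A loose Haskell analogue of this two-level idea (a parametric-HOAS sketch, not the Hybrid library's actual Isabelle/Coq construction): a higher-order surface syntax, whose binders are meta-level functions, elaborates to a first-order de Bruijn representation underneath.

```haskell
-- Base level: de Bruijn terms.
data Db = DbVar Int | DbApp Db Db | DbLam Db
  deriving Show

-- Surface level: binders are represented by Haskell functions.
data H v = HVar v | HApp (H v) (H v) | HLam (v -> H v)

-- Expand the higher-order syntax into de Bruijn terms by tracking the
-- binding depth: each variable carries its binder's level.
toDb :: H Int -> Db
toDb = go 0
  where
    go d (HVar lvl) = DbVar (d - 1 - lvl)
    go d (HApp a b) = DbApp (go d a) (go d b)
    go d (HLam f)   = DbLam (go (d + 1) (f d))

-- toDb (HLam (\x -> HLam (\y -> HApp (HVar x) (HVar y))))
--   ==> DbLam (DbLam (DbApp (DbVar 1) (DbVar 0)))
```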

Judgments of object logics are often most elegantly specified using hypothetical judgments, which also typically cannot be encoded as inductive types. The two-level approach solves this problem by defining an intermediate layer, which encodes a "specification logic" which can be defined inductively. Object-level judgments are then encoded in the specification logic. We prove properties of the specification layer which can be directly applied to reasoning at the object-level. We discuss examples of both specification and object logics.

This is joint work with Venanzio Capretta, Alan Martin, and Alberto Momigliano. The talk will be a sequel to the one I gave in January on Hybrid.

12 Apr 2007

Dustin Wehr: "Logical Frameworks with Refinement Types"

Logical frameworks provide a meta-language for specifying and implementing formal systems given via axioms and inference rules, together with proofs about them. One of its applications is, for example, the Logosphere project, whose aim is to formalize, organize, and retrieve mathematical knowledge across different theorem proving platforms.

Experiments with this and other applications also highlight some of the limitations in its expressive power. One such weakness is the absence of subtyping, which may impose a significant burden on the programmer. This project investigates the combination of logical frameworks, a dependently typed lambda-calculus, with a simple form of subtyping called refinement types. This will involve two steps: first, gathering examples which demonstrate the use of refinement types in such a setting, and second, developing the theoretical foundation which allows us to extend the logical framework LF with refinement types while preserving canonical forms and decidability of type-checking.

This talk provides an introduction to the problem and outlines a possible solution.

05 Apr 2007

Brigitte Pientka: "Functional Programming with HOAS"

I will be giving a talk on functional programming with HOAS, with an emphasis on what programs we can write. This is an updated version of the talk I gave last summer at PLPV (Programming Languages meets Program Verification).

28 Mar 2007

Deepak Kapur: "Automating Induction: Decidable Subclasses and Prediction of Failure"

Inductive reasoning is critical for ensuring reliability of computational descriptions, especially of algorithms defined on recursive data structures. A brief review of decidable subclasses of inductive conjectures will be given. The concepts of theory-based recursive definitions and compatibility among definitions will be presented. A framework for predicting a priori failure of proof attempts by induction is proposed. Failure analysis is shown to provide information that can be used to make those proof attempts succeed for valid conjectures. A method for speculating intermediate lemmas in such cases is discussed. The analysis can be automated and is illustrated on several examples.

22 Mar 2007

Stefan Monnier: "The Ultimate Cast?"

Recent type systems allow the programmer to use types that describe more precisely the invariants on which the program relies. But in order to satisfy the type system, it often becomes necessary to help the type checker with extra annotations that justify why a piece of code is indeed well-formed. One important tool used for that purpose is coercions, or casts. We show a coercion that can apply, open, pack, abstract, analyze, or do any combination thereof. And all that, of course, for the price of a single coercion, which costs absolutely nothing at run-time.

15 Mar 2007

David Xi Li: "Forward Logic Programming in Twelf"

Logical frameworks allow us to specify formal systems and prove properties about them. One interesting logical framework is Twelf, a language that uses higher-order abstract syntax to encode object languages into the meta-language. Currently, uniform proofs have been used to describe proof search in the backward logic programming style. However, a backward system has certain limitations; for example, loop-detection mechanisms are required for even some of the simplest problems to yield a solution. As a consequence, the search for a more effective proof search algorithm continues, and a forward system is proposed. This talk will discuss a focused inverse method prover for Twelf and present some preliminary results.

8 Mar 2007

Samuli Heilala: "Bidirectional Decision Procedures for the Intuitionistic Propositional Modal Logic IS4"

The intuitionistic modal logic IS4 is a constructive logic incorporating operators of necessity and possibility. I will first present a multi-context sequent calculus whose derivations are in bijective correspondence with normal natural deductions in the propositional fragment of IS4. This calculus, suitable for proof enumeration, is the starting point for the development of a sequent calculus-based bidirectional decision procedure for propositional IS4. In this system, big-step inference rules are constructed in a forward direction, but searched over in a backward direction. I will also present a variant which searches directly over normal natural deductions. Experimental results show that on most problems, the bidirectional prover is competitive with conventional backward provers using loop-detection and inverse method provers, significantly outperforming them in a number of cases.

This is joint work with Brigitte Pientka.

8 Feb 2007

David Haguenauer: "A Mainstream Language with Provisions for Proofs"

In this talk, I will present my project of creating a compiler for a programming language that will allow the programmer to add type annotations expressed in a variant of the calculus of inductive constructions, the calculus underlying Coq.

This presentation will include examples of possible uses of the system, as well as some background on related systems.

This is under the supervision of Stefan Monnier.

1 Feb 2007

Louis-Julien Guillemette: "Type-Safe Compilation in Haskell"

I will be presenting my project of constructing a compiler that uses typeful program representations throughout every transformation step in order to guarantee type safety. It aims to provide the same guarantees as a conventional compiler with typed intermediate languages, but does the checking once and for all, when the compiler is compiled, rather than every time an object program is compiled.

In this talk I will first give an overview of my compiler. I will review the HOAS+GADTs representation that I use and its associated iterator, and I will illustrate its use with a few examples. I will also address more specific topics, including an extension of the basic iteration scheme to side-effecting traversals using monads, as well as some issues relating to closure conversion.

This is joint work with Stefan Monnier.

25 Jan 2007

Amy Felty: "Combining de Bruijn Indices and Higher-Order Abstract Syntax in Coq"

We present a mechanism for reasoning using higher-order syntax in the Coq Proof Development System. Our approach is inspired by the Hybrid tool in Isabelle HOL. At the base level, we define a de Bruijn representation of terms with basic operations and a reasoning framework. We define higher-order syntax encodings on top of the base level so that they expand to de Bruijn terms. We provide a library of operations and lemmas to reason on the higher-order syntax, hiding the details of the de Bruijn representation.

I will present two versions of Coq-Hybrid that I have been working on. One is classical and follows the original Isabelle-Hybrid implementation closely. The other takes advantage of Coq's constructive logic by formulating many definitions as Coq programs. I'll give some preliminary thoughts on the comparison of the two approaches.

This is joint work with Venanzio Capretta, Alan Martin, and Alberto Momigliano.

18 Jan 2007

Brigitte Pientka: "Functional Programming with HOAS"

A major complication in programming with higher-order abstract syntax is that recursion over such encodings requires one to traverse a lambda-abstraction and hence necessitates reasoning about "open" objects. I present a novel approach based on contextual modal types which overcomes these difficulties and allows recursion over open terms. In addition, it also supports first-class substitutions and contexts.

I will present some examples to motivate the approach and highlight the differences with existing approaches, and then present a bi-directional type system together with progress and preservation proofs. Finally, if time permits, I will talk about some open problems.