I am generally interested in the design, semantics, and implementation of declarative programming languages, in parallel and distributed computing, and in compiler construction. In the last couple of years, I also had a closer look at operating systems. I am a co-head of the Programming Languages and Systems (PLS) research group.
Functional Programming (FP)
Despite growing up with imperative programming languages, I have become very fond of functional languages, and in particular, of the purely functional language Haskell (it is so nice that there is even a poem about it). The intriguing aspect of functional programming is that it originates from, and is founded on, a rigorous mathematical discipline and yet it is practical. Functional languages are right where theory and practice meet to produce exciting results.
Type Level Programming
One of the most fascinating aspects of modern functional languages is their type systems. The invention of the ML type system and the associated type inference mechanisms had a dramatic impact on language development and programming practice (since Java, type safety has also made major inroads into mainstream object-oriented programming). The advanced type systems explored by the FP community right now will probably influence language design for the next couple of decades. A recent topic in type systems is mechanisms that enable the programmer to specify computations on types that are executed at compile time. This is useful for generic programming, self-optimising libraries, embedding of domain-specific languages, and meta-programming. (In fact, the C++ community is exploring similar applications with template meta-programming, but this has the disadvantages that template definitions cannot be checked before application and that there are many loopholes in the C++ type system.)
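To give a flavour of compile-time computation on types, here is a small sketch in current GHC notation (which postdates the work described here): type-level natural numbers, a type function Add evaluated by the type checker, and length-indexed vectors whose append is checked statically. All names are illustrative, not from any particular library.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

-- Type-level natural numbers: 'Zero and 'Succ live at the type level.
data Nat = Zero | Succ Nat

-- A type function, evaluated by the type checker at compile time.
type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add 'Zero     n = n
  Add ('Succ m) n = 'Succ (Add m n)

-- Vectors indexed by their length; the index is checked statically.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Zero a
  Cons :: a -> Vec n a -> Vec ('Succ n) a

-- The length of the result is computed by the type checker via Add;
-- an append that dropped elements would be rejected at compile time.
append :: Vec m a -> Vec n a -> Vec (Add m n) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

toList :: Vec n a -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs
```

Note how the value-level recursion in append is mirrored by the type-level recursion in Add: the type checker runs the type function while checking the program.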
My recent work includes the study of ad-hoc overloaded type definitions, in the form of type declarations in Haskell type classes. My collaborators and I have written two papers describing associated data types and associated type synonyms, respectively. Both papers are about open type functions and type-indexed types. In a second step, we designed the novel typed intermediate language FC to support a whole new class of source language features that rely on non-syntactic type equality, including GADTs, associated types, functional dependencies, and general type functions. See also type functions in GHC.
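The flavour of associated type synonyms can be sketched with a small collection class (a standard illustrative example, written in today's GHC syntax rather than the notation of the papers):

```haskell
{-# LANGUAGE TypeFamilies, FlexibleInstances #-}

-- An associated type synonym: each collection type chooses its own
-- element type, and the checker computes Elem c by type-level rewriting.
class Collection c where
  type Elem c
  empty  :: c
  insert :: Elem c -> c -> c
  toList :: c -> [Elem c]

-- For lists, the type function application Elem [a] rewrites to a.
instance Collection [a] where
  type Elem [a] = a
  empty  = []
  insert = (:)
  toList = id
```

The point is that Elem is an open type function: every new instance adds a new equation, which is exactly the non-syntactic type equality that FC was designed to support.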
Foreign Function Interfacing
A while ago, I put substantial work into foreign language interfacing for Haskell. In particular, I am the editor of a standards document in the form of an addendum to the Haskell 98 standard that defines a standard foreign function interface (FFI). This addendum was joint work with other Haskell hackers from the Haskell FFI mailing list and forms the basis of the FFI for the three actively maintained Haskell systems GHC, Hugs, and nhc98.
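A minimal example of what the FFI addendum standardises: a foreign import declaration names a C entity, fixes the calling convention, and gives it a Haskell type according to the addendum's marshalling rules. Here we bind C's sqrt from the standard maths library.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

-- Bind the C function sqrt under the ccall calling convention;
-- Double marshals directly to C's double.
foreign import ccall unsafe "math.h sqrt"
  c_sqrt :: Double -> Double

main :: IO ()
main = print (c_sqrt 2)
```

Because sqrt is a pure function without observable side effects, it may be given a non-IO type; impure C functions must be imported at an IO type instead.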
Past Work: Functional Logic Programming
For quite a while, I worked on the integration of features from logic (relational) programming languages into functional languages like Haskell. Much of that work has been in the context of the language Distributed Haskell (aka Goffin), which extends Haskell with a co-ordination sublanguage that is based on ideas from concurrent constraint programming. However, I reached the conclusion that it is more practical to stay within the framework provided by purely functional languages, even if this sometimes requires compromises on a conceptual level.
High Performance Computing
High-performance computing, and especially parallel computing, has been going through an interesting historical development. Most of the traditional high-performance computing vendors died a while ago and their products have been replaced by workstation clusters. This made parallel computing affordable to a wider audience, but it also made parallel programming harder and ultimately less attractive, as the communication hardware of clusters tends to have fairly high latencies.
However, the semiconductor industry has finally hit the limit of the von Neumann model, or more accurately, the limit of pretending to application software that CPUs are based on the von Neumann model. Instead, Intel is pushing many-core architectures, and general-purpose computing on GPUs is becoming increasingly attractive.
Nested Data Parallelism
In high-performance computing, my main interest is in data parallel programming, and more specifically, in nested data parallelism (NDP). Nested data parallelism supports computations on irregular structures that are tedious to impossible to express in a purely flat data-parallel model; hence, people tend to use control parallel code for the corresponding applications. Gabriele Keller and I have been working on the integration of nested data parallelism into standard functional languages for a while. In particular, we proposed an integration of NDP into Haskell and developed an extended flattening transformation that works in the context of general functional languages, an array fusion framework that is needed for an efficient implementation, and a notion of distributed types to guide transformational parallelisation. For a while now, we have been collaborating with Roman Lechtchinsky, who has been pivotal in developing a full higher-order version of the flattening transformation.
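The core idea behind flattening can be sketched with ordinary lists: a nested array becomes a flat data array plus a segment descriptor, and a nested operation becomes a segmented operation over the flat data. This is a toy model with invented names; the real implementation works on unboxed arrays and relies on fusion for efficiency.

```haskell
-- Toy flattened representation: segment lengths plus flat data.
data Nested a = Nested [Int] [a]
  deriving Show

-- Flattening: record the length of each subarray, concatenate the data.
flatten :: [[a]] -> Nested a
flatten xss = Nested (map length xss) (concat xss)

-- A nested sum becomes a segmented sum: the flat data can be traversed
-- by a single data-parallel operation, however irregular the nesting.
segmentedSum :: Num a => Nested a -> [a]
segmentedSum (Nested lens xs) = go lens xs
  where
    go []     _  = []
    go (n:ns) ys = let (seg, rest) = splitAt n ys
                   in  sum seg : go ns rest
```

The payoff is load balance: even when the subarrays have wildly different lengths, the work lives in one flat array that can be split evenly across processors.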
In collaboration with Simon Peyton Jones and Simon Marlow, from Microsoft Research Cambridge, we are now working on a full implementation of nested data parallelism in Haskell targeting multi-core CPUs. This project runs under the name Data Parallel Haskell.
This project has received funding from the Australian Research Council under the title Portable High-Performance Computing Based on Flattening and Fusion.
Simulating Polymerisation Processes
Together with the Centre for Advanced Macromolecular Design (CAMD), our research group is working on the computer simulation of polymerisation. We have designed some new data structures for extremely space-efficient and fast simulations using the Monte Carlo method (we can run realistic systems entirely out of the second level cache of modern PCs). Currently, we are re-implementing our simulator using a novel approach based on generative programming and dynamic code loading.
This project currently receives internal funding from The University of New South Wales.
Safe Execution of Untrusted Code & Portable Code
The Sec project investigates the safe execution of untrusted code, portable certified code, and runtime code generation and optimisation for multi-language component software. Personally, I don't believe in Java's one-language-to-rule-them-all philosophy, which is why I think multi-language systems that support a wide range of programming languages are an important development. An exciting aspect of Sec is its amalgamation of language-based techniques with those from operating systems.
So far, we have developed a portable, certified object code, called SecCode, based on ANF (a variant of the lambda calculus) and a language of dependent types (namely, LF) to encode certificates. We have studied the formal properties of SecCode and developed a prototype implementation that generates SecCode from a simple functional language, checks certificates (by using the Twelf proof checker), and executes SecCode programs. We have also developed a typed assembly language that includes a notion of protection domains (in the OS sense) as well as a binary rewriter that secures untrusted binaries (on the Alpha architecture). In addition, we have worked on optimising programs in ANF and on runtime code optimisations.
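To illustrate why ANF is a convenient compiler representation: every intermediate result is bound to a name by a let, so evaluation order is explicit in the term and there are no nested compound expressions. The following is a minimal sketch with invented constructor names, not the actual SecCode syntax.

```haskell
import qualified Data.Map as Map

-- Atoms are trivially evaluable: variables and literals.
data Atom = AVar String | ALit Int

-- In ANF, compound operations appear only on the right-hand side of
-- a let binding; arguments are always atoms.
data Expr
  = Ret Atom                      -- return the value of an atom
  | LetAdd String Atom Atom Expr  -- let x = a + b in e

type Env = Map.Map String Int

atomVal :: Env -> Atom -> Int
atomVal env (AVar v) = env Map.! v
atomVal _   (ALit n) = n

-- Evaluation simply follows the chain of let bindings in order:
-- the syntax tree itself fixes the order of operations.
eval :: Env -> Expr -> Int
eval env (Ret a)          = atomVal env a
eval env (LetAdd x a b e) =
  eval (Map.insert x (atomVal env a + atomVal env b) env) e
```

This explicit sequencing is what makes ANF attractive for both optimisation and certification: each step of the computation has a name that a certificate can refer to.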
|• Copyright 2005 Manuel M T Chakravarty • Last modified: Fri Feb 23 11:50:25 EST 2007 •|