Work (study)

Right now I work for SIFT doing computer science R&D, mostly AI-related. In the past I’ve worked on the faculty of two universities, South Australia and DePaul; studied at Tulane, Karlsruhe and (for a year) Glasgow; and worked at other private companies. This page archives some of my work over the years.

Research work

Below are the research themes I’ve explored. Each abstract has links to a selected publication or two, and a complete list of publications is also available.

  1. Plan recognition (2006-present). If we have a plan library describing how tasks are decomposed into actions, then given observations of actions we can form hypotheses about the high-level intention(s) of the observed actor(s). I’ve worked on plan recognition for the Integrated Learning, Self-Reconfiguring Systems and Deep Green programs; a toy sketch of the basic idea follows the references below.

    • This ICAPS 2008 paper with Chris Geib and Robert Goldman is probably the most technical, and describes the underlying algorithm in good detail.
    • There’s a paper about the project for Self-Reconfiguring Systems from HICS’09, with many co-authors but mostly by Tom Haigh and Steve Harp.
    • Robert, Chris and I have a workshop paper at PAIR’09 – to be posted.
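
    The algorithms in these papers are well beyond a summary here, but the following Lisp fragment illustrates the basic shape of the problem. The plan library, the action names and the naive prefix-matching are all invented for this sketch; the real algorithms address much more, including ranking competing hypotheses.

      ;; A toy plan library: each goal maps to the sequence of
      ;; actions that carries it out.  (Invented data.)
      (defparameter *plan-library*
        '((make-tea   (boil-water steep-tea pour))
          (make-pasta (boil-water add-pasta drain))))

      (defun consistent-goals (observations)
        "Return the goals whose plans begin with the observed actions."
        (loop for (goal steps) in *plan-library*
              when (and (<= (length observations) (length steps))
                        (equal observations
                               (subseq steps 0 (length observations))))
                collect goal))

      ;; (consistent-goals '(boil-water))           => (MAKE-TEA MAKE-PASTA)
      ;; (consistent-goals '(boil-water add-pasta)) => (MAKE-PASTA)
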
  2. On Semantic Web languages (2006-2010). As part of DARPA’s Integrated Learning project, I implemented an interpreter for LTML, a planning language focused on automated learning of web service usage. Three papers describe this work.

  3. On design patterns (1997). Design patterns allow common programming idioms to be reused from implementation to implementation. Chris Salzmann, at the time a master’s student at Karlsruhe, and I explored the use of design patterns to design self-extendible systems. Chris designed and implemented a prototype in the Pizza programming language; for years this page said he’d release the source code once he had a chance to clean it up, but at this point it’s safe to assume that’s not going to happen.

    Our experience is summarized in this conference paper (with Salzmann).

  4. The lambda calculus and minimal intuitionistic linear logic (1996). The motivating idea of this work was that one can better compare reduction strategies for the lambda calculus by mapping each of them, in its own way, into a single third system. The standard example from the distant past is Plotkin’s two continuation-passing transforms, for call-by-name and call-by-value reduction, recalled below. After the appropriate transformation, each reduction order can simulate the other; in fact, the order is no longer relevant.

    There is a journal paper (with Odersky, Wadler and Turner) describing the translation of call-by-name and call-by-value into a linear lambda calculus, and call-by-need into an affine calculus; and a conference paper describing how all three calling conventions can be mapped into a single system with separate modes for weakening and contraction. My doctoral thesis is also about this sort of thing.
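
    For reference, here is one standard presentation of Plotkin’s two transforms (notation varies from author to author):

    \[
    \begin{array}{l@{\quad}l}
    \text{call-by-value:}
      & [\![x]\!] = \lambda k.\,k\,x \\
      & [\![\lambda x.M]\!] = \lambda k.\,k\,(\lambda x.[\![M]\!]) \\
      & [\![M\,N]\!] = \lambda k.\,[\![M]\!]\,(\lambda m.\,[\![N]\!]\,(\lambda n.\,m\,n\,k)) \\[4pt]
    \text{call-by-name:}
      & [\![x]\!] = x \\
      & [\![\lambda x.M]\!] = \lambda k.\,k\,(\lambda x.[\![M]\!]) \\
      & [\![M\,N]\!] = \lambda k.\,[\![M]\!]\,(\lambda m.\,m\,[\![N]\!]\,k)
    \end{array}
    \]

    Under call-by-value the argument is evaluated before being passed (the inner continuation binds a value n); under call-by-name the unevaluated computation [[N]] is passed directly.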

  5. The call-by-need lambda calculus (1994). The intuitive idea of the call-by-need lambda calculus is to capture both the property of the call-by-name calculus that only needed terms are reduced and the call-by-value behavior that only values are copied; the axioms are sketched below. Moreover, call-by-need operates only on terms, without requiring a separate heap of bindings, which can make reasoning with it simpler.

    There is a journal paper (with Odersky and Wadler).
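
    Roughly, the axioms of the calculus from that paper (V ranges over values, i.e. abstractions, and C[ ] over contexts; see the paper for the precise side conditions):

    \[
    \begin{array}{ll}
    (I) & (\lambda x.M)\,N \to \mathrm{let}\;x=N\;\mathrm{in}\;M \\
    (V) & \mathrm{let}\;x=V\;\mathrm{in}\;C[x] \to \mathrm{let}\;x=V\;\mathrm{in}\;C[V] \\
    (C) & (\mathrm{let}\;x=L\;\mathrm{in}\;M)\,N \to \mathrm{let}\;x=L\;\mathrm{in}\;M\,N \\
    (A) & \mathrm{let}\;y=(\mathrm{let}\;x=L\;\mathrm{in}\;M)\;\mathrm{in}\;N \to \mathrm{let}\;x=L\;\mathrm{in}\;\mathrm{let}\;y=M\;\mathrm{in}\;N \\
    (G) & \mathrm{let}\;x=M\;\mathrm{in}\;N \to N, \quad x\;\text{not free in}\;N
    \end{array}
    \]

    Rule (I) suspends the argument instead of substituting it, and rule (V) copies only values – exactly the combination of call-by-name and call-by-value behavior described above.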

  6. On classical linear logic and its lambda calculus (1999). The main idea is a natural deduction system and lambda calculus for full, classical linear logic. (There are also decorations of the Schellinx translations from the usual lambda calculi into this system, corresponding to call-by-name and by-value, although it turned out that Ichiro Ogata had done more-or-less the same sort of thing, looking at LKQ and LKT, a few months before.)

    This work is described in Technical Report ACRC/00/014 from the Advanced Computing Research Centre of the University of South Australia (ask the School’s secretary to mail you a hard copy).

  7. On compiling functional-logic programming languages (1994). Implementations of logic languages, generally speaking, include some computational step – usually a form of backtracking – which allows earlier decisions about how variables are bound to be reconsidered. Functional logic languages – again, generally speaking – augment the reduction semantics of functional languages with the backtracking mechanism of logic languages. This work describes an implementation technique called quasisharing which allows information computed in one branch of a search to be reused when relevant after backtracking into a new branch; a small illustration follows the references below. The technique resembles Lamping’s strategy for optimal lambda reduction.

    There is a conference paper focusing on the design and the graph calculus, and a workshop paper that discusses the abstract machine (both with Silbermann). My master’s thesis is also about this topic.
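
    Quasisharing itself is too involved to reproduce here; the fragment below is plain memoization, not the technique from the papers, but it shows the flavor of the goal: results computed while exploring one branch remain available after backtracking into another. (The stand-in reducer is invented for the sketch.)

      ;; A cache of reduction results that survives backtracking.
      (defparameter *reductions* (make-hash-table :test #'equal))

      (defun simplify (term)
        "Stand-in for a real reduction step; just returns TERM."
        term)

      (defun reduce-term (term)
        "Reduce TERM, reusing any result cached by an earlier branch."
        (or (gethash term *reductions*)
            (setf (gethash term *reductions*) (simplify term))))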

There is also a BibTeX file with entries for the above papers, their abstracts, and various earlier papers on these topics which they supersede. Just about everything here is also mirrored on the Hypatia archive, based in England, which may be useful to know if this site is giving you trouble. Another nice site for research in CS is the NEC research index, which is a citation index but which also seems to have some pointers to papers (my entry there).

Code

Lisp code. I’m the primary author of the NST unit test framework for Common Lisp. Unlike most of the other projects in this section, this one is live! A small example follows the links below.

  • There’s a paper about NST from the 2010 International Lisp Conference.
  • The slides from the associated talk are more focused than the paper, concentrating on the choice of abstractions which a Lisp unit test framework should provide.
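
Here is roughly what the basic forms look like – a minimal sketch from the paper-era API, so check the current manual in case details have drifted. A test group holds tests; each test pairs a criterion, here (:eql 4), with a form whose value the criterion checks:

    ;; A group containing a single test.  The criterion (:eql 4)
    ;; passes when the form below evaluates to something EQL to 4.
    (nst:def-test-group arithmetic-tests ()
      (nst:def-test plus-works (:eql 4)
        (+ 2 2)))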

As part of NST I’ve begun writing a system, defdoc, that provides support for more structured Lisp documentation – essentially, a replacement for docstrings which generates output in various formats. Currently LaTeX and plain-text output are supported (the latter automatically generates the traditional docstring). Right now defdoc is shipped with NST (I’ll eventually pull it out). It’s not very well documented yet, but there are slides from an ILC2010 lightning talk about it.

LaTeX code. For my thesis I wrote a package which accumulates the page numbers on which each bibliographic entry is cited, and provides an interface command for including each label’s page list in the document. N.B.: there is no interface yet for BibTeX; the code is a TeX/LaTeX package only.
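
The package’s own source isn’t reproduced here, but the general trick looks something like the following fresh sketch (the macro names are hypothetical, not the package’s actual interface): each citation writes a record into the .aux file, and the records accumulate per-key page lists when the .aux is read back on the next run.

    \makeatletter
    % Executed when the .aux file is read back: build up a
    % comma-separated page list for each citation key.  (No
    % de-duplication here; a real package would be more careful.)
    \newcommand\citepage[2]{%
      \@ifundefined{citepages@#1}%
        {\expandafter\gdef\csname citepages@#1\endcsname{#2}}%
        {\expandafter\g@addto@macro\csname citepages@#1\endcsname{, #2}}}
    % A \cite wrapper that also records the current page in the .aux.
    \newcommand\pagecite[1]{%
      \cite{#1}%
      \protected@write\@auxout{}{\string\citepage{#1}{\thepage}}}
    % The interface command: typeset the accumulated list for a key.
    \newcommand\citedpages[1]{%
      \@ifundefined{citepages@#1}{\textbf{??}}{\csname citepages@#1\endcsname}}
    \makeatother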

Elisp modes. Some time ago I wrote a number of Elisp packages.

  • Package extraregional.el separates the notions of point, mark and region from the notion of selection. The idea is that you should be able to select text, usually in a different window, without affecting the selected window, the current buffer or the value of point in the various windows and buffers. Version 0.2 is a rather smaller package than version 0.1.1: I found that the extras were not really all that useful, and were actually pretty fragile. The older version is still available, although it does not work under XEmacs version 19.3.

  • Package rmail-labelsorter.el makes labels a little bit easier to handle in Rmail. The code is usable as it is, but is not documented – this situation probably won’t change, as I no longer use rmail. Caveat emptor.

Back to the main page.