Papers and code

This year I’m working part-time, and raising my young child the other part of the time. Until recently I worked for SIFT doing computer science R&D, mostly AI-related. Previously I served on the faculties of two universities, South Australia and DePaul, worked at other private companies, and studied at Tulane, Karlsruhe and (for a year) Glasgow. This page archives some of my work over the years.

Research work

  1. On plan recognition (2006-present). If we have a plan library describing how tasks are decomposed into actions, then given observations of actions we can form hypotheses about the high-level intention(s) of the observed actor(s); a toy sketch of this setting appears after this list. I’ve worked on plan recognition for the Integrated Learning, Self-Reconfiguring Systems and Deep Green programs.
  2. On qualitative probability (2013-2015). Z+ is a well-studied system of qualitative probability, concerned with (a formalization of) levels of surprise rather than exact numeric values. We used Z+ in the intrusion detection system (IDS) MIFD, as in MIFD’s predecessor Scyllarus, to assess the likelihood of various cyber attack events based on reports from IDSs. We experimentally analyzed the accuracy of Z+ as its qualitative approximation becomes less and less representative of the true distributions. (A toy illustration of the ranking arithmetic appears after this list.) The interest of these results is not limited to intrusion detection: the method our systems use is a general abductive scheme based on qualitative Bayes networks, so the results apply to other information fusion and diagnostic applications. We were surprised to find that ours is the only experimental investigation of the accuracy of Z+ as an approximation of conventional probability.
  3. On hardware verification (2011-2012). As part of our work on DARPA’s META program for cyber-physical systems, we experimented with culprit identification in probabilistic verification: determining which design components contributed to a failure.
  4. On Semantic Web languages (2006-2010). As part of DARPA’s Integrated Learning project, I implemented an interpreter for LTML, a planning language focused on automated learning for web service usage. Three papers describe this work.
  5. On design patterns (1997). Design patterns allow common programming idioms to be reused from implementation to implementation. With Chris Salzmann, at the time a master’s student at Karlsruhe, I explored the use of design patterns to design self-extendible systems, in particular an extensible editor. Chris designed and implemented a prototype in the Pizza programming language; for years this page said he’d release the source code after he got a chance to clean it up, but at this point it’s safe to assume that’s not going to happen.
  6. The lambda calculus and minimal intuitionistic linear logic (1996). The motivating idea of this work was that one can better compare different ways of reducing terms in the lambda calculus by mapping each reduction strategy, in its own way, into a single third system. The standard example from the distant past is Plotkin’s two continuation-passing transforms for call-by-name and call-by-value reduction; a small sketch of these appears after this list. After the appropriate transformation, each reduction order can simulate the other; in fact, the order is no longer relevant.
    • There is a journal paper (with Odersky, Wadler and Turner) describing the translation of call-by-name and call-by-value into a linear lambda calculus, and call-by-need into an affine calculus; and a conference paper describing how all three calling conventions can be mapped into a single system with separate modes for weakening and contraction.
    • My doctoral thesis is also about this sort of thing.
  7. The call-by-need lambda calculus (1994). The intuitive idea of the call-by-need lambda calculus is to capture both the call-by-name property that only needed terms are reduced and the call-by-value property that only values are copied; a sketch of the operational intuition appears after this list. Moreover, call-by-need operates only on terms, without requiring a separate heap of bindings, which can make reasoning with it simpler.
  8. On classical linear logic and its lambda calculus (1999). The main idea is a natural deduction system and lambda calculus for full, classical linear logic. (There are also decorations of the Schellinx translations from the usual lambda calculi into this system, corresponding to call-by-name and by-value, although it turned out that Ichiro Ogata had done more-or-less the same sort of thing, looking at LKQ and LKT, a few months before.)
    • At some point after putting this work aside – perhaps in moving back from Australia, or transferring data between computers, or after a disk failure – I lost track of its files. But it is described in Technical Report ACRC/00/014 from the Advanced Computing Research Centre of the University of South Australia (ask the School’s secretary to mail you a hard copy).
  9. On compiling functional-logic programming languages (1994). Implementations of logic languages, generally speaking, have some computational step, usually a form of backtracking, that allows earlier decisions about how variables are bound to be reconsidered. Functional-logic languages – again, generally speaking – augment the reduction semantics of functional languages with the backtracking mechanism of logic languages. This work describes an implementation technique called quasisharing, which allows information computed in one branch of a search to be reused, when relevant, after backtracking into a new branch. The technique resembles Lamping’s strategy for optimal lambda reduction.
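
A toy illustration of the plan-recognition setting from item 1, with an invented plan library and nothing taken from the actual systems: each goal maps to an action sequence, and a goal stays in play as a hypothesis while the observed actions form a prefix of one of its plans. This is a sketch of the problem statement, not of the recognizers built for those programs.

    ;; An invented plan library: each entry is (goal action-sequence).
    (defparameter *plan-library*
      '((make-tea   (boil-water get-cup add-teabag pour))
        (make-pasta (boil-water add-pasta drain))
        (wash-up    (get-cup rinse dry))))

    (defun prefix-p (prefix sequence)
      "True when PREFIX is an initial segment of SEQUENCE."
      (or (null prefix)
          (and sequence
               (eq (first prefix) (first sequence))
               (prefix-p (rest prefix) (rest sequence)))))

    (defun candidate-goals (observations library)
      "Goals with a plan that begins with the observed actions."
      (loop for (goal plan) in library
            when (prefix-p observations plan)
              collect goal))

    ;; Observing only BOIL-WATER leaves two hypotheses in play:
    (candidate-goals '(boil-water) *plan-library*)   ; => (MAKE-TEA MAKE-PASTA)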
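
For item 2, a minimal sketch of the flavor of Z+-style qualitative reasoning, with event names and ranks made up for the example: surprise is a non-negative integer (0 means unsurprising), the ranks of independent pieces of evidence combine by addition, and competing explanations are compared by taking the minimum total rank. None of this is the MIFD or Scyllarus machinery itself.

    (defun explanation-rank (ranks)
      "Total surprise of one explanation: the sum of its conditional ranks."
      (reduce #'+ ranks :initial-value 0))

    (defun best-explanation (hypotheses)
      "The least-surprising hypothesis: minimal total rank wins (ties broken arbitrarily)."
      (first (sort (copy-list hypotheses) #'<
                   :key (lambda (h) (explanation-rank (rest h))))))

    ;; Hypothetical hypotheses, each pairing a label with the ranks of the
    ;; sensor reports it must account for:
    (best-explanation
     '((port-scan     1 0)       ; total rank 1
       (false-alarm   0 3)       ; total rank 3
       (worm-outbreak 2 2)))     ; total rank 4
    ;; => (PORT-SCAN 1 0)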
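
For item 6, a sketch of Plotkin’s two continuation-passing transforms over a tiny s-expression term language: a symbol is a variable, (lambda (x) body) an abstraction, and a two-element list an application. The transforms produce terms in the same object language (the results are not directly runnable Lisp); the point is only that both strategies land in one common target, differing in how the operand of an application is treated.

    (defun cps-cbv (term)
      "Plotkin-style call-by-value CPS transform (sketch)."
      (cond ((symbolp term)                     ; a variable is already a value
             (let ((k (gensym "K")))
               `(lambda (,k) (,k ,term))))
            ((eq (first term) 'lambda)          ; hand the translated function to k
             (let ((k (gensym "K")))
               `(lambda (,k)
                  (,k (lambda ,(second term) ,(cps-cbv (third term)))))))
            (t                                  ; evaluate operator, then operand, then apply
             (let ((k (gensym "K")) (m (gensym "M")) (n (gensym "N")))
               `(lambda (,k)
                  (,(cps-cbv (first term))
                   (lambda (,m)
                     (,(cps-cbv (second term))
                      (lambda (,n) ((,m ,n) ,k))))))))))

    (defun cps-cbn (term)
      "Plotkin-style call-by-name CPS transform (sketch)."
      (cond ((symbolp term) term)               ; a variable names a suspended computation
            ((eq (first term) 'lambda)
             (let ((k (gensym "K")))
               `(lambda (,k)
                  (,k (lambda ,(second term) ,(cps-cbn (third term)))))))
            (t                                  ; the operand is passed as a suspension
             (let ((k (gensym "K")) (m (gensym "M")))
               `(lambda (,k)
                  (,(cps-cbn (first term))
                   (lambda (,m)
                     ((,m ,(cps-cbn (second term))) ,k))))))))

    ;; The two translations of (f a) differ only in whether A is evaluated
    ;; to a value before being handed to F:
    (cps-cbv '(f a))
    (cps-cbn '(f a))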
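
For item 7, a sketch of the operational intuition only (the calculus itself works directly on terms, with no store): DELAY, a macro name of my own invention here, builds a memoized suspension, and FORCE evaluates it at most once, so an argument is reduced only if it is needed and is never reduced twice.

    (defmacro delay (expr)
      "Suspend EXPR; the suspension remembers its value after the first FORCE."
      (let ((done (gensym)) (value (gensym)))
        `(let ((,done nil) (,value nil))
           (lambda ()
             (unless ,done
               (setf ,value ,expr
                     ,done t))
             ,value))))

    (defun force (thunk)
      (funcall thunk))

    ;; The shared argument is reduced once, even though it is used twice;
    ;; an argument that is never forced would never be reduced at all.
    (let ((arg (delay (progn (format t "reducing the argument~%") (* 6 7)))))
      (+ (force arg) (force arg)))          ; prints the message once, returns 84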

There is also a BibTeX file with entries for (most of) the above papers, their abstracts, and various superseded earlier papers on those topics. My Google Scholar page is probably more comprehensive than this list when it comes to minor papers, workshop papers and the like.

Code

  1. I’m the primary author of the NST unit test framework for Common Lisp. Unlike most of the other projects in this section, this one is live! A small usage sketch, written from memory, appears after this list.
    • There’s a paper about NST from the 2010 International Lisp Conference.
    • The slides from the associated talk are more focused than the paper, concentrating on the choice of abstractions which a Lisp unit test framework should provide.
  2. As part of NST I wrote a system called defdoc that provides support for more structured Lisp documentation – essentially, a replacement for docstrings which generates output in various formats. Currently LaTeX and plain-text output are supported (the latter of which automatically generates the traditional docstring).

    I don’t like defdoc – it’s just too much; I don’t find it easy or satisfying to use. Your mileage may vary.

    Right now defdoc is shipped with NST (I’ll eventually pull it out). It’s not very well documented yet, but there are slides from an ILC 2010 lightning talk about it.

  3. For my thesis I wrote a LaTeX package which accumulates the page numbers on which each bibliographic entry is cited, and provides an interface command for including each label’s page list in the document. N.B.: there is no BibTeX interface yet; the code is a TeX/LaTeX package only.
  4. Some time ago I wrote a number of Elisp packages. Warning: this is probably all very bit-rotted.
    • Package extraregional.el separates the notions of point, mark and region from the notion of selection. The idea is that you should be able to select text, usually in a different window, without affecting the selected window, the current buffer, or the values of point in the various windows and buffers. Version 0.2 is a rather smaller package than version 0.1.1: I found that the extras were not really all that useful, and were actually pretty fragile. The older version is still available, although it does not work under XEmacs version 19.3.
    • Package rmail-labelsorter.el makes labels a little easier to handle in Rmail. The code is usable as it is, but it is not documented – this situation probably won’t change, as I no longer use Rmail. Caveat emptor.
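
Returning to item 1 above: a small NST usage sketch, written from memory rather than against the manual, so the exact macro and criterion names are assumptions to be checked there; the general shape (a test group containing tests, each pairing a criterion with a form under test) is the part to take away.

    ;; Assumed NST shape; verify the names against the NST manual before use.
    (nst:def-test-group arithmetic-tests ()
      (nst:def-test addition (:eql 4)
        (+ 2 2))
      (nst:def-test consing (:equal '(a b c))
        (cons 'a '(b c))))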

Back to the main page.