Friday, July 29, 2005

Hmmm.

The presentation is over and done with. It didn't go too badly, though I did run under-time: I finished my presentation and asked (begged) for questions halfway through my time slot. I'm not sure how much of it the audience caught, or what sort of comments the assessing academics will give me, but I'm reasonably happy with my performance.

Having seen all of the presentations, I find myself wondering at how many of the projects involve writing compilers and optimisers. One is attempting to learn tests to avoid computing expensive functions unless they're required (with collision detection as the experimental framework). Another is doing something with symbolic optimisation of robot dynamics (which looks a lot like static single assignment). I'm compiling a DSL to Haskell. Someone else is experimenting with interval arithmetic (which I want to implement in Haskell) for scientific computations.

Thursday, July 28, 2005

Presentations...

I have to give my mid-term seminar tomorrow afternoon and I'm getting rather nervous about it.

My supervisor has told me that my slides aren't accessible enough (i.e. I require too much prior knowledge), but I can't see any way to simplify things or remove content without missing important things. I'm not happy, but I don't have the time to fine-tune it any more.

If you're at the ANU (I'm not sure if the public is allowed) and free tomorrow at 1:00 (my presentation is at 1:26), come along to the Department of Computer Science seminar room (N101) and watch me embarrass myself.

Monday, July 25, 2005

Stupidity or How Dumb Can I Get

I've just been trying to install hs-plugins on my iBook with GHC 6.4. Sorting out the dependencies, getting stuff to build and getting the libraries installed has been ridiculously complex.

Now that I've got it built and installed (part of which required using hugs because runghc refused to run the setup programme), it doesn't appear to have registered the package, which means I need to manually provide the details to GHC every time I compile some code using System.Plugins (rather than just tell the compiler to use the package: -package plugins).

I need to figure out what is going wrong with my environment and fix it. Perhaps by tearing some capabilities out...

Update: I've got that problem fixed (make install doesn't appear to update the package database). Now all I need to do is work out a way to be able to specify the data-types in the plug-ins.

Dependencies...

I'm currently experimenting with hs-plugins, a library which supports dynamically loading plugins in Haskell. Or, rather, I would be if I could manage to sort out the various hassles I'm encountering in installing it.

The only problem is that hs-plugins depends on Language.Haskell.Hsx. This would be fine, if it came in the GHC distribution. Alas it doesn't, so I've had to find and install it myself. This isn't too much trouble - I have, after all, been using UNIX-like systems for at least 7 years now. The only bump in this stretch of the road is that Language.Haskell.Hsx depends on Happy.

Haskell is getting to the point where it could really do with a CPAN-style package archive, if only because introducing it now will be so much easier than later when it is actually necessary. Cabal already provides these sorts of features (dependency chasing, etc.).

Generality for its own sake

System.Console.GetOpt is a Haskell port of the GNU getopt library (which is almost impossible to find, unless they mean getopt(3) in glibc). Despite being written in Haskell, it lets the awfulness of getopt shine through.
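
To see what that generality looks like in practice, here is a minimal, hedged sketch of using System.Console.GetOpt; the Flag type and the options themselves are made up for illustration.

import System.Console.GetOpt
import System.Environment (getArgs)

-- Hypothetical flags for a toy program.
data Flag = Verbose | Output String
  deriving Show

options :: [OptDescr Flag]
options =
  [ Option ['v'] ["verbose"] (NoArg Verbose)        "chatty output"
  , Option ['o'] ["output"]  (ReqArg Output "FILE") "write output to FILE"
  ]

main :: IO ()
main = do
  args <- getArgs
  case getOpt Permute options args of
    (flags, rest, [])   -> print (flags, rest)
    (_,     _,    errs) -> ioError (userError (concat errs ++ usageInfo "usage:" options))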

Thursday, July 21, 2005

More on Literate Haskell

I've been having fun extending my environment for literate Haskell programming in LaTeX. While I still haven't looked at the suggestion my previous post provoked (though I'll get round to it eventually), I have been taking the opportunity to learn some of the more programme-y things one can do in LaTeX.

The macro I use for the title of each module (called \module funnily enough) now checks to see if it is a sub-module (i.e. contains a '.' in its name) and makes it a \subsection if it does. For now, this will be enough, but later on there will be some configurability to this to make sure that we don't typeset modules as sub-sections of random other modules.

I'm also planning on making the pretty-printing code build a list of imported modules as it typesets the Haskell and then include the appropriate files (I've got an \import macro all ready and waiting) afterward.

I like playing with LaTeX.

Tuesday, July 19, 2005

Licencing

A fourth post for today, also on a non-logic related topic. I'll be releasing an initial, non-modal, version of my code soon (in the next couple of weeks) and need to think about a licence for it. I've narrowed it down to four alternatives. I'm leaning toward the Creative Commons licences at this point, but the Australian Public License could be good as well.

If anyone has suggestions or arguments for or against any licence (please don't bother suggesting the GPL and friends), please leave a comment.

Thoughts on Hypergraphs

I'll start this post with a disclaimer that I am a mathematically naive person and don't really know what I'm talking about.

After my post last night (or this morning), I thought a bit more about hypergraphs and started playing with a few ideas for a data structure based on incidence lists (lists of every edge incident on a particular vertex). While I've got something that works (after a fashion and in a naive and trivial sort of way), I've come to the realisation that I don't really know what a lot of the things I might want to do to such graphs actually mean.
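
For what it's worth, here is a minimal sketch of the sort of incidence-list representation I mean; the names and the choice of Data.Map are just one way of doing it.

import qualified Data.Map as Map
import Data.Map (Map)

-- A hyperedge is just the collection of vertices it connects.
type Edge v = [v]

-- Incidence lists: every edge incident on a particular vertex.
newtype Hypergraph v = Hypergraph (Map v [Edge v])

empty :: Hypergraph v
empty = Hypergraph Map.empty

-- Record an edge against each vertex it touches.
insertEdge :: Ord v => Edge v -> Hypergraph v -> Hypergraph v
insertEdge e (Hypergraph m) =
    Hypergraph (foldr (\v -> Map.insertWith (++) v [e]) m e)

-- All edges incident on a given vertex.
incident :: Ord v => v -> Hypergraph v -> [Edge v]
incident v (Hypergraph m) = Map.findWithDefault [] v m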

What, if anything, do reflexivity, transitivity and symmetry mean in the context of a hypergraph? I've been trying to figure this out using the composition relation from arrow logic as an example but I'm still not sure I'm getting anywhere.

Be that as it may, I've started working on a prototype of a hypergraph package for Haskell and if I can figure out how various things ought to work and how to generalise it to relations of arbitrary arity, I'll release it separately.

Literate Haskell

Last night I spent a while wrestling with LaTeX, vim, make and GHC as part of my source-code reorganisation. One of the things I've been wanting for quite a while is a vim syntax highlighting mode that understands the LaTeX literate Haskell syntax and displays both the Haskell code and the LaTeX code properly. Last night, I finally got it to work, though it has taken a bit of hackery in my Haskell files: I need to have a comment in every file so that the highlighter can detect the LaTeX.

This is a problem as the LaTeX detection heuristic appears to be:
  1. Look for a \documentclass macro (the beginning of a LaTeX document); or
  2. Look for a LaTeX command; or
  3. Assume that the "literate" part is not LaTeX
Needless to say, this is rather annoying as I'm writing a program with a number of modules (and thus a number of files) and want to produce a single document from them. To ameliorate this problem, I just make sure to include a LaTeX-style comment on the first line of every literate Haskell file and use a few macros to include imported modules in the LaTeX document.
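
For instance (with a made-up module name and sectioning), a file in this scheme starts something like the following; the LaTeX-style comment on the first line is the hack that lets the highlighter detect the LaTeX.

% Foo/Bar.lhs: this LaTeX-style comment on line one triggers the highlighter.
\section{Foo.Bar}

Some prose describing the module goes here.

\begin{code}
module Foo.Bar (frobnicate) where

-- A placeholder definition, just to have some code in the block.
frobnicate :: Int -> Int
frobnicate = (* 2)
\end{code}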

I'm also using the listings package to pretty-print the Haskell code and my next goal is to use its custom keyword handling capabilities to automatically include the modules that we import (if they exist in our source tree and are LaTeX-style literate code).

Renovation in Progress

I'm about one third of the way through the rewrite of the system so far, this time doing the job properly (checking that there aren't too many or too few of anything in particular, that the numbers all agree, etc). Even if they had no other worth, the clarity that the monadic approach lends to code (parsing and error handling at the moment) would more than justify the presence of monads in Haskell.
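
As a generic illustration (not the project's actual code) of the kind of clarity I mean: with Either, each check either fails with a message or passes its result along, and do-notation strings the checks together without any explicit plumbing. The names below are invented.

-- Hypothetical checks of the "not too many, not too few" variety.
checkArity :: String -> Int -> Either String Int
checkArity name n
    | n < 0     = Left (name ++ ": arity cannot be negative")
    | otherwise = Right n

checkPriority :: String -> Int -> Either String Int
checkPriority name p
    | p < 0     = Left (name ++ ": priority cannot be negative")
    | otherwise = Right p

-- The first failing check short-circuits the whole computation.
checkOperator :: String -> Int -> Int -> Either String (String, Int, Int)
checkOperator name arity priority = do
    a <- checkArity name arity
    p <- checkPriority name priority
    return (name, a, p)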

I expect to have the system into a fairly robust state in the next day or two after which I'll knock together a small test suite (De Morgan's Laws, a bunch of tautologies and the like) and then move on. I hope to have the features required to implement K finished well within two weeks.

If anyone knows of a way to define a hypergraph inductively, please drop a note in the comments. If I can't find such a representation then I'm going to have a hard time supporting ternary and higher relations.

Monday, July 18, 2005

The Natural Numbers

One of the statements in my calculus language is the operator declaration:
Neg operator "~" has arity 1, fixity 0, priority 1.
As negative numbers don't really make much sense for the arity or fixity of an operator, I've just spent a while knocking up a Haskell implementation of the natural numbers.

While I dislike programming in Java and other "OO" languages, they do make it a lot easier to extend types than is the case in Haskell. It would be much easier to do this sort of thing if Haskell supported instance declarations (the things that make most polymorphic functionality work in Haskell) on type synonyms. If it did, Natural.hs would be about 6 lines of code (plus a few lines of boiler plate) rather than nearly 60 lines.

My new type Natural (just a wrapper around Integer) is an instance of nine separate type classes:
  1. Eq (they can be compared for equality)
  2. Ord (values can be ordered with respect to each other)
  3. Show (values can be converted to strings)
  4. Read (values can be parsed out of strings)
  5. Bounded (the range of values is bounded), though Natural shouldn't have an upper bound
  6. Enum (values can be enumerated)
  7. Num (values are numbers)
  8. Real (values are real numbers)
  9. Integral (values are integral numbers)
It probably ought to have a few more like:
  • Random (values can be randomly generated); and
  • Ix (values can be used as array indices)
but it'll get the job done -- all I need from it is Eq and Ord.
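
A stripped-down sketch of the idea (not the actual Natural.hs) looks something like this; deriving covers the instances I actually need, and the remaining instances are what blow the real file out to nearly 60 lines.

-- A wrapper around Integer that refuses to hold negative values.
newtype Natural = Natural Integer
  deriving (Eq, Ord, Show)

-- Smart constructor: fail loudly on negative input.
toNatural :: Integer -> Natural
toNatural n
  | n < 0     = error ("toNatural: negative value " ++ show n)
  | otherwise = Natural n

fromNatural :: Natural -> Integer
fromNatural (Natural n) = n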

Tuesday, July 12, 2005

A Delay or Murphy Strikes

I've just had to take my iBook in to The Mac Shop to get its charger (or perhaps the power regulation circuitry) fixed as it no longer wants to charge.

Unfortunately, this means that I'm unable to do any work for the next few days and may not get as much as I'd like done before my return to Canberra.

Sunday, July 10, 2005

Cool Things That'll Help

Among the cool things that will help with implementing this project: the Haskell Hierarchical Libraries contain a Graph library which supports querying the existence of a path between specified nodes, which is just what I'll need to implement box rules.
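
Assuming the library in question is Data.Graph, the path query looks something like this (the graph itself is a made-up example):

import Data.Graph (buildG, path)

main :: IO ()
main = do
    -- A small directed graph on vertices 0..3.
    let g = buildG (0, 3) [(0, 1), (1, 2), (2, 3)]
    print (path g 0 3)   -- True: 0 -> 1 -> 2 -> 3
    print (path g 3 0)   -- False: no edges lead back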

Another cool thing I need to look at more is the definition of "a monad for stateful non-deterministic computations" in the All About Monads tutorial.

I need to learn more about monad transformers.
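
My current understanding of the "stateful non-deterministic computations" idea is StateT layered over the list monad, so that every branch of the computation carries its own copy of the state. Here is a small, hedged sketch; the coin-flipping example is made up.

import Control.Monad.State (StateT, get, put, runStateT)
import Control.Monad.Trans.Class (lift)

-- Flip a coin non-deterministically, counting heads in the state.
flipCoin :: StateT Int [] Bool
flipCoin = do
    heads   <- get
    outcome <- lift [True, False]   -- the non-deterministic choice
    put (if outcome then heads + 1 else heads)
    return outcome

main :: IO ()
main = print (runStateT (flipCoin >> flipCoin) 0)
    -- [(True,2),(False,1),(True,1),(False,0)]: each branch pairs the second
    -- outcome with its own count of heads.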

Parsing and Patterns

The logic compiler generates Haskell code to implement a calculus. This includes a data-type to represent formulae, a function to resolve those formulae into sub-formulae, functions to format them for output, miscellaneous utility functions and a parser to parse formulae in the language the calculus operates with. As the code stands currently, the logic language allows the user to define operators with arbitrary arity and fixity. One might, if one were of a mind, define a 4-fix # operator with an arity of 3. A formula with such an operator might look like:
α1 α2 α3 # α4
As the code stands, the generated parser is built using the buildExpressionParser facility of the Parsec parser library. Unfortunately, buildExpressionParser is only capable of handling expression languages with unary prefix, binary infix and unary postfix operators, meaning that we can't generate parsers for all calculi that can be defined with our language.
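
To make the constraint concrete, here is a hedged sketch (not the generated code) of the buildExpressionParser style for a toy propositional language with exactly those kinds of operators; whitespace handling is omitted for brevity.

import Text.ParserCombinators.Parsec
import Text.ParserCombinators.Parsec.Expr

-- An illustrative formula type; the real generated one will differ.
data Formula = Prop String
             | Neg Formula
             | And [Formula]
             | Or [Formula]
             | Impl Formula Formula
  deriving Show

formula :: Parser Formula
formula = buildExpressionParser table term
  where
    -- Rows run from highest to lowest precedence; only unary prefix and
    -- binary infix (and unary postfix) operators fit this scheme.
    table = [ [Prefix (char '~'    >> return Neg)]
            , [Infix  (char '&'    >> return (\a b -> And [a, b])) AssocLeft]
            , [Infix  (char '|'    >> return (\a b -> Or  [a, b])) AssocLeft]
            , [Infix  (string "->" >> return Impl) AssocRight]
            ]
    term = fmap Prop (many1 letter)
       <|> between (char '(') (char ')') formula

-- e.g. parse formula "" "~a->b|c"
--        == Right (Impl (Neg (Prop "a")) (Or [Prop "b",Prop "c"]))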

To fix this, I'm going to have to write a more general expression parser generator (hopefully using buildExpressionParser as a base) that can handle operators of arbitrary arity and fixity. Either that, or restrict the system to only unary and binary operators, which would make implementing some [conceivable] logics very difficult, if not impossible.

On the other hand, buildExpressionParser has greatly simplified the pattern handling code in the compiler. In our language (which is so simple as to not need a name), we define rules such as:
Resolve "a->b" to "~a" or "b".
The patterns (the bits in quote marks) are formulae using the operators of the calculus, with arbitrary alphabetic names for variables (which can stand for any sub-formula; we can make no distinction at this point between sub-formulae and propositions as I haven't got round to implementing that yet). Our compiler generates at run time a parser for the formulae of the calculus and uses it to parse the rule patterns and generate appropriate snippets of code which we use for pattern-matching in the Haskell code implementing the calculus. For example, the rule above would be translated to the Haskell code:
resolve (Impl a b) = Or [(Neg a), b]
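
For concreteness, here is a guess at how that generated snippet might sit in a complete function, reusing the toy Formula type from the parser sketch above; only the Impl case comes from the rule quoted, the rest is invented.

resolve :: Formula -> Formula
resolve (Impl a b)    = Or [Neg a, b]   -- Resolve "a->b" to "~a" or "b".
resolve (Neg (Neg a)) = a               -- an invented double-negation rule
resolve f             = f               -- formulae with no matching rule are left alone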

Once we've parsed the pattern and generated a function, we get the pattern matching at run-time for free from Haskell.

Saturday, July 09, 2005

Done! or It's A Start!

Last night (or, more correctly, this morning) I constructed a model: ~a, b for the formulæ: ~a, a∨b using only the tableau framework and an automatically generated implementation of the propositional calculus:
Tableau> prove [Neg (Prop "a"), Disj (Prop "a") (Prop "b")]
Model [b,~a]
Now all I have to do is fix the bugs, extend what's there to labelled tableau and test it all. :-)

Sunday, July 03, 2005

I can do it, I can do it...

The system is nearly to the stage where I can write the propositional logic in it. I've been held up by a bit of trouble writing the code to generate a formula parser from the definitions of the logical operators. Once that is written, I need to modify it slightly (to generate a parser, rather than generate code to generate a parser) to parse the rule patterns, and the logic compiler will be finished.

Then it'll need a Makefile (or something similar) to link a logic with the tableau engine (the simple, unlabelled version of it is finished), and we'll be finished with this initial stage of development.