Monthly Archives: May 2010

Running Resolver One on Mono for Windows

28 May 2010

Mono is an open source version of the .NET framework; it allows you to run .NET applications not just on Windows but on Linux and the Mac. I’ve spent quite some time over the last week getting our Python spreadsheet, Resolver One, to run on the Windows version, and thought it would be worth sharing some experiences.

Some background first: one of our long-term goals at Resolver Systems is to get our currently Windows-only products working on other platforms. Everything’s coded in .NET, so in an ideal world we’d just be able to run it all under Mono. However, there are two problems:

  1. Some of the third-party components we use are “.NET” in the sense that they offer us a .NET interface, but under the hood they call down to lower-level Windows functions using P/Invoke. Because they’re using Windows-specific stuff, they won’t run on non-Windows operating systems, even with Mono.
  2. As always with these things, while there is a formal specification for what .NET implementations like Microsoft’s .NET or Mono are meant to do (called the CLI), implementations differ due to bugs, things that are awaiting implementation, or ambiguities in the spec.

Obviously, we need to handle both of these kinds of problem to successfully port our software. We’re handling the first kind by moving over to newer, “pure” .NET components, for example by writing our own grid. But we didn’t want to finish all that work and only then discover all of the problems caused by the second kind of incompatibility. Now, Mono does support P/Invoke, so while the first kind of problem clearly prevents us from running on Mono right now on, say, a Mac, it does not prevent us from running on Mono on Windows. So we decided to do that simpler port in parallel with our development of the new components, so that we could find out any nasty issues of the second kind as early as possible.

First things first: it really was surprisingly easy! All kudos to the Mono developers; this is an example of an open source project that works really well. The problems below are low-level details, and most of them are unlikely to hit anyone apart from us. However, it’s worth enumerating them, at least for posterity’s sake — and perhaps they’ll be helpful for people Googling for solutions to similar obscure problems.

Problem 1: A change to the process binary

The first problem we hit was in our code to load up our DLLs. Resolver One comprises quite a few libraries, and we need to be careful to load the specific ones that it’s shipped with rather than others that might be on the user’s path. We do this by finding our binary’s location, using Path.GetDirectoryName(Process.GetCurrentProcess().MainModule.FileName), and then using Assembly.LoadFile(filename) to load the DLLs explicitly rather than the more normal clr.AddReference, which uses the path.

The problem is, when you run a .NET application under Mono, the current process’s binary is not (say) Resolver-One.exe, but instead mono.exe. So Resolver One was trying to find its libraries in the Mono install directory, which obviously didn’t work. In the short term, we were able to work around this by hard-wiring the DLL path. In the longer term, we’ll have to do something more clever…
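As a sketch of the sort of “something more clever” we have in mind (plain Python, with hypothetical names; this isn’t what’s in Resolver One today): derive the application directory from a module that ships with the product rather than from the process binary, since the module’s location is the same whichever runtime is hosting us.

```python
import os

def get_application_directory(anchor_module):
    # Under .NET, Process.GetCurrentProcess().MainModule.FileName points at
    # Resolver-One.exe; under Mono it points at mono.exe, which lives in the
    # Mono install directory. A module that ships with the application sits
    # in the application directory under both runtimes.
    return os.path.dirname(os.path.abspath(anchor_module.__file__))

def get_dll_path(dll_name, anchor_module):
    # Build an explicit path so that Assembly.LoadFile gets exactly the DLL
    # we shipped, not whatever happens to be on the user's path.
    return os.path.join(get_application_directory(anchor_module), dll_name)
```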

Problem 2: ImeModeBase

Once we’d fixed the first problem we got a new error: Could not find "System.Windows.Forms.Control.get_ImeModeBase". A bit of investigation made it clear that the current version of Mono doesn’t support this method. It looks like the method was introduced in the (relatively recent) .NET 2.0 SP2, and presumably it will be implemented in Mono at some point, but it’s not there right now.

The question was, could Resolver One run without this method? The answer seemed likely to be “yes”, as we were able to run on earlier versions of .NET 2.0. We took a look at our codebase and tried to find out what it was that was referencing the method. It turned out to be our “precompiled subclass” DLLs. These are something we introduced a while back to improve our startup performance; simplifying a bit, what happens is that when we package up Resolver One for distribution, we run a compile process on all of the classes in our code that are subclasses of .NET components. Doing this process once before we release the software means that it’s not done every time Resolver One starts up, with obvious performance benefits. The downside is that the compiled subclasses have explicit references to the members of their .NET superclasses, whether they use them or not. And because we run the compilation process under .NET 2.0 SP2, our compiled subclasses wind up with explicit references to stuff that doesn’t exist in earlier versions of .NET, or (as it turns out) in Mono.

The good news is that if you’re willing to take a performance hit, the subclass compilation can be made optional. This isn’t something we’ve put in the production version of Resolver One right now, as the production version is Windows-only (and so there’s little point in shipping a “start up more slowly for no useful reason” command-line option), but it was easy to add in. Using non-precompiled subclasses got us past this problem. Perhaps our future Linux/Mac-compatible version can use subclasses that are precompiled against Mono.

Problem 3: System.Reflection:MonoGenericMethod

This one was the easiest to find a workaround for, but the workaround took the longest time to apply. Once we’d got past the precompiled assembly problem, trying to start Resolver One gave us a dialog saying “** ERROR **: Can’t handle methods of type System.Reflection:MonoGenericMethod aborting…”. Mono then bombed out with a Windows “application has stopped working” dialog.

A bit of Googling followed, and we were delighted to discover that a bug that triggered this exact error message had been fixed just ten days previously on the Mono trunk and the 2.6 branch. There’s luck for you.

Unfortunately we also discovered that the Mono team don’t release nightly builds of their Windows installer, so the only ways to get this fix would be either to wait until the 2.6.5 release, or to build it ourselves. Being impatient by nature, we decided to do the latter, and this took quite a while. I’ll post separately about what we had to do, as it may be useful for others; there’s a lot of excellent documentation on building Mono for Windows, but some of it’s a bit out of date. Luckily, the people on the Mono email list are friendly and helpful, so we were able to get there in the end.

So, after a bit of work we had a working version of Mono built from the tip of the 2.6 branch.

Problem 4: Logfile locations

When we started up Resolver One with the new Mono, we got the error SystemError: Access to the path "C:\ResolverOne.log" is denied. This was interesting, because our default logfile location is determined using Path.GetTempPath(). I’m not sure where Mono picks up the value for that, but it was evidently falling back to the root of the C: drive, which a normal user can’t write to. Perhaps we were missing something in our environment variables? Either way, we decided to work around it by using Resolver One’s --logfile command-line option.
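A more defensive option (a hypothetical sketch, not what Resolver One actually does) would be to sanity-check the temp directory before using it for the default logfile location:

```python
import os
import tempfile

def default_logfile_path(filename='ResolverOne.log'):
    # If the runtime's idea of the temp directory is empty or just the root
    # of a drive (which a normal user can't usually write to on Windows),
    # fall back to the user's home directory instead.
    temp_dir = tempfile.gettempdir()
    if not temp_dir or os.path.splitdrive(temp_dir)[1] in ('', '\\', '/'):
        temp_dir = os.path.expanduser('~')
    return os.path.join(temp_dir, filename)
```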

Problem 5: Madness in our methods

When told to log to an appropriate directory, Resolver One started up, and things started looking pretty good! The splash screen appeared, the “starting” progress bar moved and then… it crashed. The log file had this in it:

Exception: Method DLRCachedCode:FormulaLanguage.parsetab$16 (IronPython.Runtime.CodeContext,IronPython.Runtime.FunctionCode) is too complex.
CLS Exception: System.InvalidProgramException: Method DLRCachedCode:FormulaLanguage.parsetab$16 (IronPython.Runtime.CodeContext,IronPython.Runtime.FunctionCode) is too complex.
  at IronPython.Compiler.OnDiskScriptCode.Run () [0x00000] in :0 

A bit of Googling led us to two pages that suggested that “too complex” means that the method in question was too long. The module FormulaLanguage.parsetab is, as you might expect, related to the code we use to parse the formula language — that is, the language in which you write formulae in cells. This language is specified in our code in BNF with associated handler code, and compiled down into Python by the excellent PLY. The parsetab module is the generated code, and as you might expect it has some pretty unreadable stuff in it; there’s one dictionary literal that sits on a single 81,000-character line.

The easy fix to work around this problem was to split the generated module up into multiple modules. There were three variables that were being initialised with oversized literal expressions: _lt_action_items, _lt_goto_items, and _lt_productions. We created a separate module for each, containing nothing but the initialisation code for its variable. Finally, we replaced the code that had been moved out into the new files with appropriate import statements: for example, from lt_action_items_file import _lt_action_items.

This fixed the problem, and allowed us to move on to the next one! It’s not obvious how we’ll fix this in the production release, though — the file is auto-generated, so we’ll either have to patch PLY or post-process its output. Something for us to think about.
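To make the post-processing idea concrete, here’s a sketch (hypothetical names again) of the kind of step we’d bolt onto the build: write each oversized table into a module of its own, and hand back the import line that should replace it in the generated parsetab.

```python
import os

def split_table_into_module(name, table, directory):
    # Write the oversized literal into a small module of its own...
    module_name = '%s_file' % name.lstrip('_')
    path = os.path.join(directory, module_name + '.py')
    with open(path, 'w') as f:
        f.write('%s = %r\n' % (name, table))
    # ...and return the import statement that replaces the literal.
    return 'from %s import %s' % (module_name, name)
```

Run over _lt_action_items, _lt_goto_items and _lt_productions, that gives you the three small modules plus the three import lines described above.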

Problem 6: Extra vars

The error we got after fixing the parsetab problem was a bit obscure:

Exception: Finalize
CLS Exception: System.Collections.Generic.KeyNotFoundException: Finalize
at IronPython.Runtime.PythonDictionary.GetItem (object) 

The actual location of the error took quite a long time to track down, due to the vagaries of the way we import modules and its effects on stack traces. We eventually wound up having to do a binary chop with log statements in our startup code until we managed to narrow it down to a single line!

It turned out that we have some code that needed to create wrapper functions around all of the functions in a different module. It did this by looping over the values in the dictionary returned by vars(other_module). However, it didn’t want to wrap functions that were internal to .NET, so it had a list of function names to exclude; specifically, MemberwiseClone and Equals. Clearly these were two function names that had been found by experiment to belong to IronPython modules when running under .NET. The error we were getting was ultimately being caused by IronPython under Mono having just one extra function visible on the module: Finalize. Adding that to the list of exclusions got us past this error, and on to:
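Schematically, the wrapping code looks something like this (a simplified sketch with hypothetical names, not our actual code):

```python
import types

# Names that belong to the .NET machinery rather than to our own code;
# Finalize is the extra one that only shows up under Mono.
DOTNET_INTERNALS = set(['MemberwiseClone', 'Equals', 'Finalize'])

def wrap_module_functions(module, make_wrapper):
    # Build wrappers for every plain function in the module, skipping
    # anything that the runtime (rather than we) put there.
    wrapped = {}
    for name, value in vars(module).items():
        if name in DOTNET_INTERNALS:
            continue
        if isinstance(value, types.FunctionType):
            wrapped[name] = make_wrapper(value)
    return wrapped
```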

Problem 7: Um… that’s it!

…on to nothing else! Once we’d fixed the Finalize problem Resolver One started up and ran reasonably happily under Mono on Windows. There were glitches, of course; our code editor component, in particular, didn’t like being resized. But the software worked well enough to test, which is all we need for now.

There’s obviously a lot to be done before we can get Resolver One running on Macs and Linux machines; the creation of our grid component is going well, but takes time, and we need to do something about the code editor. But the good news is that we’ve identified the incompatibilities between Mono and Microsoft .NET that will hit us beyond the obvious operating system issues, and there’s nothing we can’t work around, given a bit of ingenuity. It took a while, but at the end of the day, it was surprisingly easy :-)

An odd crontab problem

18 May 2010

This took a little while to work out, so it’s worth sharing here just in case anyone else has the same problems and is googling for solutions. We had a problem on one of our web servers at Resolver which manifested itself in some (but not all) cron jobs being run twice, which was causing all kinds of problems. Here’s how we tracked it down and solved it.

The main symptom of the problem was that something was going wrong with Apache logfile rotation. The files were apparently being rotated twice, so each week we’d get a properly rotated one and then a zero-length one created immediately after:

-rw-r--r-- 1 root root  861 2010-05-16 06:23 access.log.2.gz
-rw-r----- 1 root adm     0 2010-05-16 06:25 access.log.1
-rw-r----- 1 root adm  5590 2010-05-18 11:20 access.log

This was annoying, and it was making it unnecessarily difficult to measure some of our website metrics. It also made us worry that data was being lost; there were occasionally gaps in the logfiles, where it looked like a week’s worth of data had been lost while rotating.

Our first thought was that if something was going wrong with the logrotate script, it was odd that we weren’t receiving any email about it. Normally if a cron job writes output, it gets emailed to root. A bit of investigation revealed a problem with the mail setup (which I won’t go into now), and fixing it led to some interesting information. Once we’d fixed the email problem, we started getting messages like this at 17 minutes past every hour (when the hourly jobs were scheduled to run):

Subject: Cron  root	test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )

/bin/sh: root: command not found

It appeared that cron was prepending the name of the user account that the job should be run as onto the job’s command, and then trying to execute the result. Now, the hourly jobs are triggered by a line in /etc/crontab that looks like this:

17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly

So our first guess was that it was some kind of whitespace thing; the format is

mm hh dom mon dow username command

So perhaps there were spaces separating the day-of-week (dow) field from the username, when there should have been a single space or a tab? On first examination, this looked like it might be it: every other line in the crontab used a tab to separate the two fields, but the hourly cron job line used a number of spaces. We fixed that, and waited for the next 17 minutes past the hour.

But it didn’t work — we got the same error. By this time, it was getting quite late in the evening, so we left it to run overnight to see if we got any more useful information.

The next morning, we found that (as you’d expect) we’d got an error message at 17 minutes past each hour. However, more usefully, we got this pair of emails:

Time: 06:25
Subject: Cron  root	test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )

/bin/sh: root: command not found
error: Failed to remove old log /var/log/mysql.log.8.gz: No such file or directory
run-parts: /etc/cron.daily/logrotate exited with return code 1
mv: cannot stat `/var/log/': No such file or directory

Time: 06:25
Subject: Cron  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )

mv: cannot stat `.//dpkg.status.5.gz': No such file or directory
gzip: /var/log//syslog.0: No such file or directory
mv: cannot stat `/var/log//syslog.0.gz': No such file or directory

So, the problem seemed clear. Each line in the crontab was being run twice; once with the username being mistakenly taken as part of the command, and once without. This was what was causing the double log-rotation — and perhaps other problems besides.

It didn’t take long to work out what had happened. The format of the file /etc/crontab is unusual in having a username field between the timing information and the command to run; the separate per-user crontabs omit that field. You set up your own per-user crontab by running crontab filename; this installs the specified file as your personal crontab. However, you should never do that with /etc/crontab — cron reads it directly, so there’s no need to install it.

Clearly, someone had been unaware of that last point, and after modifying /etc/crontab, had run crontab /etc/crontab as root to install it. We checked this by running crontab -l as root, which outputs the installed crontab for the current user — as we suspected, it displayed an out-of-date version of the contents of /etc/crontab. Running crontab -r to remove it fixed the problem; jobs were now being run from /etc/crontab only, and things started working again.

Generating political news using NLTK

4 May 2010

It’s election week here in the UK; on Thursday, we’ll be going to the polls to choose our next government. At Resolver Systems, thanks to the energy and inventiveness of our PR guys over at Chameleon, we’ve been doing a bunch of things related to this, including some analysis for the New Statesman that required us to index vast quantities of tweets and newspaper articles.

Last week I was looking at the results of this indexing, and was reminded of the fun I had playing with NLTK back in February. NLTK is the Python Natural Language Toolkit; as you’d expect, it has a lot of clever stuff for parsing and interpreting text. More unexpectedly (at least for me), it has the ability to take some input text, analyse it, and then generate more text in the same style. Here’s something based on the Book of Genesis:

In the selfsame day entered Noah , and asses , flocks , and Maachah . And Joseph
said unto him , Abrah and he asses , and told all these things are against me . And 
Jacob told Rachel that he hearkened not unto you . And Sarah said , I had seen the 
face of the air ; for he hath broken my covenant between God and every thing that 
creepeth upon the man : And Eber lived after he begat Salah four hundred and thirty 
years , and took of every sort shalt thou be come thither . 

It was the work of a moment to knock together some code that would read in all of the newspaper articles we’d tagged as being about a particular subject, run them through a Beautiful Soup-based parser to pull out the article text, feed that into NLTK, and then dump the results into a WordPress blog (after a little manual polishing for readability).

The result? REABot, the Resolver Electoral Analysis Robot. Here’s a sample of what I think is its finest post, which was based on articles about Nick Clegg:

They’re interested in local government, free TV licences, pension credits and child 
trust fund, Carrousel Capital, run by local Liberal Democrats. TV Exclusive Trouser 
Clegg Nick Clegg, but clashed on how the vexing issue of honesty, principles and 
policies of electric shock. It is easy to do. "Louis Vuitton advertising used to pay back 
your debts", he declared that he has delivered his strongest warning yet on the party 
first place and still obsessed with outdated class structures. Inspired by Barack 
Obama’s repertoire, they advise you to send a message to voters at home. "You 
haven’t want to try to counter the threat of it yet," he says. 

So, what does the code look like? It’s actually trivially simple. Let’s say that we’ve downloaded all of the contents of the newspaper articles (I shan’t waste your time with HTML-munging code here) and put them into objects with content fields. Here’s what REABot does:

import nltk

tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+|[^\w\s]+')

# `articles` is the list of article objects described above
content_text = ' '.join(article.content for article in articles)
tokenized_content = tokenizer.tokenize(content_text)
content_model = nltk.NgramModel(3, tokenized_content)

words_to_generate = 500  # or however long you want the generated article
starting_words = content_model.generate(100)[-2:]
content = content_model.generate(words_to_generate, starting_words)
print ' '.join(content)

It’s a bit of a hack — I’m sure an NLTK expert could write something much more elegant — but it works :-) What this does is build a single string formed of the text of all of our relevant articles, then run it through a tokeniser, which splits it up into words and punctuation symbols, so that (for example) the string "I spent some time this afternoon playing with NLTK, the Python Natural Language Toolkit; the book is highly recommended." would be turned into the list ['I', 'spent', 'some', 'time', 'this', 'afternoon', 'playing', 'with', 'NLTK', ',', 'the', 'Python', 'Natural', 'Language', 'Toolkit', ';', 'the', 'book', 'is', 'highly', 'recommended', '.'].
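If you want to see what the tokeniser is doing without installing NLTK, the same regular expression works with plain re (each token is either a run of word characters or a run of punctuation):

```python
import re

# The same pattern we pass to RegexpTokenizer: runs of word characters,
# or runs of characters that are neither word characters nor whitespace.
TOKEN_PATTERN = re.compile(r'\w+|[^\w\s]+')

def tokenize(text):
    return TOKEN_PATTERN.findall(text)
```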

This is then fed into an NgramModel. This is nothing to do with Scientology; Ngram is a word created by extension from “bigram” and “trigram” to refer to collections of n tokens. What we’re doing with the expression nltk.NgramModel(3, tokenized_content) is creating an NLTK object that, in effect, knows about every three-token sequence (trigram) that occurs in the tokenised text (['I', 'spent', 'some'], ['spent', 'some', 'time'], ['some', 'time', 'this'], and so on), and knows how frequently each one occurs.

Once we’ve got the set of all possible trigrams and their respective frequencies, it’s pretty easy to see how we can generate some text given two starting words and a simple Markov-chain algorithm:

  • Let’s say that we start off with ['The', 'tabloid'].
  • Our analysis might tell us that there are three trigrams starting with those two tokens, ['The', 'tabloid', 'headlines'] 50% of the time, ['The', 'tabloid', 'newspapers'] 10% of the time, and ['The', 'tabloid', 'titles'] 40% of the time.
  • We generate a random number, and if it’s less than 0.5, we emit “headlines”, if it’s between 0.5 and 0.6, we emit “newspapers”, and if it’s between 0.6 and 1.0, we emit “titles”. Let’s say it was 0.7, so we now have ['The', 'tabloid', 'titles'].
  • The next step is to look at the trigrams starting ['tabloid', 'titles']; we work out the probabilities, roll the dice again, and get (say) ['tabloid', 'titles', 'have'].
  • Repeat a bunch of times, and we can generate arbitrarily long text.
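The steps above are easy to sketch in plain Python (a toy version; NLTK’s model is more general, but the idea is the same):

```python
import random
from collections import defaultdict

def build_trigram_model(tokens):
    # Map each consecutive pair of tokens to the list of tokens that follow
    # it in the corpus; repeats in the list preserve relative frequency, so
    # random.choice picks each follower with the right probability.
    model = defaultdict(list)
    for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
        model[(a, b)].append(c)
    return model

def generate(model, starting_words, num_words):
    a, b = starting_words
    output = [a, b]
    for _ in range(num_words):
        followers = model.get((a, b))
        if not followers:
            break  # this pair only ever occurs at the very end of the corpus
        c = random.choice(followers)
        output.append(c)
        a, b = b, c
    return output
```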

This is pretty much what the NgramModel’s generate method does. Of course, the question is, how do we get two words to start with? By default, the method will always use the first two tokens in the input text, which means that every article we generate based on the same corpus starts with the same words. (Those who know the Bible will now know why the bit from Genesis started with the words “In the”.)

I worked around this by telling it to first generate a 100-token stream of text and pick out the last two:

starting_words = content_model.generate(100)[-2:]

…and then to generate the real output using those two as the starting point:

content = content_model.generate(words_to_generate, starting_words)

It’s kind of (but not really ;-) like seeding your random number generator.

And that’s it! Once the text has been generated, I just copy and paste it into a WordPress blog, do a bit of prettification (for example, remove the spaces from before punctuation and — perhaps this is cheating a little — balance brackets and quotation marks), add appropriate tags, and hit publish. It takes about 5 minutes to generate an article, and to be honest I think the end result is better than a lot of the political blogs out there…

[An aside to UK readers: does anyone know if the business news in The Day Today was generated by something like this?]