Programming languages tend to evolve in response to user needs, hardware advances, and research developments. Language evolution artefacts may include new compilers and interpreters or new language standards. Evolving programming languages is, however, challenging at various levels. Firstly, the impact on developers can be negative. For example, if two language versions are incompatible (e.g., Python 2 and 3), developers must choose either to co-evolve their codebase (which may be costly) or to reject the new language version (which may have support implications). Secondly, evaluating a proposed language change is difficult; language designers often lack the infrastructure to assess the change. This may lead to older features remaining in future language versions to maintain backward compatibility, increasing the language’s complexity (e.g., FORTRAN 77 to Fortran 90). Thirdly, new language features may interact badly with existing features, leading to unforeseen bugs and ambiguities (e.g., the addition of Java generics). This workshop brings together researchers and developers interested in programming language evolution to share new ideas and insights, to discuss challenges and solutions, and to advance programming language design.
Topics include (but are not limited to):
We are accepting two kinds of submission:
Please submit your abstracts/papers via EasyChair. Papers will be subject to full peer review, and talk abstracts will be subject to light peer-review/selection. Accepted submissions will be published in the ACM DL and must adhere to ACM SIGPLAN’s republication policy.
If you have any questions relating to the suitability of a submission please contact the program chairs at email@example.com.
We are proud to be supported by the Software Sustainability Institute.
This isn’t the first time I’ve heard of football fans complaining about the lack of accessibility in FIFA. For those fans watching in black and white, I guess the guy kicking the ball is one of the players.
You might think that this is a fixable problem, and it certainly would be if UEFA asked a Computer Scientist. Apparently there are 76 teams in the UEFA Champions League (I cheated and asked Wikipedia). Each team wears a “home” kit and an “away” kit (I guessed that one). So, how many ways are there of choosing 2 teams from 76, counting “home” and “away” matches as distinct choices? A bit of A-level maths tells me that this is P(76, 2) = 76 × 75, or 5700.
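If you don’t trust my A-level maths, a throwaway couple of lines of Python (purely illustrative, nothing to do with UEFA’s actual fixture lists) confirms the count:

from itertools import permutations

# Ordered pairs of distinct teams: (home, away) and (away, home) count separately.
print(len(list(permutations(range(76), 2))))  # 5700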
5700 is a lot of unique games that might be chosen. Of course, I have simplified this: I haven’t looked into things like group matches or different rounds in the tournament, so the possible number of different games in the season will be a little different. However, the point is that if you had to design kits for all those teams so that, no matter which team played which, a colour-blind fan could tell them apart, you’d be designing for a very long time indeed.
Is all that design really necessary, though? Or is there maybe some other way to think about the problem that might make it a little easier? In Computer Science this problem is very similar to something we call hashing. When we hash some data we want to store it so that related data is kept together and cannot easily be confused with other data. A simple example would be voting slips. If we have three political parties, Left Wing, Right Wing, and Raving, we want to put all the ballot papers into three buckets, one for each party (we can ignore the spoiled papers). We don’t care to differentiate between individual ballot papers, since every vote is equal; we just need to put each one into the right bucket so we can count them. The only important criterion for organising our data is that the Left Wing votes shouldn’t get muddled up with the Right Wing or the Raving votes.
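Here is the ballot-box idea as a tiny Python sketch (the ballots are invented for illustration); the “hash” is nothing cleverer than the party name on each slip:

# Toss each ballot into the bucket for its party.
ballots = ['Left Wing', 'Raving', 'Right Wing', 'Left Wing', 'Left Wing']
buckets = {}
for vote in ballots:
    buckets.setdefault(vote, []).append(vote)

# Counting is then a per-bucket job; no party's votes get muddled up with another's.
for party, votes in sorted(buckets.items()):
    print(party, len(votes))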
Placing the votes into buckets is simple and makes intuitive sense. Is there a neat way to organise the football shirts like this? Can we find a simple hashing function that will work for team kits? Here is a really simple suggestion: every team has a home kit with some configuration of block colours and emblems. Every team also has an away kit which is striped. It doesn’t matter what the colours are or what writing or graphics are on each shirt. It doesn’t matter whether the stripes are vertical or horizontal, or what thickness or colour they are – the two teams in any match can easily be told apart by fans, whether colour-blind or not.
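Written down as code, that “hash function” is almost embarrassingly small. This is just my own sketch of the rule above, not anything the competition actually mandates:

def kit_pattern(venue):
    # The venue alone decides the pattern: home kits are block colours,
    # away kits are striped, so the two sides in a fixture never look alike.
    return 'block colours' if venue == 'home' else 'stripes'

print(kit_pattern('home'), 'vs', kit_pattern('away'))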
How do you unit test a piece of code that generates a PDF file? There are a number of interesting answers to this question around the web, including some neat ideas such as:
This seems like overkill to me! A simple way forward is just to use the diff tool that comes as standard on UNIX platforms.
Usually diff is used with plain text files, but it can work with binary files as well. Here’s a very simple example:
$ diff report.pdf expected.pdf
Binary files report.pdf and expected.pdf differ
$
Hmm! Neat, but not terribly useful. What else can we do? A quick browse through the diff man-page shows that the -a command-line switch tells diff to treat a binary file as if it were text. This sounds like a step forward.
$ diff -a report.pdf expected.pdf
162,163c162,163
< /CreationDate (D:20140812210344+01'00')
< /ModDate (D:20140812210344+01'00')
---
> /CreationDate (D:20140812012140+01'00')
> /ModDate (D:20140812012140+01'00')
187c187
< /ID [<3428D71EEBFEECF7176993643DEA57D0> <3428D71EEBFEECF7176993643DEA57D0>]
---
> /ID [<3FD57F91F32489646331D1DBBF510CDA> <3FD57F91F32489646331D1DBBF510CDA>]
$
As you’d expect with PDF, there is some metadata inside the files that we would expect to differ between PDF files, even if the files have the same content. What we need to do next is to tell diff to ignore this metadata, and we can do that with the -I switch. We might also want to ignore whitespace, which we can do with -w:
$ diff -w -a -I .*Date.* -I \/ID.* report.pdf expected.pdf
$
Just what we wanted! As with all UNIX tools, the command succeeded (the files were ‘identical’) so we didn’t get any output. To put that in a unit-testing context, we can write it up as a pytest unit test:
import os
import subprocess


def test_pdf():
    # Generate PDF here ...
    assert os.path.exists('expected.pdf')
    assert os.path.exists('report.pdf')
    # Diff the resulting PDF file with a ground truth.
    diff_command = ['diff', '-w', '-a', '-I', '.*Date.*', '-I', '\/ID.*',
                    'report.pdf', 'expected.pdf']
    child = subprocess.Popen(diff_command, stdout=subprocess.PIPE,
                             cwd=os.path.dirname(__file__))
    out, err = child.communicate()
    assert 0 == child.returncode
writeLaTeX is my new favourite thing. If you haven’t heard of it, writeLaTeX is an online service for writing collaborative LaTeX documents. Think of it like Google docs for scientists and people who like to typeset very beautiful documents.
Why does this matter? Well, it solves a whole bunch of simple problems for me. I can move between different machines at home and work and keep the same environment. This is more difficult than just auto-syncing my own documents via Dropbox or similar. It also means I need the same LaTeX environment, whether I am working on a locked-down Windows machine at work or a completely open laptop running FOSS software at home. Already that’s something that removes many of my document writing headaches.
writeLaTeX provides more than just a synchronisation service: I can collaborate with colleagues in real time, so I never need to worry about using the “latest” version of any document — even if my colleagues don’t use versioning software like Git or Mercurial. Beyond that, writeLaTeX automatically compiles my projects in the background, so I can always see a nearly up-to-date version of the resulting PDF. My favourite way of editing on writeLaTeX is to have a full “editor” window in my main monitor (straight ahead of me) and a full “pdf render” window on another monitor (off to the side). It’s super-convenient and allows me to concentrate without feeling interrupted by compiler warnings or errors when I’m halfway through a complicated edit.
I could go on and on, but you get the point – writeLaTeX is a very, very neat way to typeset beautiful documents.
Of course, writeLaTeX is not the only start-up in this space. Authorea and ShareLaTeX make similar offerings, and both have different and interesting strengths. It happens that when I needed a service like this, writeLaTeX was the app that had all the built-in style and class files I needed, and the right combination of features, for me. In fact, the pre-installed TeX packages are exactly what you get from installing all of TeXLive on Ubuntu — so writeLaTeX essentially mirrors my own Linux set-up, minus my dodgy Makefiles. That said, I’m very excited that a few competitors are working on these problems. This tells me that online, collaborative LaTeX services have a serious long-term future, and that should benefit users of all these different services.
Once you start using a new shiny toy, there’s always the sense that this is *so* awesome, I wish it could do X… So this is my current wish list for writeLaTeX. This is no criticism of the awesome service, but if you happen to have a few million dollars lying around, please pay the company to implement the below…
It’s really convenient to have all my writeLaTeX projects together on a writeLaTeX project page, but it also breaks the structure of my projects and documents and imposes a second, different structure.
This is what I call the expression problem of scientific projects (Computer Scientists will get the joke) – you can either organise your documents and code around each project you take part in (Option 1), or you can organise your documents around their type (Option 2). Either choice is good, and just a matter of personal taste, but it makes a big difference to your personal workflow and how quickly you can find information and track the progress of your projects. Like many things, consistency is the key principle here.
Option 1 looks like this:
science_project1/
....papers/
........paper1/
............main.tex
............figures/
................chart1.png
................petri_dish.png
............refs.bib
........paper2/
...
....talks/
........talk1/
............main.tex
............figures/
................chart1.png
................petri_dish.png
............refs.bib
........talk2/
...
....software/
........some_code.py
...
...
science_project2/
...
Option 2 looks like this:
papers/
....paper_about_project1/
........main.tex
........figures/
............chart1.png
............petri_dish.png
........refs.bib
....paper_about_project_2/
...
talks/
....talk_about_project1/
........main.tex
........figures/
............chart1.png
............petri_dish.png
........refs.bib
....talk_about_project2/
...
software_about_project1/
....some_code.py
...
...
But what happens when you start to use services like writeLaTeX? Your whole workflow gets a lot more complex. You might have all of your projects synced to a service like GitHub, or not, but now your papers and talks are on writeLaTeX and can’t be “checked out”. Your software might well be on GitHub or similar. You might well be sending your figures and data off to FigShare. It is suddenly more difficult to keep everything together, and it isn’t immediately clear how much progress you have made with each part of the project.
In my view the answer to this problem has to come in two parts. Firstly, a way to expose writeLaTeX projects as git repositories so that they can be incorporated as git submodules inside an existing GitHub project (other SCMs and hosting companies are available). This means that it doesn’t matter whether you choose Option 1 or Option 2 above to structure your project files. writeLaTeX could then issue pull requests to “send” your updates to GitHub whenever you change your documents. Secondly, existing CI services such as Travis can be configured to send documents off to FigShare once a tagged release of a paper has been created. This costs a little time to set up, but it is an automated workflow that can be reused over different projects, so that small set-up cost is nicely amortized.
A linter is a tool that checks code for errors before it is compiled. There are a number of these for LaTeX (the one I currently use is chkTeX), and it would be useful to have them run automatically during the background build-compile-render cycle that writeLaTeX already runs.
If you are not using writeLaTeX, one option is to use a continuous integration tool to run the linter for you, together with your normal build cycle. For example:
$ chktex -W
ChkTeX v1.6.4 - Copyright 1995-96 Jens T. Berger Thielemann.
The command "chktex -W" exited with 0.
$ chktex -q -n 6 *.tex chapters.*.tex 2>/dev/null | tee lint.out
The command "tee lint.out" exited with 0.
$ test ! -s lint.out
The command "test ! -s lint.out" exited with 0.
There are a few jobs that need to be done for any paper, but they are time-consuming busy work that ideally would be minimised. One of these is producing and curating long lists of references to prior art, usually in BibTeX. Another is pulling in tables and figures (usually to do with prior art or experimental apparatus) that can be used in different papers. An obvious example is a BibTeX file containing the author’s own papers. You might have a file called something like mypapers.bib which you certainly need in your own CV, but which you also need in pretty much all your papers and several talks. What happens when you update this file for your CV project? mypapers.bib isn’t shared between different projects, so you also need to update it in all your other projects. That might not be so bad when you are just adding a newly published paper to your list, but if you find a typo in your old papers it’s a real pain. The same is true for curated lists of papers in the area you work in and all sorts of other files.
It would be nice to find some clever way to resolve this, but what if you also have all your files nicely structured and curated using either Option 1 or Option 2 above? Maybe a neat thing to do would be to have some “dummy” projects which only contain common files, such as BibTeX files (and don’t get compiled with pdflatex or similar), then use something like Git submodules to “import” the dummy projects into “real” ones that do compile documents.
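I haven’t tried this at any scale, but the Git mechanics are simple enough. Here is a rough sketch, scripted through Python’s subprocess module; the repository URL and paths are invented purely for illustration:

import subprocess

# Hypothetical "dummy" project holding only shared BibTeX files, imported
# into a real paper repository as a Git submodule.
subprocess.check_call(['git', 'submodule', 'add',
                       'https://github.com/example/shared-bib.git', 'common'])
subprocess.check_call(['git', 'commit', '-m', 'Import shared BibTeX files'])

# The paper then points BibTeX at common/mypapers.bib, and a later
# "git submodule update --remote" pulls in fixes made from, say, the CV project.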
If there’s one huge and pointless sink of valuable time, it’s curating long lists of BibTeX references. In recent years a number of services have started to make this easier — Bibsonomy and Google Scholar being two very handy examples — but there is still much that has to be done manually. A neat way to search for a citation and pull it into a BibTeX file from within writeLaTeX would be really, really cool.
Open document review has started to become common, at least for books. A great example of this is Real World OCaml where you can log in with a GitHub account and comment on any paragraph of the book. Comments then become issue tickets in a GitHub repository and the authors can resolve each comment (I notice Real World OCaml has logged an impressive 2457 closed tickets). This is a really neat solution to document review and would be a huge bonus for anyone writing in LaTeX.
The idea of keeping a diary fills me with dread. It conjures up distant memories of receiving leather-bound paper diaries from well-meaning relatives at Christmas and the crushing obligation to write something, anything, every single day, when actually nothing very interesting was going on. The obligation to do something every day is a sure-fire killer of motivation for me. So, as you can imagine, I have never been keen on keeping a regular diary of research notes and results. Not that I haven’t tried. I have a paper notebook that I use to keep track of discussions and obligations from meetings, and at various times I’ve tried to use that as a discipline for writing down ideas and notes from my research work. Somehow, though, it never stuck.
That is, it never stuck until I read this blog post by Mikhail Klassen on the writeLaTeX blog. Mikhail points out that having a digital diary has some compelling advantages. It allows you to keep track of intermediate results and ideas, links to software repositories and BibTeX citations. This means that next time you need to quickly put together a presentation or poster, or you are starting to write a paper, you can pull figures, citations and text directly from your diary. This is especially useful if a lot of your writing has equations and citations that are time-consuming to keep track of. So, keeping a diary means that a lot of the time-consuming tasks involved at the start of writing a paper or presentation just disappear – those costs are amortized with the costs of keeping the diary. This has enormous appeal to me. The time I get for research is not large, and anything I can do to make my work more efficient makes the process a lot less stressful.
So, having looked carefully at Mikhail’s template I was really impressed, but I wanted to tweak a few things. In particular I changed the layout of the whole diary and based my version on the excellent tufte-latex class which is inspired by the work of Edward Tufte. I also added a couple of new sections at the top of the diary – Projects and Collaborations and Someday / Maybe. Projects and Collaborations is there to help keep track of ongoing commitments, and as a reminder that those projects need to be regularly progressed or abandoned. Someday / Maybe is there to keep track of vague ideas that sound good but you aren’t yet committed to acting on. I find it useful to have a list of these, as they can easily get forgotten, and many good ideas which aren’t quite ready for action can be used as student projects or re-purposed. Other ideas can sit around for a long time, but suddenly become useful when a new collaboration comes about, or you find some scientific result or new technology which makes a previously very difficult idea tractable.
Lastly, like Mikhail, I keep my template and my own notes on writeLaTeX, which is a cloud platform for writing LaTeX documents. writeLaTeX (and its cousins ShareLaTeX and Authorea) have some great features, like collaborative real-time document editing, auto-compilation so that you can see a current version of the PDF of your document as you type, a wealth of templates, and a friendly near-WYSIWYG editor. writeLaTeX also has a limited sync-with-Dropbox feature for offline work. All of this makes diary entries really simple to write. I just have a writeLaTeX window open in my browser all day, and I can write updates and upload new documents as I go along.
Oh, and because I have a pathological aversion to keeping a diary, I call mine “Lab Notes”. Much friendlier!
Scala talk given at the inaugural Thames Valley Functional Programming Meet-Up.
I’ve recently been working on a new Python project, which started off as a bit of an experiment at the recent PyPy London Sprint. Working on a brand new repository is always nice, a blank slate and a chance to write some really elegant code, without all the crud of a legacy project.
That led to an interesting situation. When I run the unit tests, I want to use the CPython interpreter. This means I can use all the standard library modules that I know well, and can test the basic algorithms I’m writing. When I want to “translate” my code into a binary executable, I use pypy and some of its rlib replacements for the Python standard library modules. When I get a runtime error in the translation, I need to know whether it is related to my use of the rlib libraries or whether my code is just plain wrong, and using CPython helps me to do that.
The problem is that I have to keep switching between different standard libraries and interpreters. Somewhere in my code there is a switch for this:
DEBUG = True
In testing, that switch should be True, and in production it should be False, but changing that line manually is a real pain, so I need some scripts to catch when I’ve set the DEBUG flag to the wrong mode.
Here’s my (slightly simplified) first go at automating a test script:
import subprocess

debug_file = ...
framework = 'pytest.py'

try:
    retcode = subprocess.check_output(['grep', 'DEBUG = False', debug_file])
    print 'Please turn ON the DEBUG switch in', debug_file, 'before testing.'
except subprocess.CalledProcessError:
    subprocess.call(('python', framework))
What does this do? First the script calls the UNIX utility grep to find out whether the DEBUG flag has been left set to False:
retcode = subprocess.check_output(['grep', 'DEBUG = False', debug_file])
If it is, the script prints a warning message:
print 'Please turn ON the DEBUG switch in', debug_file, 'before testing.'
which tells me I have to edit the code, and if not, the script runs the tests:
subprocess.call(('python', framework))
Nice, but I still have to edit the file if the flag is wrong.
Nicer would be for the script to change the flag for me. Fortunately, this is easily done with the Python fileinput module. Here’s the second version of the full test script (slightly simplified):
import fileinput
import subprocess
import sys

debug_file = ...
debug_on = 'DEBUG = True'
debug_off = 'DEBUG = False'


def replace_all(filename, search_exp, replace_exp):
    """Replace all occurrences of search_exp with replace_exp in filename.

    Code by Jason on:
    http://stackoverflow.com/questions/39086/search-and-replace-a-line-in-a-file-in-python
    """
    for line in fileinput.input(filename, inplace=1, backup='.bak'):
        if search_exp in line:
            line = line.replace(search_exp, replace_exp)
        sys.stdout.write(line)


def main():
    """Check and correct debug switch. Run testing framework."""
    framework = 'pytest.py'
    opts = ''
    try:
        retcode = subprocess.check_output(['grep', debug_off, debug_file])
        print 'Turning ON the DEBUG switch in', debug_file, 'before testing...'
        replace_all(debug_file, debug_off, debug_on)
    except subprocess.CalledProcessError:
        pass
    finally:
        subprocess.call(('python', framework, opts))
    return


if __name__ == '__main__':
    main()
So, now the flag is tested, set correctly if need be, and the tests are run. But I still have to run the test script! What a waste of typing. So, the next step is simply to call this script from a git pre-commit hook.
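Git will happily run any executable as a hook, including a Python script, so a minimal pre-commit hook only needs to run the wrapper above and abort the commit if it fails. This is just a sketch: check_debug.py is my invented name for the script above, and it assumes that script exits non-zero when the tests fail.

#!/usr/bin/env python
# Save as .git/hooks/pre-commit and make it executable (chmod +x).
import subprocess
import sys

# A non-zero exit status from a pre-commit hook aborts the commit.
sys.exit(subprocess.call(['python', 'check_debug.py']))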
At the Government Open Data Hack Day event organised by James Cattell and Gavin Broughton, Andy Pryke, Christophe Ladroue and I had a go at analysing employment statistics for the West Midlands. In particular, we were looking for correlations between employment data and other factors, such as census data about age and gender. As with all data mining work, the most difficult and time-consuming job was cleaning the available data before it could be used in any analysis. Christophe wrote a very clear account of the work he did using R to deal with nomis data. You can see a summary of our results in the video below.
… and if you want to download the data yourself, it is publicly available here: