Archive for October, 2009

headless: A tool for running programs… headlessly

Saturday, October 17th, 2009

I talked to Josiah earlier today and got him to install Xvfb (X Virtual FrameBuffer) on the cluster, so now you can run Processing and other graphical applications there. Xvfb emulates a graphical display, so as far as your programs are concerned they’re running in a full-fledged graphical environment. All you have to do is launch your application with my “headless” script. Headless simply creates a virtual display with Xvfb and tells your program to use it instead of the default display.
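In rough outline, the wrapper does something like the following. This is a minimal sketch of the idea, not the actual script; the display number, screen geometry, and the fallback branch (running the command directly when Xvfb isn’t available) are all assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the headless idea: run a command under a virtual X display
# provided by Xvfb. The display number and geometry are assumptions.
run_headless() {
    num=${HEADLESS_DISPLAY:-99}   # assumed spare display number
    if command -v Xvfb >/dev/null 2>&1; then
        Xvfb ":$num" -screen 0 1024x768x24 &
        xvfb_pid=$!
        DISPLAY=":$num" "$@"      # the program only sees the virtual display
        status=$?
        kill "$xvfb_pid" 2>/dev/null
        return "$status"
    else
        "$@"                      # illustrative fallback: no Xvfb, run as-is
    fi
}
```

With something like this in place, run_headless ./doawesome 1 33 7 would behave the way ./headless ./doawesome 1 33 7 does on the cluster.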

To use headless:
Download the script and copy it to your home directory on the cluster:

scp -p -r headless YourUserName@fly.hampshire.edu:~/

Now say your graphical program is called doawesome and takes three arguments. Instead of telling tractor or multiquery to issue the command:
./doawesome 1 33 7
You tell it to use the headless script instead:
./headless ./doawesome 1 33 7

One more tip for Processing users:
When you’re ready to run your sketch on the cluster, open it in Processing and click File -> Export Application, then select Linux and hit Export. This should create a directory called application.linux inside your sketch’s directory, and within application.linux a shell script that launches your sketch. To run it on the cluster, just copy the application.linux directory to your home directory:

scp -p -r YourSketch/application.linux YourUserName@fly.hampshire.edu:~/

Then issue jobs that use the headless script to launch the shell script inside ~/application.linux:

multiquery './headless ~/application.linux/doawesome'

Looking for job queue alternatives

Monday, October 12th, 2009

I spent some time today looking into alternative job queuing solutions for running stuff on the cluster. After some unnecessarily difficult detective work, I figured out that the system apparently used most often with Rocks is Maui, which is released as open source by a commercial enterprise selling some hella complicated stuff. (All this Rocks stuff is hilariously difficult to interpret, by the way. There’s really no documentation that tells you what anything is for, or how to do anything other than install things.) It would be rad and everything to try to implement a somewhat more open/accessible system for cluster job queuing than Tractor, but this is not an area where anything is straightforward and easy. Given Hampshire’s resources, its existing support relationship with Pixar, and the institutional familiarity with Alfred and Tractor, writing a system that targets Tractor is the only thing that makes sense to me right now as a first step.

Tractor: a job queue system from Pixar

Wednesday, October 7th, 2009

Shauna and I met with Josiah Erickson on Monday, and we learned the very useful fact that the cluster does indeed have a load-balancing job queue system operating on it called Tractor. It’s one of the tools distributed with RenderMan, but the scripts one writes to spool tasks are quite generic and should be useful for a variety of applications. When I actually start writing the GP experiment manager (very soon now), it will queue jobs by producing Tractor scripts as output and spooling them. There are a couple of quirks with Tractor as it is presently configured that tend to result in permissions errors, but hopefully, by working with Josiah and being clever with scripting, it will be possible for users to employ the system without having to do stuff with permissions editing. Or at least, not too much of it.
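For a sense of what a spooled job might look like, here is a rough sketch in the Alfred-style job language that Tractor inherits from Pixar’s Alfred. The job title, task names, and command are made-up placeholders, and the exact syntax accepted by the cluster’s configuration may differ:

```
Job -title {doawesome batch} -subtasks {
    Task {run 1} -cmds {
        RemoteCmd {./headless ./doawesome 1 33 7}
    }
    Task {run 2} -cmds {
        RemoteCmd {./headless ./doawesome 2 33 7}
    }
}
```

The experiment manager would generate a file like this for each batch of runs and hand it to the spooler, letting Tractor farm the tasks out across the cluster.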