Doing TDD with Max (and JavaScript)

JULY 1, 2014 in MAX PATCHING

Your opinion of unit tests might range from their being the best thing since sliced bread, through to their being a waste of time that should be abolished in favour of end-to-end tests. Another possibility is that you don't care about testing your code at all, in which case I recommend you grab your cowboy hat and leave. Alternatively, you could hang around for a little bit and possibly find yourself intrigued by writing unit tests and how they might make your life a little bit easier.

You might fear that writing unit tests will take more time, and won't really give much benefit in return for the time you spend writing them. I've found that when tackled properly, unit tests take minimal effort to write, and at the same time force you to verbalize the exact requirements you have for the piece of functionality that you're going to implement. It's very satisfying to write a small test suite, see that all the tests go red at first, and then see them start to go green as you implement the functionality that you have specified. You will need to write some more lines of code, sure, but you will also save a lot of time when changing bits of your implementation or thinking up new features (aka feature creep). So in my opinion, writing unit tests even on private projects can make a lot of sense.

Alright then, that was a slightly long intro, so let's get to the core of this post. While working on a new module for the Midular project (a Max for Live project that I presented with a couple of friends at the Midihack event in Stockholm back in May of this year), I had an idea for a module that would pitch incoming MIDI notes in such a way that it creates new notes whenever the pitch value is changed. This presents a few challenges, such as keeping track of up to 127 notes at a time and taking care to always emit a note off before emitting a note on. Even though the implementation of this logic might be simple, it's also prone to subtle bugs and edge cases that might not be handled properly.

In the Max environment, pieces of logic are encapsulated into objects that act on a set of inputs and produce some kind of output in return. The logic that goes on inside each object is completely separate from the flow of the patch itself, and is therefore a perfect candidate for unit testing as I see it. In other words: if I can ensure that any conceivable input is translated into the proper outputs, then the object will perform as expected in any given patch. Furthermore, if I add new features to my object, I want to ensure that I don't break any pre-existing logic that existing patches might depend upon. My suite of unit tests handles these concerns for me, and alerts me whenever something isn't working as it should.

Max provides a number of ways to create these objects, which are called externals in Max-speak, and for me the fastest way to the finish line was to use JavaScript. To get my TDD environment up and running, I created a simple build using GruntJS that triggers unit tests written with the Jasmine testing framework, run by the Karma test runner (using the PhantomJS headless browser). With this setup in place, I can simply type "grunt" at my command line and watch as my external is fed various types of input and evaluated on what it produces in return.
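For reference, the dev dependencies behind a setup like this might look roughly as follows in package.json (the package names are the real npm packages for this toolchain; the version ranges are illustrative, around what was current in mid-2014, and not taken from the actual project):

```json
{
  "name": "m4l-midular",
  "private": true,
  "devDependencies": {
    "grunt": "~0.4.5",
    "grunt-karma": "~0.8.3",
    "karma": "~0.12.16",
    "karma-jasmine": "~0.1.5",
    "karma-phantomjs-launcher": "~0.1.4"
  }
}
```

With these installed via npm install, the karma-jasmine and karma-phantomjs-launcher plugins are what make the 'jasmine' framework and 'PhantomJS' browser entries in the gruntfile below resolve.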

[Image: Screenshot 2014-06-29 23.38.02]

On the subject of speedy development and the efficacy of writing unit tests, consider the time it takes to run your automated tests versus manually testing your patch(es) for each code change. Now that you know a little bit about the what and why, let’s look at some details about the how:

My directory structure looks like this:

[Image: m4l-midular directory structure]

Of interest here are gruntfile.js, which contains the build script used to run the tests; _max_api_placeholder.js, which defines some dummy functions that the Max externals expect to exist (meaning that they can subsequently be mocked out using Jasmine); and the tests themselves, which always end in .spec.js. We'll have a look at these files in turn.

The gruntfile.js is very simple and looks like this (if you’re unfamiliar with grunt, head over to gruntjs.com to learn the basics):

module.exports = function(grunt) {
  grunt.initConfig({
    karma: {
      options: {
        browsers: ['PhantomJS'],
        frameworks: ['jasmine'],
        singleRun: true
      },
      noteclock: {
        options: {
          files: ['js/common/*.js', 'js/noteclock.js', 'js/noteclock.spec.js']
        }
      },
      notepitcher: {
        options: {
          files: ['js/common/*.js', 'js/notepitcher.js', 'js/notepitcher.spec.js']
        }
      }
    }
  });

  // Load plugins
  grunt.loadNpmTasks('grunt-karma');

  // Tasks
  grunt.registerTask('default', ['karma:noteclock', 'karma:notepitcher']);
};

The purpose of this file is to set up the Karma test runner, telling it which files to read. I have made two sub-tasks within the main karma task, each one specifying its own external and related files (i.e. the test suite itself and any shared code, in this case the placeholder Max API functions). At the end of the file I tell grunt to run both test tasks as the default build, which means that I can simply type grunt<enter> to have my tests run.

Next, _max_api_placeholder.js defines a couple of functions that will be referred to from the externals themselves. During testing, however, the real Max functions are not available, so they have to be mocked out. The job of this particular file is thus to ensure that the functions are defined, which allows them to be replaced with spies using Jasmine's mocking functionality. The file is just a few lines long and looks like this:

// dummy functions to mock out the Max js callbacks

window.outlet = function(outletNumber /* , value, ... */) {
 // does nothing; replaced by a Jasmine spy in the tests
};
window.post = function() {
 // does nothing
};

Finally, here is a sample test suite. These tests describe the expected behaviour of my SuperPitcher Midular module, which will be available from my GitHub repo soon.

describe("notepitcher tests", function() {

 /*
  Inlets: 
  0: note pitch, note velocity (list)
  1: pitch (int) 

  Outlets:
  0: note pitch, note velocity (list)
 */

 describe("When sending in a note on", function() {
  beforeEach(function() {
   activeNotes = [];
   spyOn(window, 'outlet');

   inlet = 0;
   list(36, 127); // note on
  });

  it("should generate a note on for the initial note", function() {
   expect(window.outlet).toHaveBeenCalledWith(0, 36, 127);
  });

  it("should, upon receiving a new pitch value, generate a note off for the initial note and a note on for the new note (initial note with new pitch added)", function() {
   inlet = 1;
   msg_int(4);
   expect(window.outlet).toHaveBeenCalledWith(0, 36, 0);
   expect(window.outlet).toHaveBeenCalledWith(0, 40, 127);
  });

  it("should, upon receiving a note off (for the initial note), generate a note off for the new note (initial note with new pitch added)", function() {
   inlet = 1;
   msg_int(4);
   inlet = 0;
   list(36, 0); // note off
   expect(window.outlet).toHaveBeenCalledWith(0, 40, 0);
  });

  it("should ignore a second note with the same pitch", function() {
   window.outlet.reset();
   list(36, 120); // note on, different velocity
   expect(window.outlet.calls.length).toEqual(0);
  });
 });

 describe("When sending in a note on and the pitch value is set to something other than 0", function() {
  beforeEach(function() {
   activeNotes = [];
   spyOn(window, 'outlet');

   inlet = 1;
   msg_int(10); // pitch setting
   inlet = 0;
   list(36, 127); // note on
  });

  it("should take into account the current pitch setting when generating the initial note", function() {
   expect(window.outlet).toHaveBeenCalledWith(0, 46, 127);
   expect(window.outlet.calls.length).toEqual(1);
  });

  it("should generate a correct final note off when the pitch value is changed during the notes playtime", function() {
   inlet = 1;
   msg_int(-1); // pitch setting
   expect(window.outlet).toHaveBeenCalledWith(0, 46, 0);
   expect(window.outlet).toHaveBeenCalledWith(0, 35, 127);
   inlet = 0;
   list(36, 0); // note off
   expect(window.outlet).toHaveBeenCalledWith(0, 35, 0);
  });
 });
});
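To give an idea of what an implementation satisfying these tests might look like, here is a minimal sketch. To be clear, this is my illustrative reconstruction from the specs above, not the actual Midular source: a real Max external would also check the global inlet variable to tell its inlets apart, and the outlet fallback below simply mirrors the placeholder-file trick so the sketch runs outside Max too.

```javascript
// Hypothetical sketch of the notepitcher logic (not the actual Midular
// source): track sounding notes at their original pitch and re-emit them
// whenever the transpose value changes, always note off before note on.

var activeNotes = [];  // [{ pitch, velocity }], stored untransposed
var pitchOffset = 0;   // current transpose amount in semitones

// outlet() is provided by Max's js object; define a no-op fallback so
// this sketch can also run outside Max.
if (typeof outlet === 'undefined') {
  globalThis.outlet = function() {};
}

function findNote(pitch) {
  for (var i = 0; i < activeNotes.length; i++) {
    if (activeNotes[i].pitch === pitch) return i;
  }
  return -1;
}

// Handles [pitch, velocity] lists arriving at inlet 0.
function list(pitch, velocity) {
  if (velocity > 0) {
    if (findNote(pitch) !== -1) return;  // ignore duplicate note ons
    activeNotes.push({ pitch: pitch, velocity: velocity });
    outlet(0, pitch + pitchOffset, velocity);
  } else {
    var i = findNote(pitch);
    if (i === -1) return;
    activeNotes.splice(i, 1);
    outlet(0, pitch + pitchOffset, 0);  // note off at the transposed pitch
  }
}

// Handles ints arriving at inlet 1: a new transpose value.
function msg_int(newOffset) {
  if (newOffset === pitchOffset) return;
  for (var i = 0; i < activeNotes.length; i++) {
    var note = activeNotes[i];
    outlet(0, note.pitch + pitchOffset, 0);           // note off first...
    outlet(0, note.pitch + newOffset, note.velocity); // ...then the new note on
  }
  pitchOffset = newOffset;
}
```

The key design choice is storing notes at their original pitch and applying the offset only on output, so a later note off for pitch 36 can be matched back to whatever transposed note is currently sounding.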

If you’ve made it all the way here then I guess you’re quite excited about the prospect of unit testing your Max externals, so do have fun doing that, and don’t hesitate to drop me a line if you want to discuss this further. Cheers and have a nice summer! 🙂