ludum dare 29: underground city defender

This weekend was Ludum Dare again, and again Switchbreak asked me to write some music for his entry. It’s called Underground City Defender, and it’s an HTML5/JavaScript game, so you can play it in your browser here!

The original idea for the game was to make it Night Vale-themed, so I started the music with a Disparition vibe in mind. The game didn’t turn out that way in the end, but that’s okay, since the music didn’t either! It’s suitably dark and has a driving beat to it, so I think it fits the game pretty well.

My move to San Francisco is just a few weeks away, so I’ve sold most of my studio gear, including my audio interface and keyboard. That left me using my on-board sound card to run JACK and Ardour, but that turned out just fine — with no hardware synths to record from, not having a proper audio interface didn’t slow me down.

As some of you guessed, the toy in the mystery box in my last post was indeed a Teenage Engineering OP-1. It filled in as my MIDI controller here, and while it’s no substitute for a full-sized, velocity-sensitive keyboard, it did a surprisingly good job.

Software-wise, I used Rui’s samplv1 for the kick and snare drums, which worked brilliantly. I created separate tracks for the kick and snare, and added samplv1 to each, loading appropriate samples and then tweaking samplv1’s filters and envelopes to get the sound I was after. In the past I’ve used Hydrogen and created custom drum kits when I needed to make these sorts of tweaks, but having the same features (and more!) in a plugin within Ardour is definitely more convenient.

The other plugins probably aren’t surprising — Pianoteq for the pianos, Loomer Aspect for everything else — and of course, it was sequenced in Ardour 3. Ardour was a bit crashy for me in this session; I don’t know if it was because of my hasty JACK setup, or some issues in Ardour’s current Git code, but I’ll see if I can narrow it down.

studio slimdown

Last weekend, almost exactly five years after I bought my Blofeld synth, I sold it. With plans to move to the US well underway, I’ve been thinking about the things I use often enough to warrant dragging them along with me, and the Blofeld just didn’t make the cut. At first, the Blofeld was the heart of my studio — in fact, if I hadn’t bought the Blofeld, I may well have given up on trying to make music under Linux — but lately, it’s spent a lot more time powered off than powered up.

Why? Well, the music I’m interested in making has changed somewhat — it’s become more sample driven and less about purely synthetic sounds — but the biggest reason is that the tools available on Linux have improved immensely in the last five years.

Bye bye Blofeld — I guess I’ll have to change my Bandcamp bio photo now

Back in 2009, sequencers like Qtractor and Rosegarden had no plugin automation support, and even if they had, there were few synths available as plugins that were worth using. Standalone JACK synths were more widespread, and those could at least be automated (in a fashion) via MIDI CCs, but they were often complicated and had limited CC support. With the Blofeld, I could create high-quality sounds using an intuitive interface, and then control every aspect of those sounds via MIDI.

Today, we have full plugin automation in both Ardour 3 and Qtractor, and we also have many more plugin synths to play with. LV2 has come into its own for native Linux developers, and native VST support has become more widespread, paving the way for ports of open-source and commercial Windows VSTs. My 2012 RPM Challenge album, far side of the mün, has the TAL NoiseMaker VST all over it; if you’re recording today, you also have Sorcer, Fabla, Rogue, the greatly-improved amsynth, Rui’s synthv1/samplv1/drumkv1 triumvirate, and more, alongside commercial plugins like Discovery, Aspect, and the not-quite-so-synthy-but-still-great Pianoteq.

I bought the Blofeld specifically to use it with a DAW, but I think that became its undoing. Hardware synths are great when you can fire them up and start making sounds straight away, but the Blofeld is a desktop module, so before I could play anything I had to open a DAW (or QJackCtl, at the very least) and do some MIDI and audio routing. In the end, it was easier to use a plugin synth than to set up the Blofeld.

Mystery box of mystery!

You can probably guess what’s in the box, but if not, all will be revealed soon

So, what else might not make the cut? I only use my CS2X as a keyboard, so I’ll sell that and buy a new controller keyboard after moving, and now that VST plugins are widely supported, I can replace my Behringer VM-1 analog delay with a copy of Loomer Resound. I might also downsize my audio interface — I don’t need all the inputs on my Saffire PRO40, and now that Linux supports a bunch of USB 2.0 audio devices, there are several smaller options that’ll work without needing FireWire.

I’m not getting rid of all of my hardware, though; I’ll definitely keep my KORG nanoKONTROL, which is still a great, small MIDI controller. In fact, I also have two new toys that I’ll be writing about very soon. The two are about as different from one another as you could get, but they do share one thing — they’re both standalone devices that let you make music without going anywhere near a computer.

ludum dare 26: anti-minimalist music and sampled orchestras

This weekend was Ludum Dare 26, and as usual when Switchbreak enters such things, I took the opportunity to tag along. The theme was “minimalism”, but his game, called MinimaLand, deliberately eschews that theme; it tasks the player with bringing life and detail into a very minimalist world.

In MinimaLand, the player brings life to an abstract, minimalist world

I wasn’t sure at first how to fit music to the game, but it soon became clear: if I was going anti-minimalist, I wanted to use an orchestra. Ever since I heard about the Sonatina Symphonic Orchestra, a CC-licensed orchestral sample set, I’ve wanted to try recording something with it; what better time to try it than with a deadline looming!

Given that I had just a few hours for this, I kept the music itself very simple — just three chords and a short melody. The music itself is almost irrelevant in this case, though, since it’s really just a means of delivering those orchestral sounds to the player. Initially, the melody and harmony are on strings, with rhythmic staccato stabs on brass, then the whole thing repeats, with the stabs moving to strings and the melody/harmony to woodwinds and horns.

It’s funny that, even when I’m dealing with sampled instruments instead of my own synth sounds, I still think in terms of sound and feel first, and chords and melodies second. I guess that’s just how I roll!

Working with LinuxSampler

That’s a lot of LinuxSampler channels!

Not unexpectedly, I sequenced everything in Ardour 3 and hosted the SSO instruments, which are in SFZ format, in LinuxSampler, using a build from a recent SVN checkout. I didn’t use anything else on this one, not even any plugins, since all it really needed was some reverb, and the SSO samples already have plenty of it.

Recent versions of LinuxSampler’s LV2 plugin expose 32 channels of audio output; I guess the idea behind this is to allow you to run multiple instruments to dedicated outputs from within a single plugin instance, but I’m not sure why anyone would actually want to do that. I think my workflow, with each instrument in its own plugin instance on its own track, makes a lot more sense, so I patched the plugin to return it to a simple stereo output per instance.

Sonatina quality?

I’ve been keen to try SSO mostly to see just how usable it is, and in this case, I think it worked pretty well. With just 500MB of samples, it’s never going to sound as good as a commercial library (where individual instruments can take several GB), but some of the samples, such as the string ensembles, sound quite nice at first listen.

The biggest problem is with the availability of different articulations for each instrument. You do get staccato samples for most instruments, and pizzicato for the strings, but beyond that you just get “sustain” sounds, which are great for held notes (as long as the sample’s long enough), but far less suitable for faster legato parts. You can hear this in the horn part in the second half of the track, where some short notes take so long to start playing that they barely become audible before they end.

Many of the solo instruments are quite weak, too — in several of them you can hear audible jumps between certain notes, where the instrument moves from one discrete sample to the next, while others have odd tuning problems.

There’s also a tonne of reverb on every instrument. SSO’s instrument samples come from a variety of sources, so each instrument has its own reverb characteristics; in an attempt to even out the differences and make them all sound at least vaguely like they’re in the same room, the library’s creator added quite a bit of extra reverb to each instrument. It’s a necessary evil, and it works, but it has a smearing effect that only exacerbates those problems with short notes.

So, SSO was well suited to this track — most notes were either staccato/pizzicato or were held for quite some time, I didn’t need to use any solo instruments, and the wall of reverb helped make it sound suitably pompous. If your needs are different, though, then you’ll have a hard time getting good results from it.

Having said that, it is far and away the best free option, and it’s also quite easy to get working under Linux, which can’t be said for many commercial libraries. Despite my mostly good experience with it, though, I’m keen to investigate just what commercial alternatives are available that will work under Linux.

cosplay mystery dungeon: sound design for a seven-day roguelike

I’ve spent the last week working on a game for the Seven Day Roguelike Challenge with Switchbreak and Amanda Lange, and by virtue of the fact that it’s a seven-day project, it’s now finished! It’s called Cosplay Mystery Dungeon, and you can play it here (if you have Flash installed).

A week isn’t long, but I’m really impressed with the finished game — Amanda did great work on the art and the game design, and Switchbreak put a tonne of work into the code. Here’s how my part of it all came together.

Getting in with the right crowd

It all started innocently enough, with a tweet:

Once Switchbreak was involved, it wasn’t long before I jumped on board, too. Amanda came up with the concept and had written up a lot of notes about the design, so I had a good idea of what sounds would be needed, and what they should sound like, from the get-go.

Early in the week, I worked on the basic player and weapon sounds, making a few versions of most of the weapon sounds to avoid any annoying repetition. The magic effects came later; Amanda’s design included various spells, in the form of collectable comic books, but with the deadline looming I wasn’t sure which of those would make it into the game. As it turned out, Switchbreak managed to implement them all in the closing hours, so my last hour-or-so was a race to create the matching sounds.

The sounds were a mix of synthesis (using both my Blofeld and Loomer Aspect) and recorded sounds. Some used both synthesis and recording, in fact, such as the lightsaber — after reading about how the sound was created originally, I created a suitable humming sound, played it through one of my monitors, and then swung my mic back and forth past the speaker, recording the results.

Music in a hurry

I hadn’t planned to write music for the game, but it felt so odd without it that, with less than a day left, I decided to take a stab at writing something. I’ve written short pieces within hours for past Switchbreak games, but they’ve been much smaller than this. A run-through of Cosplay Mystery Dungeon can take an hour or more, not to mention the time spent on unsuccessful attempts, so the music needed enough length and variety to carry it over a longer playtime.

I started with the bass line and fast-paced drums, and I knew from the start that I wanted to add a later section with slower, glitchy drums, so those went in early, too. Soon after, I nailed down the chord progressions and structure, and started filling in the melody, pad, and stabby brass lines.

This is what an all-MIDI, all-softsynth Ardour session looks like

As with my RPM Challenge work, I worked quickly by sticking with MIDI (no bouncing to audio), using softsynths instead of hardware, and mixing on-the-fly, with minimal EQ and compression (none at all, in this case). Synths do let you cheat a bit — if you find that a part is too bright or too dull, or needs more or less sustain, you can just edit the synth patch (tweaking filters and envelopes, for example) instead of using EQ and compression to fix those things after the fact. You can’t solve every mix issue that way, but it gets me a perfectly decent, listenable mix more quickly than I could otherwise.

All up, I think the music took about 5-6 hours to record, with another half-hour or so after that creating musical cues for the endgame screen and the game over screen, using the same instruments. That left me with just enough time to finish the magic sound effects before the deadline.

Loomer Aspect and Sequent earned their keep, alongside open-source plugins like Invada Tube and the TAL and Calf delays

Loomer Aspect really paid for itself on this one. I used TAL NoiseMaker on the chorus lead sound (it’s a modified preset), and the Salamander Drumkit with the LinuxSampler plugin for the drums, but every other sound came from Aspect, mostly using patches that I created on-the-fly. For such a capable synth, it’s surprisingly easy to program — everything’s laid out in front of you, and it’s fairly easy to follow. It lacks the Blofeld’s distortion options, but using distortion plugins in Ardour (TAP TubeWarmth and Invada Tube Distortion) helped address that.

I also had an excuse to use Loomer Sequent — it provided the glitch effects on the drums in the slower section. The presets were all a bit too random to be usable on such sparse parts, so I edited the effects sequence in Sequent to match the parts, adding just a bit of randomness to its loop-slicing parameters.

This was the first track I’d recorded since the official release of Ardour 3, too. It worked really well — it was stable, reliable, and predictable throughout, a definite improvement on the betas. If you haven’t tried it yet, now’s definitely the time!

creating a dynamic soundtrack for switchbreak’s “civilian”

Over the last week I’ve put a bunch of time in to my new game project, a Switchbreak game called Civilian. I’ve been working on music for it, but this blog post isn’t about music — it’s about the crazy stunts you can pull in modern interpreted languages.

Dynamic music in Flash?

Most Flash games use a looping MP3 for background music — it takes just a couple of lines of code to implement, and while the looping isn’t perfectly seamless (there are brief pauses at the start and end, added by the MP3 encoder) it’s usually close enough. For Civilian, though, I wasn’t satisfied with a simple looped track. It’s a game about the passage of time and about player progression, and I wanted the music to reflect those things.

What I really wanted was a dynamic music system, something that would let me alter the music’s sequence or instrumentation on-the-fly in response to the player’s actions. There was no way that was going to work with simple, non-seamless looping MP3s, though — I needed to start working with audio data on a much lower level.

Writing a low-level mixer in AS3

Thankfully, the Flash 10 APIs do give you low-level audio functionality. You can get raw audio data out of an MP3 file, and then send that to audio buffers for playback; I’d already done just that, in fact, to implement a seamless MP3 looper, and that gave me a crazy idea: if I could get audio data from one MP3 and play it back, could I also get data from two or more MP3s, mix them, and play them back all at once?

Once I’d confirmed with a simple proof-of-concept that the answer was an emphatic “yes”, I set about adding more tracks, and then implementing features like panning and volume control. By this point, the amount of CPU power required to run this mixing was significant — about 40% of one core on my 1.7GHz i5 MacBook Air — but Flash had no trouble keeping up while running some simple gameplay at 60FPS.
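
The mixer itself is AS3, but the core loop is simple enough to sketch. Here it is in TypeScript terms (the names and the naive linear panning are my own illustration, not the game’s actual code): each track is decoded MP3 data with a read position, and every audio callback sums all of the tracks into a single stereo buffer.

    // Hypothetical sketch of the mixing loop. Each track holds interleaved
    // stereo samples decoded from its MP3; the audio callback asks for a
    // block of output, and we fill it by summing every track.
    interface Track {
      samples: Float32Array; // decoded MP3 data, interleaved stereo
      position: number;      // current read offset, in array elements
      volume: number;        // 0..1
      pan: number;           // -1 (hard left) .. +1 (hard right)
    }

    function mixBlock(tracks: Track[], out: Float32Array): void {
      out.fill(0);
      for (const t of tracks) {
        // Simple linear panning; a real mixer might use constant-power curves
        const leftGain = t.volume * (t.pan <= 0 ? 1 : 1 - t.pan);
        const rightGain = t.volume * (t.pan >= 0 ? 1 : 1 + t.pan);
        for (let i = 0; i < out.length; i += 2) {
          out[i] += t.samples[t.position] * leftGain;          // left sample
          out[i + 1] += t.samples[t.position + 1] * rightGain; // right sample
          t.position = (t.position + 2) % t.samples.length;    // wrap = seamless loop
        }
      }
    }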

A screenshot from my test app, with five channels of audio running

From mixer to sequencer

A few days later I had more than just a mixer: I had a simple pattern-based sequencer. Instead of looping MP3s from start to finish, it splits the MP3 tracks into bars, and then plays those bars in accordance with a sequence stored in an array in the AS3 code.

This actually fits quite well with how I tend to write my music. I can arrange the track basically how I want it in Ardour, then record each unique section of each track to audio, and string those sections together to produce a single MP3 track for each instrument. Then, I can create a sequence within the AS3 code that reassembles those sections into my original arrangement.

Each bar can have its own settings, too, somewhat like the effects on each note in a tracker. So far, these just let me set the panning or volume for each track, or set up a volume slew (i.e., a fade in or fade out) to run over the course of the bar.

Making the music dynamic was just a matter of replacing the static sequence array with code that generates the sequence on-the-fly. I have pattern templates for each track, which I combine to create the sequence one bar at a time, adding or removing tracks or replacing one part with another (perhaps with a nice fade in/fade out) based on what’s happening within the game world.
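
To make that concrete, here’s a TypeScript-flavoured sketch of the data involved. The names, the fixed tempo, and the intensity-based selection are all illustrative assumptions rather than Civilian’s actual code:

    // With a fixed tempo, a "bar" is just a fixed-size window into each
    // track's decoded audio, so bar N starts at N * BAR_SAMPLES.
    const SAMPLE_RATE = 44100;
    const BPM = 120;         // assumed tempo
    const BEATS_PER_BAR = 4;
    const BAR_SAMPLES =
      Math.round((60 / BPM) * BEATS_PER_BAR * SAMPLE_RATE) * 2; // interleaved stereo

    interface BarSettings {
      volume?: number; // set the track's volume at the bar boundary
      pan?: number;    // -1..1
      slewTo?: number; // fade the volume toward this value over the bar
    }

    interface SequenceBar {
      bars: number[];                   // which bar each track should play
      settings: (BarSettings | null)[]; // optional tracker-style settings
    }

    // The dynamic version: generate the next SequenceBar from game state
    // instead of reading a static array. "intensity" stands in for whatever
    // the game world reports; "templates" maps each track to candidate bars,
    // ordered from sparse to busy.
    function nextBar(intensity: number, templates: number[][]): SequenceBar {
      return {
        bars: templates.map(candidates =>
          candidates[Math.min(candidates.length - 1,
                              Math.floor(intensity * candidates.length))]),
        settings: templates.map((_, track) =>
          track > 0 && intensity < 0.3 ? { slewTo: 0 } : null), // fade out busier tracks
      };
    }

A static array of SequenceBar entries reproduces the original arrangement; swapping in something like nextBar() is all it takes to make the music follow the game.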

Pushing interpreted languages

As if all the above wasn’t enough, I decided to add an optional audio filter on the output. For certain scenes in the game I want to be able to make the music sound like it’s coming from a radio, so I added a simple bandpass filter, based on a biquad filter implementation from Dr. Dobb’s. If the filter is having any impact on my sequencer’s CPU usage, it’s far too small to notice.
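
For the curious, a biquad bandpass really is only a few lines. This TypeScript sketch uses the standard audio-cookbook coefficients (the constant 0dB peak gain form); it’s a stand-in for the Dr. Dobb’s implementation rather than a copy of it:

    // Biquad bandpass: two samples of input and output history, a handful
    // of coefficients, and one multiply-add chain per sample.
    class BandpassFilter {
      private b0 = 0; private b2 = 0;
      private a1 = 0; private a2 = 0;
      private x1 = 0; private x2 = 0;
      private y1 = 0; private y2 = 0;

      constructor(centreHz: number, q: number, sampleRate: number) {
        const w0 = (2 * Math.PI * centreHz) / sampleRate;
        const alpha = Math.sin(w0) / (2 * q);
        const a0 = 1 + alpha;
        this.b0 = alpha / a0;  // b1 is always zero for this form
        this.b2 = -alpha / a0;
        this.a1 = (-2 * Math.cos(w0)) / a0;
        this.a2 = (1 - alpha) / a0;
      }

      process(x: number): number {
        const y = this.b0 * x + this.b2 * this.x2
                - this.a1 * this.y1 - this.a2 * this.y2;
        this.x2 = this.x1; this.x1 = x;
        this.y2 = this.y1; this.y1 = y;
        return y;
      }
    }

    // A narrow band around 2kHz gives a plausible "radio" voicing
    const radio = new BandpassFilter(2000, 2.0, 44100);

Feeding each mixed output sample through process() while the radio effect is active is the whole job, which is why its CPU cost gets lost in the noise.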

Eventually, I gave up trying to think of efficient ways of doing things, and just started doing them in the simplest way possible. I’ve since done some optimisation work, to help retain a steady frame rate on slower systems (using my old Latitude E6400, clocked down to 800MHz, as my test machine), but those optimisations are totally unnecessary on more typical systems.

Ten years ago, I wrote audio mixing code for the GBA, and it looked something like this

The last time I wrote audio mixing code, it was for the ARM7 CPU inside the Game Boy Advance. On that system, compiled C code wasn’t fast enough, so I had to re-write the critical loops in hand-optimised ARM assembler to get the necessary performance. To see an interpreted language do the same things so easily is still somewhat mind-boggling, but it’s a testament to the advances made in modern interpreters, and to just how fast modern PCs are.

It’s somewhat fitting that this was the week that the GNOME developers announced that JavaScript would become the preferred language for GNOME app development. That announcement caused a surprising amount of backlash, but I think it makes perfect sense: not only is JavaScript a capable and incredibly flexible language with a huge developer community, but it performs remarkably well, too. In fact, I doubt that any other interpreted language has ever had as much developer time invested in improving its performance.

The writing’s on the wall for Flash, of course, but HTML5 and JavaScript are improving rapidly, and frameworks are being written that should make it just as easy to write games for them as it is to write for Flash today. When that happens, it should be a simple matter to port my dynamic music system to JavaScript, and I’ll be very excited to see that happen.

spooky october project: candy grapple

Things have been quiet here of late, but I’ve actually been quite busy! I’ve just finished the sound design for Candy Grapple, the latest game from my good friend Switchbreak. It’s based on one of his Ludum Dare games, Waterfall Rescue, but it’s been fleshed out into a full game, with much more complete gameplay, many more levels, and a spooky Halloween theme. It’s out now for Android, and there’s an iOS version on the way, too.

Switchbreak asked me to make some suitably spooky-cheesy music for it, and I happily agreed; once I started working on that, I realised he’d also need sound effects, so I offered to create those, too. Read on for details!

Background music

The bulk of my time went into the in-game background music. Halloween music was new territory for me, but my mind went straight to The Simpsons Halloween specials, with their harpsichord and theremin closing credits. I thought about other “spooky” instruments and came up with the organ, and while it’s not spooky as such, the tuba seemed suitably ridiculous for the kooky carnival sound I was after.

I didn’t want to over-use the theremin, so I stuck with organ for the melody for the most part, and saved the theremin for the bridge, where the harpsichord and tuba drop away in favour of some organ triplets and piano bass notes.

A standard drum kit didn’t seem like a good fit (with that bouncy tuba part, it was in danger of becoming a polka), so I stuck with more random, wacky bits of percussion, like castanets and a vibraslap. I did use some cymbal rolls and crashes in the bridge, though.

Now, for the instruments: I used Pianoteq for the harpsichord and piano, as you’d probably expect; the percussion sounds were from the Sonatina Symphonic Orchestra, played using the LinuxSampler plugin; and the theremin was a simple patch on the Blofeld.

Pianoteq doesn’t just simulate pianos — it also handles other melodic percussion, like harpsichords

The tuba and organ, surprisingly, come from the Fluid GM soundfont. I’m not usually a fan of instruments from GM sets, and I did try a few alternatives, but the Fluid sounds were very well-behaved and sat well in the mix, so I didn’t let myself get hung up on where they came from.

Faking the theremin was fairly straightforward — it’s just a single sine-wave oscillator, but with some portamento to slur the pitch changes and an LFO routed to the oscillator pitch to add vibrato, both of which make that sine wave sound suitably theremin-ish.
I used TAL NoiseMaker at first, but switched to the Blofeld so I could use the modwheel to alter the amount of vibrato (the Blofeld’s modulation matrix makes this sort of thing easy); in hindsight, it would’ve been just as easy to stick with NoiseMaker and alter the vibrato by automating the LFO depth.
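
If you wanted to build the same thing in code rather than in a synth patch, the whole recipe fits in one loop. Here’s a TypeScript sketch with parameter values invented for illustration (the real sound lived in a Blofeld patch, not in code):

    // Fake theremin: a sine oscillator whose frequency glides toward a
    // moving target (portamento) and is wobbled by a slow LFO (vibrato).
    function renderTheremin(
      targetHz: Float32Array,  // desired pitch per sample, from the MIDI part
      sampleRate = 44100,
      glide = 0.0005,          // portamento rate per sample (0..1)
      vibratoHz = 5,           // LFO speed
      vibratoDepth = 0.01      // +/-1% pitch wobble
    ): Float32Array {
      const out = new Float32Array(targetHz.length);
      let freq = targetHz[0];
      let phase = 0;
      let lfoPhase = 0;
      for (let i = 0; i < out.length; i++) {
        freq += (targetHz[i] - freq) * glide; // slur the pitch changes
        const wobble = 1 + vibratoDepth * Math.sin(lfoPhase);
        phase += (2 * Math.PI * freq * wobble) / sampleRate;
        lfoPhase += (2 * Math.PI * vibratoHz) / sampleRate;
        out[i] = Math.sin(phase);
      }
      return out;
    }

Riding the modwheel, as described above, is then just a matter of scaling vibratoDepth over time.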

The mix came together fairly quickly. There’s a bunch of reverb (I had trouble getting the IR plugin working, so I used TAP Reverberator instead), a little EQ on the tuba and organ to brighten them a bit, and some compression on the piano to add sustain, but that’s about it as far as effects go. The only tricky part was making sure the transition in to the bridge wasn’t too abrupt, but all that really required was some careful balancing of levels.

It was, of course, all recorded and mixed in Ardour 3 — it has an annoying MIDI monitoring bug right now, but I’m hoping that’ll be fixed soon.

Intro music

I wanted to add some music to the title screen, too, so I came up with a little organ fanfare-ish thing and recorded it into Ardour. The organ is the setBfree plugin, a Hammond B3 emulation based on an old app called Beatrix.

Beatrix had taken on near-legendary status in Linux audio circles, partly due to its great sound, and partly due to being near-impossible to run. It lacked JACK support and had various other issues, and its strict licensing forbade forking it or distributing patched versions.

Somehow, though, the setBfree devs managed to negotiate a suitable licence, and have added JACK support, LV2 plugin support, and a basic GUI. The GUI is a separate app that talks to the synth engine (whether it’s the JACK app or the LV2 plugin) via MIDI; it lets you configure the organ stops manually, or load presets.

setBfree’s GUI is a stand-alone app that talks to the synth via MIDI

The thunder sound was my own recording — I have a habit of setting up my Zoom H1 and letting it record during thunderstorms, and that’s finally come in handy!

Sound effects

Sound effects are hard; I’ve had a little experience with this, working on another game for Switchbreak that’s still in development, but it’s all still fairly new to me. I used synths for some — Pianoteq came in handy once again here, for its tubular and church bells — but the rest were recorded sounds, mostly of me using things to hit other things. For the flapping bat wings, for instance, I slapped rubber gloves together, an idea I saw on this list of sound effects techniques.

I’m pretty happy with the fact that there are two vocal samples in there, too — the ghost and the witch are both me. The witch’s cackle just took some pitch shifting and a bunch of reverb.

Trailer video

Video editing in progress, using Kdenlive


As the game neared completion we realised it’d need a trailer, so I volunteered to make one, using Kdenlive. I used ffmpeg to record video from the Flash version of the game, then brought that into Kdenlive, where I composited it on top of the phone image and background. It was a fairly straightforward edit, but I had some fun with it — I hadn’t played with wipes before, for instance, so I took the opportunity to ham it up and throw some in.

sketchbook: musopen musings

Musopen is a fascinating project — it hosts public domain recordings of, and sheet music for, a large number of classical pieces. Many of the most famous classical works have long been in the public domain, but while the compositions themselves may be free to use, recordings of those works are still subject to copyright. Musopen, then, hosts recordings that have also been released into the public domain, mainly from student and college orchestras.

Nearly two years ago, Musopen’s founder had an ambitious idea: use funds from a Kickstarter project to commission classical recordings from a top-quality orchestra, which would then be released into the public domain. The campaign was a great success, and the resulting recordings are now complete. The final mixes aren’t ready yet, but I’m more excited to see that the raw multitrack recordings are available!

The sessions are in ProTools format, but the recordings themselves are WAVs that can be imported into Ardour or any other DAW quite easily. With some 560GB of high-quality orchestral stems to work with, there’s tremendous scope to incorporate these recordings into other works, or process and edit them to create entirely new works. This is an incredible gift to the recording community, and I have a feeling we’ll be hearing elements of these recordings for decades to come.

In that spirit, I spent some time over the weekend playing with one of the pieces in Ardour. I took one of the shorter (and more frantic) pieces — Mozart’s The Marriage of Figaro — and extracted a few short elements, stretching them out to create a short ambient electronic track (in the genre I affectionately call “artwank”). Beyond Ardour’s time-stretching and pitch-shifting tools, I used Argotlunar and Cumulus, which are both granular synths, to add a bit more textural variety.


mp3 | vorbis | 2:04

not-quite-announcing my next project!

Things have been decidedly quiet here after the flurry of activity across March and April, but thankfully, in the real world, things haven’t been quite so quiet. I’ve been working on a new project with a couple of really talented guys, and while I can’t say too much about it yet, I can at least reveal that it’s a game!

Unsurprisingly, I’m taking care of the audio. I was initially brought on to write some music, but as we discussed the game’s design and setting, it became clear that the soundtrack would be much more sparse and ambient than my usual video game ditties. I do have a lot of ideas for the music that will fit the mood of the game, but for now, I’m focusing on the sound effects.

Designing those sound effects has definitely been a challenge. I’ve been creating sounds from scratch on the Blofeld, and using Ardour and Audacity to process recorded sounds from my Zoom H1 recorder, and while those tools are all quite familiar, these sounds are unlike anything I’ve created before. Part of the challenge is just getting an understanding of what sounds I need to make, so I’ve been playing a few different games and even watching bits of movies to get ideas on what different things should sound like.

A new prototype of the game should be ready soon; hopefully then I can reveal a bit more about what the game is and who I’ve been working with!

rpm 2012 post-mortem: track order, making the CD

With all of my tracks completed, only one step remained before I could submit my work to RPM Headquarters: turning my collection of tracks into an album and burning it to CD. A good album can be more than the sum of its tracks, so it was important for me to do what I could within the time limit to bring the tracks together as a cohesive whole and then present that within a proper CD case.

Track order

I started thinking about track ordering about half-way through the challenge, though I didn’t put much time into it until I had finished all of the tracks. Some of them fell into place: I knew I wanted to finish with escape velocity and magnificent desolation, and periapsis seemed like a good opener. I placed free return at track 3 (the “lead single” slot), and saved track 6 for the upbeat direct ascent, to start the second half of the album with a bang.

For the other tracks, I used a bit of trial and error, slotting them in where I thought they’d fit. The end result is that the first half is generally a bit more relaxed and downbeat, while the second half is a bit more lively and upbeat.

Burning the CD

Despite the time constraints, I did spend some time on the final CD contents. I had planned to just burn each track to CD, using Brasero, but when I played the tracks back-to-back, they didn’t flow together well: the volume levels jumped around from track to track, and the pauses between tracks were too short. I figured the best way to tackle this was to import my finished tracks into a new Ardour session and make use of Ardour’s CD mastering features.

Within Ardour, it was easy to adjust the timing and volume levels between tracks to make them all flow together; I then just had to add CD track markers at the start of each track. I used my ears to adjust those relative volumes, and I had no time (and no real desire, either) to use compression to bring up the baseline loudness, so some of the tracks look much quieter than others if you look at the CD’s waveform; the chiptune tracks, for instance, seemed much louder than their waveforms suggested, so I turned those down quite a bit.

A waveform display of the entire album

Adjusting the tracks' volumes to match by ear meant that some of the tracks looked much quieter than others on a waveform display

Another advantage to mastering a CD in Ardour is the ease with which you can run tracks together without a pause in between. I couldn’t resist the urge to try this, so I brought the start of eclipse up to just before the end of free return. It’s great to hear that seamless transition between the two tracks while listening to the CD (or to the FLACs, in a player like Aqualung).

The CD master in Ardour 3

The CD master project in Ardour 3, complete with track markers

Turning the Ardour session into a CD was a matter of exporting it in just the right format — 44.1kHz, 16-bit WAV, with TOC/CUE files enabled — and then feeding the WAV and TOC file to cdrdao, a specialised “disk-at-once” CD burning tool. The TOC and CUE files are both simple text files that describe the layout of a CD; TOC is specific to cdrdao, while CUE is more generic. Burning the CD with cdrdao took just one command:

cdrdao write Session.wav.toc

The final step was to upload the CD to Bandcamp. I wanted Bandcamp downloads to sound the same as the CD, so I used bchunk to split the CD image into separate tracks, based on the CUE file. The “-w” option instructs bchunk to write the tracks in WAV format:

bchunk -w Session.wav Session.wav.cue track

Cover design

It was important to me to have an attractive CD cover design, and that design started with the cover art itself. The “far side of the mün” concept came from the downtime I’d spent playing Kerbal Space Program and listening to MOON8, a brilliant NES reinterpretation of Pink Floyd’s Dark Side of the Moon. A KSP screenshot seemed fitting as a cover, so I fired it up, flew into Münar orbit, disabled the HUD, and started grabbing shots until I got one I was happy with.

Album cover template

The completed CD jewel case templates: front (with liner notes) and back

To turn that into a CD jewel case cover, I used Inkscape, along with some excellent SVG templates. Inkscape’s a vector drawing package, so it’s well suited to this sort of design and layout work, and having a template made it easy to size everything correctly. I exported the completed templates to PDF files, and then had them printed on gloss paper and cut to size at an office supply shop.

Unfortunately, there wasn’t much I could do for the CD itself — that was just a standard blank CD-R, with “far side of the mün” scrawled on it in black Sharpie!

rpm 2012 post-mortem, track 10: magnificent desolation

109:43:16 Aldrin: Beautiful view!
109:43:18 Armstrong: Isn’t that something! Magnificent sight out here.
109:43:24 Aldrin: Magnificent desolation.

This track was the perfect bookend for this project: I started it way back on day 3, but it was the very last track I finished, on day 28. It was also a real problem child — the descending bassline motif was there from the start, but despite several attempts I just couldn’t work out how to develop it into a complete track. It also didn’t seem to fit in with anything else on the album, so even though I really liked the initial idea, I wasn’t sure it’d make it onto the album.

Inspiration struck when I decided that it could work as the closing track of the album, a little mood piece to leave things on that bittersweet note that I seem to love so much. With that decision made, everything else fell into place: it didn’t have to be long, and I could use vinyl-like effects to make it sound overly vintage, which somehow has the effect of making a lonely piece of music sound even more lonely.

Once I’d decided on that direction, there was only one name I could ever give to this track. Buzz Aldrin’s first words on the Moon aren’t as famous as Neil Armstrong’s, of course, but I’ve always found them far more poignant — they so succinctly express how strange it must’ve felt to look out across a landscape that’s simultaneously full of beauty and devoid of life.

The piano recording was a live improvisation based upon that descending bassline, which I cleaned up a little in Ardour after the fact; the piano was of course Pianoteq, running into a convolution reverb (using the IR LV2 plugin). For the vinyl sound, I tried the VyNil plugin, and it did make things sound suitably vintage, but its vinyl surface noise was essentially white noise with random pops, and I wanted more of a cyclic popping sound, like you’d get when a needle hits the same scratches on each revolution.

Instead, I used an EQ to kill the highs and the lows, which instantly makes something sound old-timey, and then added a vinyl noise sample from Freesound, which I looped for the length of the track. Adding just a little vibrato (using TAP Vibrato) to simulate the sound of a slightly warped record helped to complete the vintage feel.

"old-timey" EQ curve

Cutting away high and low frequencies, as shown in this plugin frequency response graph, goes a long way to making something sound "old"

Fading between the “vinyl” piano and the unprocessed piano was trickier than I initially expected. The idea was simple enough: I disconnected the piano track from the master bus and routed it into two new buses — one with the EQ and vibrato plugins, and one without — and then faded each bus in and out as required. I used automation to disable the vibrato (moving its “depth” to 0%) to prevent chorusing effects while both the “old” and “new” buses were playing mid-fade, but even so, the fades didn’t sound right.

As it turns out, the TAP Vibrato plugin adds latency as part of its processing, so the “old” piano bus was tens of milliseconds behind the “new” piano bus, causing an echoing effect instead of a smooth transition from one sound to the other. The solution was easy, once it occurred to me: I moved the vibrato to the piano track itself, so its latency affected both buses equally, and then used automation to set its depth to 0% during the “new” parts of the song.

It’s worth mentioning that Ardour does compensate automatically for plugin latency on audio tracks. If I’d recorded the piano to audio, and then copied and pasted it across two audio tracks instead of using buses, then these issues wouldn’t occur, assuming that the plugin advertises its latency correctly.

I wish I could’ve done a better job of the actual piano arrangement in this — it needs to be properly composed and written down, and then handed to someone with more skill on the keyboard than me — but I’m really happy with its overall feel, and the impression that it leaves me with every time I hear it at the end of the album. A good final track can really help an album make a mark on people, and I think this track manages that.