ludum dare 29: underground city defender

This weekend was Ludum Dare again, and again Switchbreak asked me to write some music for his entry. It’s called Underground City Defender, and it’s a HTML5/JavaScript game, so you can play it in your browser here!

The original idea for the game was to make it Night Vale-themed, so I started the music with a Disparition vibe in mind. The game didn’t turn out that way in the end, but that’s okay, since the music didn’t either! It’s suitably dark and has a driving beat to it, so I think it fits the game pretty well.

My move to San Francisco is just a few weeks away, so I’ve sold most of my studio gear, including my audio interface and keyboard. That left me using my on-board sound card to run JACK and Ardour, but that turned out just fine — with no hardware synths to record from, not having a proper audio interface didn’t slow me down.

As some of you guessed, the toy in the mystery box in my last post was indeed a Teenage Engineering OP-1. It filled in as my MIDI controller here, and while it’s no substitute for a full-sized, velocity-sensitive keyboard, it did a surprisingly good job.

Software-wise, I used Rui’s samplv1 for the kick and snare drums, which worked brilliantly. I created separate tracks for the kick and snare, and added samplv1 to each, loading appropriate samples and then tweaking samplv1’s filters and envelopes to get the sound I was after. In the past I’ve used Hydrogen and created custom drum kits when I needed to make these sorts of tweaks, but having the same features (and more!) in a plugin within Ardour is definitely more convenient.

The other plugins probably aren’t surprising — Pianoteq for the pianos, Loomer Aspect for everything else — and of course, it was sequenced in Ardour 3. Ardour was a bit crashy for me in this session; I don’t know if it was because of my hasty JACK setup, or some issues in Ardour’s current Git code, but I’ll see if I can narrow it down.

studio slimdown

Last weekend, almost exactly five years after I bought my Blofeld synth, I sold it. With plans to move to the US well underway, I’ve been thinking about the things I use often enough to warrant dragging them along with me, and the Blofeld just didn’t make the cut. At first, the Blofeld was the heart of my studio — in fact, if I hadn’t bought the Blofeld, I may well have given up on trying to make music under Linux — but lately, it’s spent a lot more time powered off than powered up.

Why? Well, the music I’m interested in making has changed somewhat — it’s become more sample driven and less about purely synthetic sounds — but the biggest reason is that the tools available on Linux have improved immensely in the last five years.

Bye bye Blofeld — I guess I’ll have to change my Bandcamp bio photo now

Back in 2009, sequencers like Qtractor and Rosegarden had no plugin automation support, and even if they had, there were few synths available as plugins that were worth using. Standalone JACK synths were more widespread, and those could at least be automated (in a fashion) via MIDI CCs, but they were often complicated and had limited CC support. With the Blofeld, I could create high-quality sounds using an intuitive interface, and then control every aspect of those sounds via MIDI.

Today, we have full plugin automation in both Ardour 3 and Qtractor, and we also have many more plugin synths to play with. LV2 has come into its own for native Linux developers, and native VST support has become more widespread, paving the way for ports of open-source and commercial Windows VSTs. My 2012 RPM Challenge album, far side of the mün, has the TAL NoiseMaker VST all over it; if you’re recording today, you also have Sorcer, Fabla, Rogue, the greatly-improved amsynth, Rui’s synthv1/samplv1/drumkv1 triumvirate, and more, alongside commercial plugins like Discovery, Aspect, and the not-quite-so-synthy-but-still-great Pianoteq.

I bought the Blofeld specifically to use it with a DAW, but I think that became its undoing. Hardware synths are great when you can fire them up and start making sounds straight away, but the Blofeld is a desktop module, so before I could play anything I had to open a DAW (or QJackCtl, at the very least) and do some MIDI and audio routing. In the end, it was easier to use a plugin synth than to set up the Blofeld.

Mystery box of mystery!

You can probably guess what’s in the box, but if not, all will be revealed soon

So, what else might not make the cut? I only use my CS2X as a keyboard, so I’ll sell that and buy a new controller keyboard after moving, and now that VST plugins are widely supported, I can replace my Behringer VM-1 analog delay with a copy of Loomer Resound. I might also downsize my audio interface — I don’t need all the inputs on my Saffire PRO40, and now that Linux supports a bunch of USB 2.0 audio devices, there are several smaller options that’ll work without needing Firewire.

I’m not getting rid of all of my hardware, though; I’ll definitely keep my KORG nanoKONTROL, which is still a great, small MIDI controller. In fact, I also have two new toys that I’ll be writing about very soon. Both are about as different from one another as you could get, but they do share one thing — they’re both standalone devices that let you make music without going anywhere near a computer.

ludum dare 26: anti-minimalist music and sampled orchestras

This weekend was Ludum Dare 26, and as usual when Switchbreak enters such things, I took the opportunity to tag along. The theme was “minimalism”, but his game, called MinimaLand, deliberately eschews that theme; it tasks the player with bringing life and detail into a very minimalist world.

In MinimaLand, the player brings life to an abstract, minimalist world

I wasn’t sure at first how to fit music to the game, but it soon became clear: if I was going anti-minimalist, I wanted to use an orchestra. Ever since I heard about the Sonatina Symphonic Orchestra, a CC-licensed orchestral sample set, I’ve wanted to try recording something with it; what better time to try it than with a deadline looming!

Given that I had just a few hours for this, I kept the music itself very simple — just three chords and a short melody. The music itself is almost irrelevant in this case, though, since it’s really just a means of delivering those orchestral sounds to the player. Initially, the melody and harmony are on strings, with rhythmic staccato stabs on brass, then the whole thing repeats, with the stabs moving to strings and the melody/harmony to woodwinds and horns.

It’s funny that, even when I’m dealing with sampled instruments instead of my own synth sounds, I still think in terms of sound and feel first, and chords and melodies second. I guess that’s just how I roll!

Working with LinuxSampler

That’s a lot of LinuxSampler channels!

Not unexpectedly, I sequenced everything in Ardour 3 and hosted the SSO instruments, which are in SFZ format, in LinuxSampler, using a LinuxSampler build from a recent SVN checkout. I didn’t use anything else on this one, not even any plugins, since all it really needed was some reverb and the SSO samples already have plenty of it.

Recent versions of LinuxSampler’s LV2 plugin expose 32 channels of audio output; I guess the idea behind this is to allow you to run multiple instruments to dedicated outputs from within a single plugin instance, but I’m not sure why anyone would actually want to do that. I think my workflow, with each instrument in its own plugin instance on its own track, makes a lot more sense, so I patched the plugin to return it to a simple stereo output per instance.

Sonatina quality?

I’ve been keen to try SSO mostly to see just how usable it is, and in this case, I think it worked pretty well. With just 500MB of samples, it’s never going to sound as good as a commercial library (where individual instruments can take several GB), but some of the samples, such as the string ensembles, sound quite nice at first listen.

The biggest problem is with the availability of different articulations for each instrument. You do get staccato samples for most instruments, and pizzicato for the strings, but beyond that you just get “sustain” sounds, which are great for held notes (as long as the sample’s long enough), but far less suitable for faster legato parts. You can hear this in the horn part in the second half of the track, where some short notes take so long to start playing that they barely become audible before they end.

Many of the solo instruments are quite weak, too — you can hear audible jumps between certain notes in several of them, where the instrument jumps from one discrete sample to the next, while others have odd tuning problems.

There’s also a tonne of reverb on every instrument. SSO’s instrument samples come from a variety of sources, so each instrument has its own reverb characteristics; in an attempt to even out the differences and make them all sound at least vaguely like they’re in the same room, the library’s creator added quite a bit of extra reverb to each instrument. It’s a necessary evil, and it works, but it has a smearing effect that only exacerbates those problems with short notes.

So, SSO was well suited to this track — most notes were either staccato/pizzicato or were held for quite some time, I didn’t need to use any solo instruments, and the wall of reverb helped make it sound suitably pompous. If your needs are different, though, then you’ll have a hard time getting good results from it.

Having said that, it is far-and-away the best free option, and it’s also quite easy to get working under Linux, which can’t be said for many commercial libraries. Despite my mostly good experience with it, though, I’m keen to investigate just what commercial alternatives are available that will work under Linux.

cosplay mystery dungeon: sound design for a seven-day roguelike

I’ve spent the last week working on a game for the Seven Day Roguelike Challenge with Switchbreak and Amanda Lange, and by virtue of the fact that it’s a seven-day project, it’s now finished! It’s called Cosplay Mystery Dungeon, and you can play it here (if you have Flash installed).


A week isn’t long, but I’m really impressed with the finished game — Amanda did great work on the art and the game design, and Switchbreak put a tonne of work into the code. Here’s how my part of it all came together.

Getting in with the right crowd

It all started innocently enough, with a tweet:

Once Switchbreak was involved, it wasn’t long before I jumped on board, too. Amanda came up with the concept and had written up a lot of notes about the design, so I had a good idea of what sounds would be needed, and what they should sound like, from the get-go.

Early in the week, I worked on the basic player and weapon sounds, making a few versions of most of the weapon sounds to avoid any annoying repetition. The magic effects came later; Amanda’s design included various spells, in the form of collectable comic books, but with the deadline looming I wasn’t sure which of those would make it into the game. As it turned out, Switchbreak managed to implement them all in the closing hours, so my last hour-or-so was a race to create the matching sounds.

The sounds were a mix of synthesis (using both my Blofeld and Loomer Aspect) and recorded sounds. Some used both synthesis and recording, in fact, such as the lightsaber — after reading about how the sound was created originally, I created a suitable humming sound, played it through one of my monitors, and then swung my mic back and forth past the speaker, recording the results.

Music in a hurry

I hadn’t planned to write music for the game, but it felt so odd without it that, with less than a day left, I decided to take a stab at writing something. I’ve written short pieces within hours for past Switchbreak games, but they’ve been much smaller than this. A run-through of Cosplay Mystery Dungeon can take an hour or more, not to mention the time spent on unsuccessful attempts, so the music needed enough length and variety to carry it over a longer playtime.

I started with the bass line and fast-paced drums, and I knew from the start that I wanted to add a later section with slower, glitchy drums, so those went in early, too. Soon after, I nailed down the chord progressions and structure, and started filling in the melody, pad, and stabby brass lines.

This is what an all-MIDI, all-softsynth Ardour session looks like

As with my RPM Challenge work, I worked quickly by sticking with MIDI (no bouncing to audio), using softsynths instead of hardware, and mixing on-the-fly, with minimal EQ and compression (none at all, in this case). Synths do let you cheat a bit — if you find that a part is too bright or too dull, or needs more or less sustain, you can just edit the synth patch (tweaking filters and envelopes, for example) instead of using EQ and compression to fix those things after the fact. You can’t solve every mix issue that way, but I find it gets me to a perfectly decent, listenable mix more quickly than I could otherwise.

All up, I think the music took about 5-6 hours to record, with another half-hour or so after that creating musical cues for the endgame screen and the game over screen, using the same instruments. That left me with just enough time to finish the magic sound effects before the deadline.

Loomer Aspect and Sequent earned their keep, alongside open-source plugins like Invada Tube and the TAL and Calf delays

Loomer Aspect really paid for itself on this one. I used TAL NoiseMaker on the chorus lead sound (it’s a modified preset), and the Salamander Drumkit with the LinuxSampler plugin for the drums, but every other sound came from Aspect, mostly using patches that I created on-the-fly. For such a capable synth, it’s surprisingly easy to program — everything’s laid out in front of you, and it’s fairly easy to follow. It lacks the Blofeld’s distortion options, but using distortion plugins in Ardour (TAP TubeWarmth and Invada Tube Distortion) helped address that.

I also had an excuse to use Loomer Sequent — it provided the glitch effects on the drums in the slower section. The presets were all a bit too random to be usable on such sparse parts, so I edited the effects sequence in Sequent to match the parts, adding just a bit of randomness to its loop-slicing parameters.

This was the first track I’d recorded since the official release of Ardour 3, too. It worked really well — it was stable, reliable, and predictable throughout, a definite improvement on the betas. If you haven’t tried it yet, now’s definitely the time!

dr strangedrive or: how I learned to stop worrying and love SSDs

I’ve had bad luck with hard drives lately — in the last month or so I’ve lost two of the drives from my desktop PC. Luckily, I’d set up RAID-1 for my Linux install just beforehand, so I didn’t lose anything important (just my Windows drive, hah), but with just one drive left, I needed some kind of replacement.

I could’ve bought another hard drive, but damnit, spinning disks are from the past, and we’re living in the future! Instead, I bought myself a shiny new SSD.

Wolf in mini-sheep’s clothing

To be specific, I got a 256GB Crucial M4 — it’s not the latest and greatest SSD, but it’s been on the market long enough to prove its reliability. It looks so unassuming in its tiny, silent 2.5″ case, but it’s crazy-fast, with read speeds of 450MB/s, write speeds of about 260MB/s (not as fast as some newer drives, but perfectly respectable), and insanely-fast seek times that can make it dozens or even hundreds of times faster than a hard drive in real-world applications.

More than anything else, an SSD makes your PC feel strangely snappy. Boot times and application launch times both benefit hugely — Firefox now takes less than a second to spring to life, even if I’ve only just booted my PC, and starting LibreOffice takes maybe half a second.

Even when attached to a 3.5″ bay extender, SSDs look tiny compared to 3.5″ hard drives

To get some numbers, I tested something that’s always been slow on my studio PC: loading large instruments into LinuxSampler. LS streams most of the sample data on-the-fly, but it still needs to cache the start of each sample into RAM, and that requires a bunch of seeking. Here you can see the load times for Sampletekk’s 7CG Jr, a 3GB GigaSampler file, and the Salamander Grand Piano, a 1.9GB SFZ, from both my SSD and my old 1TB Seagate Barracuda 7200.12 hard drive — the SSD is about 4-to-6 times faster:

Chart: LinuxSampler instrument load times, SSD vs. hard drive (made with ChartBoot)

Is flash’s limited lifetime really worth worrying about?

So, SSDs have fantastic performance, and they’re now (relatively) affordable, but I did have one concern: the fact that flash memory cells can only be erased a certain number of times before they wear out. Modern SSDs use techniques like wear-leveling and over-provisioning to minimise writes to each flash cell (this Ars Technica article is a great read if you want to know more), but it’s hard not to think that every byte you write to the drive is hastening its demise.

I worried even more after I ran “iotop” to look at per-process disk usage, and saw that Firefox was writing a lot of data. It writes several things to disk on a regular basis — cached web content, known malware/phishing URLs, and crash recovery data — and that can add up to several MB per minute, or several GB per day.

To see if this really was a problem or not, I used iostat to capture per-minute disk usage stats across a typical day. I did all my usual things — I left Firefox, Chrome, Thunderbird, and Steam running the whole time, I spent my work hours working, and then I toyed with some music stuff in the evening. The results are graphed below:

Chart: SSD writes per minute over a typical day (made with ChartBoot)
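
For the record, the capture itself needs nothing fancy. Something along these lines, left running in a terminal for the day, is enough — this assumes the sysstat package’s iostat, and the exact flags can vary between versions:

```shell
# Report per-device transfer stats (-d) in megabytes (-m),
# once every 60 seconds, 1440 times (i.e. for a full day),
# appending each report to a dated log for graphing later.
iostat -d -m 60 1440 >> "iostat-$(date +%F).log"
```

Each report includes MB written per device since the last report, so turning the log into a per-minute graph is just a matter of pulling out the column for the SSD.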

There’s one hefty spike in the evening, when I copied 3.6GB of guitar samples from my hard drive to my SSD (maybe this wasn’t an entirely typical day!), but for the most part, I was writing about 5-15MB per minute to the SSD. The total for the day was 15GB.

That sounds like a lot, but it’s nothing my SSD can’t handle. It’s rated for 72TB of writes over its lifetime, and while that’s an approximate figure, it’s a useful baseline. Over a five-year lifespan, that works out to 40GB of writes a day, or 27.8MB per minute — that’s the red line on the graph above, which was well above my actual usage for almost the entire day.
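
The back-of-the-envelope maths is easy to check, using the drive’s 72TB rating and an assumed five-year lifespan (shell arithmetic is integer-only, so the results here are rounded down):

```shell
# Write budget for an SSD rated at 72TB of total writes,
# spread over a five-year lifespan (integer maths, rounded down).
rating_gb=$((72 * 1000))                        # 72TB expressed in GB
days=$((5 * 365))
gb_per_day=$((rating_gb / days))                # ~39GB/day; rounds to the 40GB figure
mb_per_min=$((rating_gb * 1000 / days / 1440))  # ~27MB/min before the decimals
echo "${gb_per_day}GB/day, ${mb_per_min}MB/min"
```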

When you see a graph like this, it flips your perceptions. If I’m happy to accept a five-year lifespan for my SSD, then every minute I’m not writing 27.8MB to it is flash lifetime that’s going to waste! Smaller SSDs tend to have shorter lifetimes, as do cheaper SSDs, but with typical desktop usage, I don’t think there’s any reason to worry about the life of your SSD, especially if you’re not using your PC 10-12 hours a day or running it 24/7 like I often do.

SSD tuning

There are dozens of SSD tuning guides out there, but most of them spend a lot of time whipping you into a “don’t write all the things!” frenzy, so instead of linking to one of those, I’ll just reiterate two things that you should do to get the most from your SSD.

The first is to enable TRIM support. This lets the OS tell the SSD when disk blocks are no longer needed (because the files they contained were deleted, for instance); that gives the SSD more spare space to use, which helps reduce drive wear and increases write performance. To enable TRIM, add “discard” to the mount options on each filesystem on your SSD, like so:

/dev/mapper/ssd-ubuntu_root  /  ext4  discard,errors=remount-ro  0  1

If you’re using LVM, like I am, then you’ll also have to edit the “/etc/lvm/lvm.conf” file, and add the line “issue_discards = 1” to the “devices” section, to make sure that LVM passes the TRIM commands through to the SSD.
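
If you want to confirm that discards are actually making it to the drive, you can also run a one-off TRIM pass by hand with fstrim (part of util-linux) — if the filesystem or the drive doesn’t support it, the command will tell you so:

```shell
# Manually discard unused blocks on the root filesystem;
# -v makes fstrim report how many bytes were trimmed.
sudo fstrim -v /
```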

The second is to select an appropriate IO scheduler. IO schedulers are bits of code within the Linux kernel that arrange read and write operations into an appropriate order before they’re sent to the disk. The default scheduler, “CFQ”, is tuned to keep latency low for desktop loads on regular hard drives, but its efforts are wasted on SSDs, where seek times are so much lower.

For SSDs, you’re better off with the “deadline” scheduler, which is designed for high throughput on servers, where disks tend to be faster, or you can even use the “noop” scheduler, which does no reordering at all. To set the scheduler on boot, add this to your “/etc/rc.local” file (most Linux distros have one of these):

echo deadline >/sys/block/sda/queue/scheduler
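
You can check which scheduler is currently active at any time — the kernel marks the one in use with square brackets:

```shell
# The active scheduler is shown in brackets,
# e.g. "noop [deadline] cfq" after the change above.
cat /sys/block/sda/queue/scheduler
```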

To be honest, the choice of IO scheduler probably won’t make much difference — it just improves performance a little (it won’t have any impact on lifespan), but your SSD is going to be so fast regardless that I doubt you’d ever notice. It’s an easy fix, though, so it’s worth the 10 seconds it’ll take to perform.

So go forth, buy an SSD, make a couple of minor tweaks, and then don’t be afraid to enjoy it!

creating a dynamic soundtrack for switchbreak’s “civilian”

Over the last week I’ve put a bunch of time in to my new game project, a Switchbreak game called Civilian. I’ve been working on music for it, but this blog post isn’t about music — it’s about the crazy stunts you can pull in modern interpreted languages.

Dynamic music in Flash?

Most Flash games use a looping MP3 for background music — it takes just a couple of lines of code to implement, and while the looping isn’t perfectly seamless (there are brief pauses at the start and end, added by the MP3 encoder) it’s usually close enough. For Civilian, though, I wasn’t satisfied with a simple looped track. It’s a game about the passage of time and about player progression, and I wanted the music to reflect those things.

What I really wanted was a dynamic music system, something that would let me alter the music’s sequence or instrumentation on-the-fly in response to the player’s actions. There was no way that was going to work with simple, non-seamless looping MP3s, though — I needed to start working with audio data on a much lower level.

Writing a low-level mixer in AS3

Thankfully, the Flash 10 APIs do give you low-level audio functionality. You can get raw audio data out of an MP3 file, and then send that to audio buffers for playback; in fact, I’d already done just that to implement a seamless MP3 looper, and that gave me a crazy idea: if I could get audio data from one MP3 and play it back, could I also get data from two or more MP3s, mix them, and play them back all at once?

Once I’d confirmed with a simple proof-of-concept that the answer was an emphatic “yes”, I set about adding more tracks, and then implementing features like panning and volume control. By this point, the amount of CPU power required to run this mixing was significant — about 40% of one core on my 1.7GHz i5 Macbook Air — but Flash had no trouble keeping up while running some simple gameplay at 60FPS.

A screenshot from my test app, with five channels of audio running

From mixer to sequencer

A few days later I had more than just a mixer: I had a simple pattern-based sequencer. Instead of looping MP3s from start to finish, it splits the MP3 tracks into bars, and then plays those bars in accordance with a sequence stored in an array in the AS3 code.

This actually fits quite well with how I tend to write my music. I can arrange the track basically how I want it in Ardour, then record each unique section of each track to audio, and string those sections together to produce a single MP3 track for each instrument. Then, I can create a sequence within the AS3 code that reassembles those sections into my original arrangement.

Each bar can have its own settings, too, somewhat like the effects on each note in a tracker. So far, these just let me set the panning or volume for each track, or set up a volume slew (ie: a fade in or fade out) to run over the course of the bar.

Making the music dynamic was just a matter of replacing the static sequence array with code that generates the sequence on-the-fly. I have pattern templates for each track, which I combine to create the sequence one bar at a time, adding or removing tracks or replacing one part with another (perhaps with a nice fade in/fade out) based on what’s happening within the game world.

Pushing interpreted languages

As if all the above wasn’t enough, I decided to add an optional audio filter on the output. For certain scenes in the game I want to be able to make the music sound like it’s coming from a radio, so I added a simple bandpass filter, based on a Biquad filter implementation from Dr. Dobb’s. If the filter is having any impact on my sequencer’s CPU usage, it’s far too small to notice.

Eventually, I gave up trying to think of efficient ways of doing things, and just started doing them in the simplest way possible. I’ve since done some optimisation work, to help retain a steady frame rate on slower systems (using my old Latitude E6400, clocked down to 800MHz, as my test machine), but those optimisations are totally unnecessary on more typical systems.

Ten years ago, I wrote audio mixing code for the GBA, and it looked something like this

The last time I wrote audio mixing code, it was for the ARM7 CPU inside the Gameboy Advance. On that system, compiled C code wasn’t fast enough, so I had to re-write the critical loops in hand-optimised ARM assembler code to get the necessary performance. To see an interpreted language do the same things so easily is still somewhat mind-boggling, but it’s a testament to the advances made in modern interpreters, and to just how fast modern PCs are.

It’s somewhat fitting that this was the week that the GNOME developers announced that JavaScript would become the preferred language for GNOME app development. That announcement caused a surprising amount of backlash, but I think it makes perfect sense: not only is JavaScript a capable and incredibly flexible language with a huge developer community, but it performs remarkably well, too. In fact, I doubt that any other interpreted language has ever had as much developer time invested in improving its performance.

The writing’s on the wall for Flash, of course, but HTML5 and JavaScript are improving rapidly, and frameworks are being written that should make it just as easy to write games for them as it is to write for Flash today. When that happens, it should be a simple matter to port my dynamic music system to JavaScript, and I’ll be very excited to see that happen.

sketchbook: faking guitars

As someone who plays keyboards, I’ve been mostly resigned to the fact that I can’t put guitars in any of my tracks, but after playing with some of the guitar samples available at the Flame Studios website, I’m cautiously optimistic. Flame Studios has high-quality recordings of various guitars (like the Fender Telecaster and Gibson Les Paul) in GigaStudio format, and when used appropriately, they sound great. Their website has been down for most of the last few years, but I’m hoping that it’s now back for good.

Most of the recordings are straight from the guitar’s line-out, which I think is ideal: if you want a clean sound, you can use it as-is, or you can run it through your choice of amp simulation to get a more typical guitar sound (something I’ve always been curious about but never had a chance to try). I was testing the samples using the LinuxSampler LV2 plugin in an Ardour 3 MIDI track, so I decided to try Guitarix, which as of version 0.25 includes some pre-built combinations of tube amp, tonestack, and cabinet as LV2 plugins.

The new Guitarix LV2 plugins, alongside some old favourites

So far, I’ve been quite impressed with Guitarix. I haven’t had much experience with amp simulations, but even so, with appropriate twiddling of knobs I’ve been able to get some crunchy distorted sounds, and some cleaner sounds that add a lot of character without over-the-top distortion. There are other options — Rakarrack has some amp/cabinet simulation features (though it focuses more on effects), and the C* Audio Plugin Suite provides amp/cabinet simulations as LADSPA plugins — but the simplicity and quality of the Guitarix LV2 plugins won me over.

The sound itself is just one piece of the puzzle — the other is to play things in a believable way. Keyboards can do a lot of things that guitars can’t, so you have to limit yourself to playing parts that at least sound like they would be possible on a guitar. Arpeggios with wide gaps between notes are an easy cheat, but if you want to play chords, a list of guitar chords mapped out on the keyboard, like this one, is very helpful.

Of course, the reverse is true, too — guitars can do a lot of things that keyboards can’t. Many of the Flame Studios guitars include extra samples of finger slides and other guitar sounds beyond just the notes, which you can sprinkle into a part to add some realism. Subtle MIDI pitch bends can help add some expression, too, though they really stand out if they’re done poorly. You’re not going to be able to capture the subtlety and nuance of a real guitar, but with a bit of care, I think you can definitely create basic guitar parts that sound convincing enough to work in a mix.

On to the sketch! Using the Telecaster line-in sample set recorded from the bridge pick-up, I came up with a simple little arpeggio-ish riff thing, and then tried running it through various Guitarix settings. In the sketch you hear three versions of this riff: the first is completely clean, the second uses the Guitarix GxAmplifier-IV plugin with fairly heavy drive settings and a CAPS Plate reverb, and the third uses the GxAmplifier-II plugin with much cleaner settings, along with a Calf Vintage Delay, dRowAudio Flanger, and the CAPS Plate reverb again.

mp3 | vorbis | 1:07

obligatory start of the year post

Before it gets too far into January, I wanted to take a quick look back at last year. I had been planning to record an EP, along the lines of the track Texel that I released in late 2011, but instead, I took part in the RPM Challenge and recorded a full album. From RPM I hoped to get a rough collection of tracks with a few good ideas I could re-use on the EP; instead, I got an album that, while unpolished, felt complete, and I didn’t like the idea of cannibalising it for another project.

I could’ve recorded more new work later in the year, but nothing really came to me; perhaps it’s a cliché, but I didn’t feel like I had much to say. I did have a great time working on a couple of game projects, though — working to a brief on Candy Grapple turned out to be a lot of fun, and I’m hoping I get more opportunities like that in the future.

While I didn’t record much this year, I think I spent more time at the keyboard than I have since high school. I haven’t been happy with my playing, so I grabbed some books and went back to basics, re-learning how to sight read and working on my technique. I still feel like a beginner when I sit down to play something for the first time, but I feel a little less that way each time I do.

This year? I have another game project to work on — the game itself is pretty far along already, so I’m pretty confident that this one will see the light of day. That’s the focus for now, along with piano practice. Beyond that, it’s anyone’s guess!

sketchbook: aspect and sequent

I got two great new studio toys for Xmas: Loomer Aspect and Sequent. This sketch is a quick demo I made while getting a bit of a feel for them both. Loomer’s plugins are all available as native Linux VSTs (as well as Windows and OS X), so they work well within Ardour 3.

Aspect is an analog-style soft synth with hugely flexible modulation options — it’s very easy to route its modulation sources, including three envelopes and three LFOs, to a wide variety of parameters, which gives you a lot of creative power. My favourite feature so far is its unison control, which lets you use up to five voices for each note. The coolest part of this is that the unison depth is a modulation source, so you can, say, route the unison depth to the pan control to spread those voices out across the sound stage, or route it to oscillator pitch to create massive detuned sounds.

Aspect isn’t as flexible as my Waldorf Blofeld, but it goes well beyond TAL NoiseMaker while remaining quite approachable to program. My RPM album really taught me the benefit of having synth plugins to use; now I’ll be able to do a lot more in-the-box, saving time and effort.

Sequent is an entirely different beast, and it’s not the easiest thing to explain — the simplest description is that it’s a multi-effects module that lets you sequence the parameters for each effect. It can create rhythmic delays, pans, and distortions, but perhaps its most versatile effect is the looper, which lets you slice, reverse, and loop the incoming audio to produce all manner of glitchy, stuttery effects. You can sequence everything precisely, or use any degree of randomness that you like, and it’s even MIDI-controllable, which opens possibilities for live use.

Now, for the sketch. It’s based on an Aspect pad that uses a clock-synced LFO routed to the filter cutoff, giving it a rhythmic rise and fall (I tried this using MIDI clock sync on the Blofeld on an RPM track, but it didn’t quite work). On top of that, I’ve added some simple percussion, again using Aspect — the kick is one of the included presets, but the hat and snareish-thing are my own patches.
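The idea behind that pad is simple enough to sketch in code. Here’s a hypothetical Python version — a one-pole lowpass filter whose cutoff follows a tempo-synced LFO, sweeping a raw sawtooth up and down once per bar. The rates, ranges, and the filter itself are my own stand-ins, not Aspect’s actual parameters:

```python
import math

SR = 44100   # sample rate
BPM = 120
# one LFO cycle per bar (four beats) — the "clock-synced" part
lfo_hz = BPM / 60 / 4

def onepole_coeffs(cutoff_hz):
    """Coefficients for a simple one-pole lowpass at the given cutoff."""
    b = math.exp(-2 * math.pi * cutoff_hz / SR)
    return 1 - b, b

def filtered_saw(seconds, freq=110.0):
    """A sawtooth run through a lowpass whose cutoff rises and
    falls with a tempo-synced LFO."""
    n = int(SR * seconds)
    out, y = [], 0.0
    for i in range(n):
        t = i / SR
        saw = 2 * ((t * freq) % 1.0) - 1.0       # raw sawtooth in [-1, 1)
        # LFO sweeps the cutoff between roughly 200 Hz and 2 kHz
        lfo = 0.5 * (1 + math.sin(2 * math.pi * lfo_hz * t))
        a, b = onepole_coeffs(200 + 1800 * lfo)
        y = a * saw + b * y                      # one-pole lowpass step
        out.append(y)
    return out
```

In Aspect the same thing is just a matter of picking the clock-synced LFO as a modulation source and the filter cutoff as its target — no code required.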

While the kick keeps time, the hat and snare are sent through Sequent to glitch them up. I used a Sequent preset for this, which operates mostly randomly — if I were using it for real, I think I’d want to either remove the randomness or record a bunch of random loops to audio and hand-pick the best ones.

As you’d expect, I recorded this in Ardour 3 — it’s shaping up very nicely right now, so I’m hoping we won’t have much longer to wait before the final 3.0 release.

mp3 | vorbis | 26 seconds

spooky october project: candy grapple

Things have been quiet here of late, but I’ve actually been quite busy! I’ve just finished the sound design for Candy Grapple, the latest game from my good friend Switchbreak. It’s based on one of his Ludum Dare games, Waterfall Rescue, but it’s been fleshed out into a full game, with much more complete gameplay, many more levels, and a spooky Halloween theme. It’s out now for Android, and there’s an iOS version on the way, too.

Switchbreak asked me to make some suitably spooky-cheesy music for it, and I happily agreed; once I started working on that, I realised he’d also need sound effects, so I offered to create those, too. Read on for details!

Background music

The bulk of my time went into the in-game background music. Halloween music was new territory for me, but my mind went straight to The Simpsons Halloween specials, and the harpsichord and theremin closing credits. I thought about other “spooky” instruments and came up with the organ, and while it’s not spooky as such, the tuba seemed suitably ridiculous for the kooky carnival sound I was after.

I didn’t want to over-use the theremin, so I stuck with organ for the melody for the most part, and saved the theremin for the bridge, where the harpsichord and tuba drop away in favour of some organ triplets and piano bass notes.

A standard drum kit didn’t seem like a good fit (with that bouncy tuba part, it was in danger of becoming a polka), so I stuck with more random, wacky bits of percussion, like castanets and a vibraslap. I did use some cymbal rolls and crashes in the bridge, though.

Now, for the instruments: I used Pianoteq for the harpsichord and piano, as you’d probably expect; the percussion sounds were from the Sonatina Symphonic Orchestra, played using the LinuxSampler plugin; and the theremin was a simple patch on the Blofeld.

Pianoteq doesn’t just simulate pianos — it also handles other melodic percussion, like harpsichords

The tuba and organ, surprisingly, come from the Fluid GM soundfont. I’m not usually a fan of instruments from GM sets, and I did try a few alternatives, but the Fluid sounds were very well-behaved and sat well in the mix, so I didn’t let myself get hung up on where they came from.

Faking the theremin was fairly straightforward — it’s just a single sine-wave oscillator, but with some portamento to slur the pitch changes and an LFO routed to the oscillator pitch to add vibrato, both of which make that sine wave sound suitably theremin-ish.
I used TAL NoiseMaker at first, but switched to the Blofeld so I could use the modwheel to alter the amount of vibrato (the Blofeld’s modulation matrix makes this sort of thing easy); in hindsight, it would’ve been just as easy to stick with NoiseMaker and alter the vibrato by automating the LFO depth.
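That whole recipe — one sine oscillator, a pitch glide between notes, and an LFO on the pitch for vibrato — fits in a few lines of code. This is a hypothetical stdlib-only Python sketch of the patch (the function and parameter names are mine, not anything from NoiseMaker or the Blofeld):

```python
import math
import struct
import wave

SR = 44100  # sample rate

def theremin(f_start, f_end, seconds, vib_rate=5.0, vib_depth=0.01):
    """Single sine oscillator with portamento (a linear pitch glide
    from f_start to f_end) and an LFO on the pitch for vibrato."""
    n = int(SR * seconds)
    samples = []
    phase = 0.0
    for i in range(n):
        glide = f_start + (f_end - f_start) * (i / n)     # portamento
        lfo = math.sin(2 * math.pi * vib_rate * i / SR)   # vibrato LFO
        freq = glide * (1.0 + vib_depth * lfo)
        phase += 2 * math.pi * freq / SR
        samples.append(math.sin(phase))
    return samples

# Slur from A4 up to E5 over one second, then write a mono 16-bit WAV.
audio = theremin(440.0, 659.25, 1.0)
with wave.open("theremin.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes(b"".join(
        struct.pack("<h", int(32767 * 0.8 * s)) for s in audio))
```

The `vib_depth` here maps to what the modwheel was controlling on the Blofeld — ride it up and down and the sine starts to sound convincingly theremin-ish.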

The mix came together fairly quickly. There’s a bunch of reverb (I had trouble getting the IR plugin working, so I used TAP Reverberator instead), a little EQ on the tuba and organ to brighten them a bit, and some compression on the piano to add sustain, but that’s about it as far as effects go. The only tricky part was making sure the transition into the bridge wasn’t too abrupt, but all that really required was some careful balancing of levels.

It was, of course, all recorded and mixed in Ardour 3 — it has an annoying MIDI monitoring bug right now, but I’m hoping that’ll be fixed soon.

Intro music

I wanted to add some music to the title screen, too, so I came up with a little organ fanfare-ish thing and recorded it into Ardour. The organ is the setBfree plugin, a Hammond B3 emulation based on an old app called Beatrix.

Beatrix had taken on near-legendary status in Linux audio circles, partly due to its great sound, and partly due to being near-impossible to run. It lacked JACK support and had various other issues, and its strict licensing forbade forking it or distributing patched versions.

Somehow, though, the setBfree devs managed to negotiate a suitable licence, and have added JACK support, LV2 plugin support, and a basic GUI. The GUI is a separate app that talks to the synth engine (whether it’s the JACK app or the LV2 plugin) via MIDI; it lets you configure the organ stops manually, or load presets.

setBfree’s GUI is a stand-alone app that talks to the synth via MIDI

The thunder sound was my own recording — I have a habit of setting up my Zoom H1 and letting it record during thunderstorms, and that’s finally come in handy!

Sound effects

Sound effects are hard; I’ve had a little experience with this, working on another game for Switchbreak that’s still in development, but it’s all fairly new to me. I used synths for some — Pianoteq came in handy once again here, for its tubular and church bells — but the rest were recorded sounds, mostly of me using things to hit other things. For the flapping bat wings, for instance, I slapped rubber gloves together, an idea I saw on this list of sound effects techniques.

I’m pretty happy with the fact that there are two vocal samples in there, too — the ghost and the witch are both me. The witch’s cackle just took some pitch shifting and a bunch of reverb.
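The trick behind the cackle is just that playing a recording back faster raises its pitch. A naive resampling sketch in Python — illustrative only, not whatever the actual plugin chain did:

```python
def pitch_shift(samples, factor):
    """Crude pitch shift by resampling: reading through the buffer
    `factor` times faster raises the pitch by that ratio. It also
    shortens the sound — a real pitch shifter would time-stretch
    to compensate, but for a cackle, faster is a feature."""
    out_len = int(len(samples) / factor)
    return [samples[int(i * factor)] for i in range(out_len)]

# A factor of 1.5 takes the recording up a perfect fifth —
# roughly the treatment that turned my laugh into a witch.
original = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5] * 100
cackle = pitch_shift(original, 1.5)
```

Nearest-neighbour resampling like this sounds a bit crunchy; a proper resampler interpolates between samples, but for a Halloween sound effect the crunch doesn’t hurt.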

Trailer video

Video editing in progress, using Kdenlive

As the game neared completion we realised it’d need a trailer, so I volunteered to make one, using Kdenlive. I used ffmpeg to record video from the Flash version of the game, then brought that into Kdenlive, where I composited it on top of the phone image and background. It was a fairly straightforward edit, but I had some fun with it — I hadn’t played with wipes before now, for instance, so I took the opportunity to ham it up and throw some in.