The computer has revolutionised the way we make music, but it also raises a question: how much work do you do “in the box”, using software sequencers, effects, and instruments, and how much do you do with hardware and traditional instruments? When I started making music again last year, having a powerful hardware synth was a huge enabler for me — I really do believe that it, as much as anything, is the reason I’m still making music with Linux now after so many abortive attempts over the years. Now that I have a few tracks under my belt, though, I’m as surprised as anyone to realise that I seem to be working “in the box” more and more.
That’s not to say that I started with a totally hardware-centric workflow, though. My work has always revolved around Ardour, not just for recording, but also for effects, and I’ve used Hydrogen for drums on just about everything I’ve recorded so far. However, on my first track (atlantis), the instrumentation was all Blofeld, and it was mostly played live, with just a few bits of piecemeal sequencing.
By comparison, my latest track (frozen summer) was made entirely in the box, though it’s perhaps not a fair comparison point since it was 100% sample-based and I don’t have a hardware sampler. A better comparison is my cover of Enjoy the Silence, where everything but the (Hydrogen) drums was sequenced from start to finish in Qtractor, using software synths as well as software effects.
Some of that was FluidSynth playing Soundfonts for the less “synthy” sounds (guitar, horns, etc.), but I also made extensive use of WhySynth, an analog-style synth. I even worked with WhySynth the way I’d work with my Blofeld, crafting my own patches instead of using presets for most sounds. Why did I ditch my hardware and embrace softsynths?
The box is convenient
The answer, perhaps unsurprisingly, was convenience. With a separate softsynth on each track, it’s very easy to create custom patches on each one, and then add custom effects chains that process the results on each channel, too. Using multi-mode on the Blofeld I can run multiple patches at once, and edit them individually, but the results come out of a single stereo output into my PC’s single stereo input, so I can’t add realtime effects to individual instruments unless I use the Blofeld’s (very limited) internal effects. Recall is also an issue — with softsynths, when you load your session, it’s exactly as you left it, but with a hardware synth, you usually have to set it back up yourself.
The other obvious convenience factor is portability. I spend a bit of time on buses, and working with softsynths means that I can do everything on the go. My laptop has a 2.4GHz Core 2 Duo, and it’s easily handled everything I’ve thrown at it so far.
That’s not quite the end of the story though, because I did end up using the Blofeld on that track. After sequencing it all on my laptop, I moved it to my desktop to add some polish, and while all the FluidSynth sounds stayed in the final product, I replaced some of the WhySynth sounds. It’s a perfectly serviceable analog-style synth but, try as I might, I couldn’t get enough “oomph” in the bass part, or the right filter squelch in the “bieuuw” effect sound that comes in around the second chorus. With the Blofeld, I was able to nail both sounds very quickly.
There’s also a tactile element that softsynths are typically unable to capture. A good synth is an instrument in itself, with an interface that beckons the user to create new sounds and interact with them in realtime. A softsynth might be capable of making the same sounds, but I’ve yet to find one that inspires me like a good hardware synth can.
Finding a balance
Ultimately, a lot of the choices about when to use hardware or software come down to compromise. Using softsynth plugins inside Qtractor is very convenient, but the available synths have limited possibilities. Using hardware gives me better sounds that I can program more quickly, but it ties me to my home studio, and limits my effects options while sequencing.
There’s even a middle-ground between the two extremes under Linux — standalone JACK softsynths like Specimen and PHASEX. I can run these on my laptop and, depending on how they’re set up, run a separate realtime effects rack for each of them in Ardour, but they require setup each time you start them, so you lose the convenience of total recall that you get with softsynth plugins.
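To give a flavour of what that per-session setup looks like, here’s a minimal sketch of wiring a standalone JACK synth into an Ardour track from the command line. The client and port names are assumptions for illustration — they vary by synth and session, so you’d check the real names with `jack_lsp` first.

```shell
# List every JACK port currently available, so we can see
# what the synth and Ardour have actually registered.
jack_lsp

# Hypothetical example: patch PHASEX's stereo outputs into the
# inputs of an Ardour track named "synth". Your port names WILL
# differ -- copy them from the jack_lsp output above.
jack_connect "phasex:out_L" "ardour:synth/audio_in 1"
jack_connect "phasex:out_R" "ardour:synth/audio_in 2"
```

This is exactly the kind of plumbing a plugin host does for you automatically, which is why recall is the price you pay for running synths standalone (though tools like QjackCtl’s patchbay can store and restore these connections for you).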
I think the answer to the question of when to use hardware or software for a job is “it depends” — each track is going to have its own ideal line in the sand, and that’ll vary from person to person, too. Sometimes I’m going to start with a killer sound or riff on the Blofeld and build a track around that, so it’s going to make sense to use hardware all the way through. Other times, I’ll be able to get away with softsynth plugins, even if they’re just guide sounds that I end up replacing later, or it could just be that a softsynth is the best tool I have for a particular job.
Perhaps the best answer is to use whatever gets me the best results with minimal fuss and maximum enjoyment. I don’t think softsynths will ever replace my hardware, but adding them to my toolkit has definitely made me more productive.