dr strangedrive redux: should you swap on an SSD?

In my recent post about SSDs I did make one major omission, which a friend pointed out on Twitter afterward:

Indeed, I don’t run a swap partition in my desktop PC — RAM is cheap, so I have 12GB of it, and if you’re debating the cost of an SSD, you can probably afford 8-12GB of RAM, too. Let’s play devil’s advocate, though, and say that you can’t upgrade your RAM for whatever reason. Conventional wisdom says that swapping on an SSD is a sure-fire way to send it to an early grave, but is that really the case?

Individual flash cells do have a finite limit on the number of times they can be erased, so it makes sense that if one part of your SSD (say, your swap partition) sees a lot more writes than other areas that it would wear out more quickly. That doesn’t actually happen on a modern SSD, though — they use wear leveling to spread writes as evenly as possible across all available flash. Even if you overwrite a single disk block repeatedly, the SSD’s controller will keep moving that block to different flash cells, transparently remapping things to hide the details from the OS.

Swapping on an SSD, then, should cause no more stress than any other write activity, so it should be perfectly safe, as long as those extra writes don’t push the SSD beyond what it can handle. This calls for another test!

The test

I forced my PC to use swap in a civilised manner, without resorting to pulling out sticks of RAM

As in my last post, I observed my write traffic across a typical work day, but with one difference: I removed 8GB of RAM (by rebooting and adding “mem=5G” to my kernel command line, which left me with just over 4GB of RAM once various bits of hardware address space had been accounted for) and replaced it with a swap partition.
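If you want to replicate this, it's easy to confirm what the kernel actually ended up with once you've rebooted; these checks assume the standard procps and util-linux tools:

```shell
# How much RAM does the kernel see after booting with mem=5G?
free -m

# Confirm the mem=5G option made it on to the kernel command line
cat /proc/cmdline

# List the active swap devices and their sizes
swapon -s
```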

The write activity was much more spiky — there are several times when substantial amounts of data are written to swap — and it’s higher on average, too, but it’s clear from the graph that there’s still nothing to worry about. Across the day, about 2.7GB of data was written to swap, and the total data written was 13GB, well below the 5-year lifespan threshold of 40GB/day that I established in my last post.

made with ChartBoot

In fact, if you’re stuck with a PC with limited RAM, I’d heartily recommend swapping on an SSD! It’s so fast that you never really notice that you’re swapping, especially without the sound of a busy hard drive to remind you. I barely noticed that two-thirds of my RAM was missing.

Swap tuning

With some tuning, you may in fact find yourself using less swap on an SSD than you would on a hard drive. If you’ve been using Linux for a while, you’ve probably learned (perhaps after making a semi-panicked “what’s using all my RAM?” post on a Linux forum) that Linux will use all of your free RAM as disk cache to improve performance. However, Linux goes further than that: it’ll sometimes push application data from RAM to swap just to grow its disk cache.

If this seems odd, consider a scenario where you have some apps running in the background that you’re not using at the moment. Doesn’t it make sense to page out those apps and free some RAM for disk caching to improve the performance of the apps you are using? On a hard drive, it certainly does, but random reads on an SSD are so fast that the benefits of that extra disk cache probably aren’t worth the cost of swapping.

You can control how likely the kernel is to use swap by altering the appropriately-named “swappiness” parameter. The default value is 60, and reducing this makes the kernel less likely to swap; on an SSD, you can probably drop this all the way to 0. To do that, add this to your “/etc/sysctl.conf” file, and then either reboot or run “sudo sysctl -p” to put it into effect:

vm.swappiness = 0

Another parameter, “vm.vfs_cache_pressure”, is often mentioned in SSD tuning guides, too — this controls caching of directory and inode objects, with values lower than the default of 100 making the kernel more likely to keep those cached in RAM. The effect this has isn’t entirely clear to me, but if you want to experiment, add this to your “/etc/sysctl.conf” file:

vm.vfs_cache_pressure = 50
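If you’d rather experiment before committing anything to “/etc/sysctl.conf”, you can change both parameters on the fly; this sketch assumes you have root via sudo:

```shell
# Apply new values immediately (they only persist across reboots
# if they're also listed in /etc/sysctl.conf)
sudo sysctl -w vm.swappiness=0
sudo sysctl -w vm.vfs_cache_pressure=50

# Verify the values the kernel is currently using
cat /proc/sys/vm/swappiness
cat /proc/sys/vm/vfs_cache_pressure
```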

Of course, these values are just guides — if you find other values that work better for you, go with those instead. Don’t be afraid to leave these at their default values, either; Linux has a multitude of tunable parameters, and just because you can tune something doesn’t mean you should, especially if you’re unsure what effect different values might have.

A note on drive life estimates

After two weeks, I’m yet to go through a single full erase cycle on my drive. That’s reassuring!

It’s worth mentioning, too, that this 72TB estimate of the M4’s lifetime seems to be somewhat conservative. Its flash cells can handle about 3000 erase cycles before failing, so if you overwrote all 256GB of flash 3000 times, you’d get not 72TB of writes, but 768TB. The factor-of-ten disparity between these two figures is due to a phenomenon called write amplification, where the shuffling of data performed by wear leveling and garbage collection causes some data to be written to the underlying flash more than once.

The controllers inside SSDs strive to keep write amplification as close to 1 as possible (that is, no amplification), and some even use compression to push it below 1 in some cases. How successful they are depends on several factors: the nature of the workload, how much spare flash the controller has to work with (this is where TRIM really helps), and just how good the controller’s algorithms are. A write amplification factor of 10 is really quite extreme, so I’d expect my M4 to last far beyond 72TB of writes (assuming the controller doesn’t fail first).

That 3000-erase-cycle figure is just a conservative estimate, too — that’s when flash cells are likely to start dying, but they won’t all die at once, and most SSDs include some amount of spare flash that they can substitute for failed cells. In one endurance test, a user managed 768TB of writes to a 64GB Crucial M4; at that smaller size, that works out to more than 12000 erase cycles.

dr strangedrive or: how I learned to stop worrying and love SSDs

I’ve had bad luck with hard drives lately — in the last month or so I’ve lost two of the drives from my desktop PC. Luckily, I’d set up RAID-1 for my Linux install just beforehand, so I didn’t lose anything important (just my Windows drive, hah), but with just one drive left, I needed some kind of replacement.

I could’ve bought another hard drive, but damnit, spinning disks are from the past, and we’re living in the future! Instead, I bought myself a shiny new SSD.

Wolf in mini-sheep’s clothing

To be specific, I got a 256GB Crucial M4 — it’s not the latest and greatest SSD, but it’s been on the market long enough to prove its reliability. It looks so unassuming in its tiny, silent 2.5″ case, but it’s crazy-fast, with read speeds of 450MB/s, write speeds of about 260MB/s (not as fast as some newer drives, but perfectly respectable), and insanely-fast seek times that can make it dozens or even hundreds of times faster than a hard drive in real-world applications.

More than anything else, an SSD makes your PC feel strangely snappy. Boot times and application launch times both benefit hugely — Firefox now takes less than a second to spring to life, even if I’ve only just booted my PC, and starting LibreOffice takes maybe half a second.

Even when attached to a 3.5″ bay extender, SSDs look tiny compared to 3.5″ hard drives

To get some numbers, I tested something that’s always been slow on my studio PC: loading large instruments into LinuxSampler. LS streams most of the sample data on-the-fly, but it still needs to cache the start of each sample in RAM, and that requires a bunch of seeking. Here you can see the load times for Sampletekk’s 7CG Jr, a 3GB GigaSampler file, and the Salamander Grand Piano, a 1.9GB SFZ, from both my SSD and my old 1TB Seagate Barracuda 7200.12 hard drive — the SSD is about 4-to-6 times faster:

made with ChartBoot

Is flash’s limited lifetime really worth worrying about?

So, SSDs have fantastic performance, and they’re now (relatively) affordable, but I did have one concern: the fact that flash memory cells can only be erased a certain number of times before they wear out. Modern SSDs use techniques like wear-leveling and over-provisioning to minimise writes to each flash cell (this Ars Technica article is a great read if you want to know more), but it’s hard not to think that every byte you write to the drive is hastening its demise.

I worried even more after I ran “iotop” to look at per-process disk usage, and saw that Firefox was writing a lot of data. It writes several things to disk on a regular basis — cached web content, known malware/phishing URLs, and crash recovery data — and that can add up to several MB per minute, or several GB per day.

To see if this really was a problem or not, I used iostat to capture per-minute disk usage stats across a typical day. I did all my usual things — I left Firefox, Chrome, Thunderbird, and Steam running the whole time, I spent my work hours working, and then I toyed with some music stuff in the evening. The results are graphed below:
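If you’d like to gather similar numbers yourself, a capture-and-tally sketch might look like the following. The log filename is arbitrary, and the column layout varies a little between sysstat versions, so double-check the awk field number against your own iostat output:

```shell
# Sample sda's stats every 60 seconds for a day; -d limits output to
# device stats, and -m reports megabytes rather than blocks
iostat -d -m 60 /dev/sda > ssd-writes.log

# Afterwards, tally the total megabytes written. With my sysstat,
# per-interval megabytes written is the sixth field on each sda line;
# the first sample is cumulative since boot, so it's skipped
awk '/^sda/ { if (seen++) total += $6 }
     END { printf "%.1f MB written\n", total }' ssd-writes.log
```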

made with ChartBoot

There’s one hefty spike in the evening, when I copied 3.6GB of guitar samples from my hard drive to my SSD (maybe this wasn’t an entirely typical day!), but for the most part, I was writing about 5-15MB per minute to the SSD. The total for the day was 15GB.

That sounds like a lot, but it’s nothing my SSD can’t handle. It’s rated for 72TB of writes over its lifetime, and while that’s an approximate figure, it’s a useful baseline. Over a five-year lifespan, that works out to about 40GB of writes a day, or 27.8MB per minute — that’s the red line on the graph above, which was well above my actual usage for almost the entire day.
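The arithmetic behind that red line is easy to verify; here’s a quick awk check of the numbers above:

```shell
# 72TB spread over a 5-year lifespan, using decimal units
awk 'BEGIN {
    gb_per_day = 72000 / (5 * 365)    # 72TB = 72,000GB
    mb_per_min = (40 * 1000) / 1440   # the rounded 40GB/day, per minute
    printf "%.1f GB/day, %.1f MB/min\n", gb_per_day, mb_per_min
}'
```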

When you see a graph like this, it flips your perceptions. If I’m happy to accept a five-year lifespan for my SSD, then every minute I’m not writing 27.8MB to it is flash lifetime that’s going to waste! Smaller SSDs tend to have shorter lifetimes, as do cheaper SSDs, but with typical desktop usage, I don’t think there’s any reason to worry about the life of your SSD, especially if you’re not using your PC 10-12 hours a day or running it 24/7 like I often do.

SSD tuning

There are dozens of SSD tuning guides out there, but most of them spend a lot of time whipping you into a “don’t write all the things!” frenzy, so instead of linking to one of those, I’ll just reiterate two things that you should do to get the most from your SSD.

The first is to enable TRIM support. This lets the OS tell the SSD when disk blocks are no longer needed (because the files they contained were deleted, for instance); that gives the SSD more spare space to use, which helps reduce drive wear and increases write performance. To enable TRIM, add “discard” to the mount options on each filesystem on your SSD, like so:

/dev/mapper/ssd-ubuntu_root  /  ext4  discard,errors=remount-ro  0  1

If you’re using LVM, like I am, then you’ll also have to edit the “/etc/lvm/lvm.conf” file, and add the line “issue_discards = 1” to the “devices” section, to make sure that LVM passes the TRIM commands through to the SSD.
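Before enabling TRIM, it’s worth confirming that your drive actually supports it; hdparm can check (this assumes your SSD is /dev/sda, and it needs root):

```shell
# A TRIM-capable drive lists "Data Set Management TRIM supported"
# among its enabled features
sudo hdparm -I /dev/sda | grep -i trim
```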

The second is to select an appropriate IO scheduler. IO schedulers are bits of code within the Linux kernel that arrange read and write operations into an appropriate order before they’re sent to the disk. The default scheduler, “CFQ”, is designed for desktop loads on regular hard drives, but its efforts are wasted on SSDs, where seek times are so much lower.

For SSDs, you’re better off with the “deadline” scheduler, which is designed for high throughput on servers, where disks tend to be faster, or you can even use the “noop” scheduler, which does no reordering at all. To set the scheduler on boot, add this to your “/etc/rc.local” file (most Linux distros have one of these):

echo deadline >/sys/block/sda/queue/scheduler
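You can check which scheduler is active at any time, too; the scheduler shown in square brackets is the one currently in use:

```shell
# After the change above, this prints something like:
#   noop [deadline] cfq
cat /sys/block/sda/queue/scheduler
```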

To be honest, the choice of IO scheduler probably won’t make much difference — it just improves performance a little (it won’t have any impact on lifespan), but your SSD is going to be so fast regardless that I doubt you’d ever notice. It’s an easy fix, though, so it’s worth the 10 seconds it’ll take to perform.

So go forth, buy an SSD, make a couple of minor tweaks, and then don’t be afraid to enjoy it!

on Unity2D and llvmpipe, and the differing approaches of fedora and ubuntu

Ubuntu’s Unity desktop invites comparisons to GNOME 3 for a bunch of reasons, but one important similarity is their reliance on hardware OpenGL support to power their visual animations and effects. In their first releases, both desktops used “fallback” modes to handle systems without OpenGL support, but in Ubuntu 11.10, Unity is available for those systems using a new project called Unity2D.

I think Unity2D is not just a terrible idea, but also another example of the new direction that Ubuntu is taking that makes me wonder if it’ll be my distribution of choice for much longer.

Not so unified

Unity2D removes the reliance on OpenGL by avoiding it entirely: it’s a rewrite of Unity from the ground up, based on the Qt toolkit and using the Metacity window manager instead of Compiz. While it looks and feels much like standard Unity, it’s an entirely separate codebase, and keeping the two in sync as features are added will require a substantial amount of extra work. Perhaps the Ubuntu developers have the resources needed to keep up, but it seems like a very shortsighted approach to me.

GNOME 3’s current fallback desktop is definitely a hack, too — it cobbles together a UI that looks a bit like GNOME Shell using the panel and related components that have been ported from GNOME 2. It has neither the flexibility of GNOME 2 nor the elegance of GNOME 3, so it’s not a particularly compelling experience, but the Fedora developers plan to make the full GNOME Shell experience available for nearly everyone in Fedora 17, using some very cool technology.

Software OpenGL with llvmpipe

OpenGL isn’t inherently limited to systems with hardware acceleration; Xorg actually provides a software implementation of OpenGL by default whenever hardware support is unavailable, but its performance is far too low to handle desktop effects. However, a new software renderer, called llvmpipe, aims to change that. By using LLVM, a generic virtual machine that produces optimised x86 or AMD64 code on-the-fly, and utilising multiple CPU cores, llvmpipe performs far better than the standard Xorg renderer.

The gains are impressive: running Quake III Arena at 800×600 on my dual-core laptop, Xorg’s renderer managed 3.9 FPS, while llvmpipe managed a fairly playable 34.9 FPS. While that only makes llvmpipe about as fast as my old Matrox G400, that’s okay — it just has to be fast enough, and for GNOME Shell, and even the odd game, it definitely seems to be. llvmpipe has actually been used as the default software renderer in Fedora since Fedora 15, but it’s only in Fedora 17 that it supports all of the OpenGL features required to run a compositing window manager.
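If you’re curious whether your own desktop is falling back to software rendering, glxinfo (packaged as mesa-utils on Ubuntu, or glx-utils on Fedora) will show you which renderer is in use:

```shell
# On a software-rendered desktop this reports something like
# "OpenGL renderer string: Gallium 0.4 on llvmpipe"
glxinfo | grep "OpenGL renderer"
```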

GNOME Shell on llvmpipe

GNOME Shell running without hardware acceleration on Fedora 17, using llvmpipe

I tested the Fedora 17 development packages (aka “Rawhide”) in a KVM virtual machine, and it worked fairly well; logging in revealed a complete GNOME Shell desktop, and while it was a little choppy, it was definitely usable. I’d definitely expect it to be faster on an actual PC, especially with a multi-core CPU. The Fedora developers have plans to improve performance, too, by optimising llvmpipe and disabling some minor effects.

So, on the one hand, we have Fedora working on key infrastructure that will improve the Linux desktop experience for all users without hardware OpenGL acceleration, and on the other, we have Ubuntu developers throwing effort away on a developmental dead end. Ubuntu has copped flak before for not contributing to Linux development, but I don’t generally buy into that argument — “contribution” isn’t something you can measure just by analysing commit logs or counting lines of code. Canonical can pay its developers to do whatever it wants them to do, but increasingly, it seems that the effort they’re expending is pushing Ubuntu in a direction I’m not sure I want to follow.

It’s hard to elaborate on exactly why I feel that way — I think it’s really down to little things, like insisting on forging its own path with Unity, the increasing number of built-in monetised services, and their sleazy dealings with Banshee (be sure to read the comments on that one!). Maybe the result is a great OS for a lot of users, but for me, Ubuntu is slowly drifting away from being the OS I want it to be.

wednesday reading list

I’ve mentioned Mixing Secrets for the Small Studio a couple of times now — it’s a great read, and I’m learning a lot from it, but mixing is just a part of making great music. Finding inspiration, getting ideas down, and then developing those ideas into complete tracks are the real challenges, but there’s some great advice online for doing just that.

It was an article on Create Digital Music that reminded me about those challenges today — it contains seven “tips for creative success”, and they’re all right on the money. They talk about making use of what you have rather than focusing on what you think you need, and the importance of spending time on the music and having fun rather than sweating over tiny bits of finesse that no-one will notice anyway.

Some of the most enlightening posts I’ve ever read about making music come from general fuzz, a producer of some excellent (and free!) downtempo electronica; his “lessons” posts are a fantastic read. Some of his points relate specifically to music, and electronic music in particular, but many of them relate just as well to any other creative pursuit.

He covers a lot of ground: the value of finding your audience rather than relying solely on friends and family for feedback, of waiting before releasing new work rather than pushing it straight out, of actually finishing something, even if it’s not “perfect”, and of not getting disheartened by the fact that no-one will care about your work quite as much as you do.

I think that last point is particularly important. It’s great to make things that others enjoy, and to take joy from that, but if you’re not creating for yourself first, enjoying the process as much as the result, then you’ll ultimately end up frustrated.

new music update

A few months ago I posted that I was working on new music using Ardour 3, and I’m glad to say that my new track is now all but finished. Working with Ardour 3 was a bit nerve-wracking at times, as you’d expect when testing alpha software — there were several times, in fact, when I couldn’t even open the project’s session due to one bug or another. It all held together somehow, though, and after many bug reports and fixes, I definitely feel like the testing has helped.

The new track is a bit of a downbeat, ambient-ish thing, with some lo-fi sounds mixed in with some glitchy elements. I definitely put Ardour’s MIDI features to the test: there are MIDI tracks running out to my Blofeld and to Hydrogen, along with LV2 synths (Calf Monosynth and Linuxsampler), as well as automation of CC parameters on the Blofeld and automation of plugin parameters on Calf Monosynth. I’ve done quite a bit of effects automation as well, particularly with the bitcrushing Decimator plugin.

There’s even a VST plugin in there now; I had been beta-testing Loomer Cumulus, using it as a standalone synth, but with Ardour’s new VST support I now have it running within Ardour directly. Cumulus is somewhere between a synth and an effect: it lets you load a sample, and then trigger its playback using granular synthesis with varying parameters, altering the starting point, pitch, and playback rate, among other things. You can define up to eight sets of those parameters, and then trigger those via MIDI keys. It can turn all sorts of sounds into eerie textures, but it can just as easily take a drum loop and turn it into a wonderfully glitchy mess, which is exactly what I used it for.

I’m pretty sure the track is done, but I don’t want to release it just yet. I plan to sit on it for a few days at least, while I read more of my copy of Mixing Secrets for the Small Studio, but I like the idea of putting together at least an EP with a couple of other tracks and releasing them all at once. That might not be practical if it takes me four months to finish each track, though, so I may post the individual tracks here when they’re ready, and then do an official Bandcamp release once they’re all done.

it’s here! native vst support in ardour 3

Ardour 3.0 is still in alpha, but it gained a substantial new feature last week: support for native Linux VST plugins. It’s a feature that’s been on wishlists for a while, but it’s become more important over the last year or so, as the number of VST synths for Linux has increased. The big drawcards are the commercial synths — Pianoteq, discoDSP Discovery, and the various Loomer plugins, for instance — but more open-source VSTs are appearing now too, such as the TAL synths, ported from Windows by KXStudio developer falkTX in his new DISTRHO project.

The new feature uses the unofficial Vestige VST headers, which means that Ardour avoids the need for users to download the official Steinberg VST SDK and build Ardour themselves. Having said that, the new VST support is a build-time option that’s disabled by default, but I’m hoping that it will be enabled by default, and available in the official binary builds of Ardour, before the final 3.0 release.

Ardour 3 SVN, running the Loomer Cumulus and TAL-Dub-3 native VSTs

As handy as this is, there has been some discussion about whether or not native VST support is a good thing. VST isn’t a particularly elegant plugin system, and given Steinberg’s licensing restrictions, it’s always going to be harder for the developers of hosts like Ardour to deal with VST than with other plugin formats, such as LV2. I would hate to see this VST support discourage developers from working with LV2.

Realistically, though, it’s hard to expect commercial plugin developers to embrace LV2, on top of the effort already required to bring their plugins across to Linux. Indeed, now that Ardour has joined Qtractor and Renoise in supporting VST plugins, the size of their combined user bases might encourage more plugin developers to offer Linux support.

I hope we’ll see more ports of open-source Windows VST plugins too, but for anyone developing a new open-source synth plugin, or working on a plugin version of an existing standalone synth, LV2 makes much more sense. Regardless of how open-source they may be, VSTs that rely on Steinberg’s headers will never be allowed into distributions. With David Robillard’s new LV2 stack, which is already in use in both Ardour and Qtractor, LV2 is a fast, reliable, and highly capable standard, and its use will only increase, regardless of what happens with native VST support.

a week-and-a-half with GNOME 3

I’m as surprised as anyone to admit it, but I’ve spent the last week and a half using GNOME 3, and it hasn’t been too painful — in fact, I’ve had no trouble remaining productive in it. I’ve missed some of GNOME 2’s features, but it’s definitely been a more pleasant and productive experience than my time with Ubuntu’s Unity desktop after the 11.04 release.

A lot of people have reacted poorly to GNOME 3, and I can understand their frustrations. I’m not sure why I haven’t had the same experience, but perhaps my time with Mac OS X has something to do with it — I’m already used to using the Exposé-style overview in the GNOME Shell, and to having Alt-Tab work on an application level. There’s a new key combo for switching between the windows of an individual application; it defaults to Alt and whatever key sits above the Tab key in your locale (Alt-` in my case). It still took a bit of adjustment, but I was soon zipping between windows and launching applications without any dramas.

GNOME Shell's overview provides quick access to your applications and windows

The GNOME Shell cheat sheet covers a lot of the less obvious functionality built into the Shell. I do find some of the hidden functionality a bit silly — having to hold Alt to reveal the “Power Off” menu item, for instance — but it still doesn’t take long to come up to speed.

I will add one caveat to my comments: I’ve been using GNOME 3 on my laptop, where (as I remarked a couple of posts back) I spend most of my time using Firefox, Chrome, Thunderbird, terminal windows, and a text editor. I haven’t used it with JACK and my regular assortment of music tools yet, so I’m still not sure how it’ll handle that workflow, or if its greater use of video hardware is going to cause any latency issues.

A quick reality check, nine years in the making

One thing I can’t help but feel in the release of GNOME 3.0 is a sense of history repeating; after all, it’s not the first major release of GNOME to slash away at the desktop’s feature set and remodel the remains based on design principles put together by a core team of developers.

Red Hat Linux 8, with the then-new GNOME 2.0. I'd forgotten how much like a browser Nautilus looked

GNOME 2.0 had substantially less functionality and configurability than the 1.4 release that preceded it, and it imposed a set of Human Interface Guidelines that described how user interfaces should be designed. I think you’d have a hard time finding someone today who’d claim that those changes weren’t for the best in the long run, but at the time, the streamlining was considered too extreme, and the HIG was controversial.

I think we forget just how much was missing in GNOME 2.0, partly because it’s been so long, but mostly because all of the really important features have found their way back in. To remind myself, I took a look back in time: I installed Red Hat Linux 8 in a VM and fired up its default GNOME 2.0 desktop.

The configuration dialogs in GNOME 2.0 did actually cover some options that are currently missing in GNOME 3, such as font and theme settings, and its panels had greater flexibility than the GNOME Shell’s single top panel, thanks to the bundled selection of applets. However, there were surprisingly few applets that provided functionality that hasn’t been incorporated into GNOME 3 in some way.

Even this minimal window settings dialog from Red Hat 8 wasn't an official part of GNOME 2.0. A complete window settings dialog was added in the next release, GNOME 2.2

Leafing through the release notes for the subsequent GNOME 2 releases showed how quickly some of its missing functionality came back, and just how much the desktop has been polished over the years. While GNOME 3 throws away the visible desktop components, there’s a lot of GNOME 2 still in there, from the power, disk, sound, and networking management infrastructure through to its many tools and utilities.

GNOME 3.0 is a little different from GNOME 2.0 in that it changes the basics of navigating your desktop, and the developers have so far resisted requests to relax those changes. I’m still sure that it’s going to improve rapidly, though, and I do think that its developers will take the various criticisms on board. I don’t expect any dramatic design reversals, but I do expect improvements and refinements that will make GNOME 3 a viable option for many of the users that find it frustrating today.

switching back: the 2011 macbook air

UPDATE: I’ve just posted some updates on the state of Ubuntu on the 2011 Macbook Air.

With my old Dell laptop starting to suffer some physical wear and tear, I figured it was time for an upgrade. I couldn’t find a solid PC laptop that fit my needs, particularly in terms of portability and battery life, so I made a potentially controversial decision — I chose the brand-new 13″ Macbook Air. I won’t be using it for music-making, but after using it for work over the last week, I’m definitely happy with my choice.

I had sworn off Mac laptops for a few reasons: Apple’s power supplies and slot-loading DVD drives have always given me trouble, and my Macbook Pro ran very hot at times. Thankfully, the new power supply design seems less fragile, the Air has no DVD slot to worry about, and while it does howl a bit when working hard, that’s preferable to getting super-hot.

It’s also surprisingly quick — its 1.7GHz i5 CPU outpaces even my 3GHz Core 2 Duo desktop, and the SSD makes everything feel snappy. The Intel video isn’t brilliant, but it’s fast enough for most indie games, and even for a bit of Civilization IV or Left 4 Dead 2 on low-quality settings.

The Air’s fixed hardware is definitely a departure from my easily-serviceable old Dell, but it does help it to fit both a powerful system and a lot of battery into a very light and slender frame. I wouldn’t want it to be my only computer, but it’s great as a portable extension of my desktop and home network. I’m sure I’ll have to give up the whole machine if it ever needs repairs, but with Time Machine backups configured (using my Ubuntu file server), I don’t really have to worry about losing data.

Mac OS X is, well… it’s Mac OS X. It has its advantages: it’s very well tuned to the hardware, making the most of the multi-touch trackpad, resuming from suspend in a second or so, and lasting a good seven hours on battery with a light load. It’s also great to have access to things like Steam. On the other hand, it’s still a bit annoying as a UNIX compared to Ubuntu, the Mac App Store is a shambles, and having to hack the OS just to stop it opening iTunes when I press my keyboard’s “play” key is completely asinine.

However, the reality is that I spend 99% of my working day using Firefox, Chrome, Thunderbird, a text editor, and a bunch of terminals, and Mac OS X meets those needs just fine. (For the record, I’ve been using TextWrangler and iTerm2.)

Ubuntu on the 2011 MBA

Ubuntu running, in a fashion, on the 13" 2011 Macbook Air

The Air can run Linux, too, though it’s not terribly usable yet. The trackpad works in multi-touch mode after some hacking, but there’s no power management, and the Intel driver doesn’t work with the built-in display, so you’re stuck with unaccelerated 1024×768 video. The wireless works, too, which makes it unique among current Mac laptops, though only in 2.4GHz mode.

I generally think it’s a bad idea to buy a Mac to run Linux, since the hardware is odd enough to cause these kinds of problems, but it’s always nice to know that I can run it if I need to. There’s a thread on the Ubuntu forums with all the details, and one post in particular that has a script to install patched keyboard and trackpad drivers.

some early ardour 3 impressions

Ardour 3 is now in alpha, and I’ve been poking at it for a few days now; in fact, you may have noticed some bits of Ardour 3’s GUI in the screenshot from my last post. It’s still quite crashy, as you’d perhaps expect from an alpha, but that seems to improve with each new release. In fact, going back to Ardour 2 already feels uncomfortable, because the Ardour 3 interface just feels nicer to work with, even before you consider all the new features.

The MIDI functionality takes a little getting used to, but once you’ve learned a few keyboard shortcuts you can quickly jump between working with MIDI and audio at the region level, and working with the individual notes within regions. I still think I’d be more comfortable if the piano roll was in a separate window, but once you’ve resized your MIDI track and adjusted the range of notes it displays to match your needs, it’s really quite easy to draw in notes with the mouse.

Being able to manipulate notes easily with the keyboard is great, too; once you’ve learned the appropriate shortcuts, you can move between notes and edit their pitch, duration, and velocity using the keyboard. Editing velocity in general is a bit strange, though, since there’s no velocity ruler — velocities are represented just by note colour, though hovering the mouse over a note will tell you its velocity value.

I did run into a few problems beyond simple crashes, but I’m still confident that Ardour 3 will be pretty solid by the time of its final release. I’m not sure it’ll eclipse other sequencers, like Qtractor, in that first final release, at least not in some ways (I do like having a velocity ruler, for instance). That’s just fine, though — Ardour 3 works just as well with external sequencers as Ardour 2 ever did, and its features extend far beyond simply adding MIDI.

back from the break

It’s a new year! I’m back at work this week, after too short a break, but I decided to keep my leave days up my sleeve for later on in the year rather than use them now. It’s always problematic heading off at the same time as other people, anyway, so I’d rather wait until I can take time off while others are around to cover me as much as possible.

It wasn’t a long break, but it was good, even though I didn’t feel great for much of it. I was hoping to get the track that I’m working on finished by the end of 2010, and that didn’t happen, but I had time to relax between hanging out with friends and catching up with the family. I also got some great Xmas loot — as well as a bunch of fun stuff, including Thinkgeek’s synth T-shirt (with actual working synth keyboard), a super-cute Android plushie, and some totally awesome GLaDOS core module plushies (which talk!), I got the very practical and awesome Korg nanoKONTROL MIDI controller, which I’m sure I’ll talk about more in the future.

Last year’s New Year’s resolution of sorts was to write at least one proper song with lyrics, and while I didn’t quite get there, that is what I’m working on at the moment, so it’ll definitely be done by the end of this year! I did release four tracks and one cover, though, so I think I did okay. I still don’t have lyrics finalised for the new track, but the backing track arrangement is mostly done now, so once I have the lyrics it should all come together pretty quickly.