dr strangedrive redux: should you swap on an SSD?

In my recent post about SSDs I did make one major omission, which a friend pointed out on Twitter afterward.

Indeed, I don’t run a swap partition in my desktop PC — RAM is cheap, so I have 12GB of it, and if you’re debating the cost of an SSD, you can probably afford 8-12GB of RAM, too. Let’s play devil’s advocate, though, and say that you can’t upgrade your RAM for whatever reason. Conventional wisdom says that swapping on an SSD is a sure-fire way to send it to an early grave, but is that really the case?

Individual flash cells do have a finite limit on the number of times they can be erased, so it makes sense that if one part of your SSD (say, your swap partition) sees a lot more writes than other areas, it would wear out more quickly. That doesn’t actually happen on a modern SSD, though — they use wear leveling to spread writes as evenly as possible across all available flash. Even if you overwrite a single disk block repeatedly, the SSD’s controller will keep moving that block to different flash cells, transparently remapping things to hide the details from the OS.

Swapping on an SSD, then, should cause no more stress than any other write activity, so it should be perfectly safe, as long as those extra writes don’t push the SSD beyond what it can handle. This calls for another test!

The test

I forced my PC to use swap in a civilised manner, without resorting to pulling out sticks of RAM

As in my last post, I observed my write traffic across a typical work day, but with one difference: I removed 8GB of RAM (by rebooting and adding “mem=5G” to my kernel command line, which left me with just over 4GB of RAM once various bits of hardware address space had been accounted for) and replaced it with a swap partition.
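If you want to try the same trick, the details depend on your bootloader. On Ubuntu’s GRUB 2 setup, for instance, you’d add the option to the “GRUB_CMDLINE_LINUX_DEFAULT” line in “/etc/default/grub” and then run “sudo update-grub” — something like this, with “quiet splash” standing in for whatever options your distro already sets:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem=5G"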

The write activity was much more spiky — there are several times when substantial amounts of data are written to swap — and it’s higher on average, too, but it’s clear from the graph that there’s still nothing to worry about. Across the day, about 2.7GB of data was written to swap, and the total data written was 13GB, well below the 5-year lifespan threshold of 40GB/day that I established in my last post.

Per-minute write activity across the day, with swap in use (chart made with ChartBoot)

In fact, if you’re stuck with a PC with limited RAM, I’d heartily recommend swapping on an SSD! It’s so fast that you never really notice that you’re swapping, especially without the sound of a busy hard drive to remind you — I barely noticed that two-thirds of my RAM was missing.

Swap tuning

With some tuning, you may in fact find yourself using less swap on an SSD than you would on a hard drive. If you’ve been using Linux for a while, you’ve probably learned (perhaps after making a semi-panicked “what’s using all my RAM?” post on a Linux forum) that Linux will use all of your free RAM as disk cache to improve performance. However, Linux goes further than that: it’ll sometimes push application data from RAM to swap just to grow its disk cache.

If this seems odd, consider a scenario where you have some apps running in the background that you’re not using at the moment. Doesn’t it make sense to page out those apps and free some RAM for disk caching to improve the performance of the apps you are using? On a hard drive, it certainly does, but random reads on an SSD are so fast that the benefits of that extra disk cache probably aren’t worth the cost of swapping.

You can control how likely the kernel is to use swap by altering the appropriately-named “swappiness” parameter. The default value is 60, and reducing this makes the kernel less likely to swap; on an SSD, you can probably drop this all the way to 0. To do that, add this to your “/etc/sysctl.conf” file, and then either reboot or run “sudo sysctl -p” to put it into effect:

vm.swappiness = 0
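If you’d rather test the water before making the change permanent, you can read and set the value on a live system, too:

cat /proc/sys/vm/swappiness
sudo sysctl vm.swappiness=0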

Another parameter, “vm.vfs_cache_pressure”, is often mentioned in SSD tuning guides, too — this controls caching of directory and inode objects, with values lower than the default of 100 making the kernel more likely to keep those cached in RAM. The effect this has isn’t entirely clear to me, but if you want to experiment, add this to your “/etc/sysctl.conf” file:

vm.vfs_cache_pressure = 50

Of course, these values are just guides — if you find other values that work better for you, go with those instead. Don’t be afraid to leave these at their default values, either; Linux has a multitude of tunable parameters, and just because you can tune something doesn’t mean you should, especially if you’re unsure what effect different values might have.

A note on drive life estimates

After two weeks, I’m yet to go through a single full erase cycle on my drive. That’s reassuring!

It’s worth mentioning, too, that this 72TB estimate of the M4’s lifetime seems to be somewhat conservative. Its flash cells can handle about 3000 erase cycles before failing, so if you overwrote all 256GB of flash 3000 times, you’d get not 72TB of writes, but 750TB. The factor-of-ten disparity between these two figures is due to a phenomenon called write amplification, where the shuffling of data performed by wear leveling and garbage collection causes some data to be written to the underlying flash more than once.

The controllers inside SSDs strive to keep write amplification as close to 1 as possible (that is, no amplification), and some even use compression to push it below 1 in some cases. How successful they are depends on several factors: the nature of the workload, how much spare flash the controller has to work with (this is where TRIM really helps), and just how good the controller’s algorithms are. A write amplification factor of 10 is really quite extreme, so I’d expect my M4 to last far beyond 72TB of writes (assuming the controller doesn’t fail first).

That 3000-erase-cycle figure is just a conservative estimate, too — it’s the point where flash cells are likely to start dying, but they won’t all die at once, and most SSDs include some amount of spare flash that they can substitute for failed cells. In one endurance test, a user managed 768TB of writes to a 64GB Crucial M4; at that smaller size, that works out to more than 12000 erase cycles.

dr strangedrive or: how I learned to stop worrying and love SSDs

I’ve had bad luck with hard drives lately — in the last month or so I’ve lost two of the drives from my desktop PC. Luckily, I’d set up RAID-1 for my Linux install just beforehand, so I didn’t lose anything important (just my Windows drive, hah), but with just one drive left, I needed some kind of replacement.

I could’ve bought another hard drive, but damnit, spinning disks are from the past, and we’re living in the future! Instead, I bought myself a shiny new SSD.

Wolf in mini-sheep’s clothing

To be specific, I got a 256GB Crucial M4 — it’s not the latest and greatest SSD, but it’s been on the market long enough to prove its reliability. It looks so unassuming in its tiny, silent 2.5″ case, but it’s crazy-fast, with read speeds of 450MB/s, write speeds of about 260MB/s (not as fast as some newer drives, but perfectly respectable), and insanely-fast seek times that can make it dozens or even hundreds of times faster than a hard drive in real-world applications.

More than anything else, an SSD makes your PC feel strangely snappy. Boot times and application launch times both benefit hugely — Firefox now takes less than a second to spring to life, even if I’ve only just booted my PC, and starting LibreOffice takes maybe half a second.

Even when attached to a 3.5″ bay extender, SSDs look tiny compared to 3.5″ hard drives

To get some numbers, I tested something that’s always been slow on my studio PC: loading large instruments into LinuxSampler. LS streams most of the sample data on-the-fly, but it still needs to cache the start of each sample in RAM, and that requires a bunch of seeking. Here you can see the load times for Sampletekk’s 7CG Jr, a 3GB GigaSampler file, and the Salamander Grand Piano, a 1.9GB SFZ, from both my SSD and my old 1TB Seagate Barracuda 7200.12 hard drive — the SSD is about 4-to-6 times faster:

LinuxSampler instrument load times from the SSD and the hard drive (chart made with ChartBoot)

Is flash’s limited lifetime really worth worrying about?

So, SSDs have fantastic performance, and they’re now (relatively) affordable, but I did have one concern: the fact that flash memory cells can only be erased a certain number of times before they wear out. Modern SSDs use techniques like wear-leveling and over-provisioning to minimise writes to each flash cell (this Ars Technica article is a great read if you want to know more), but it’s hard not to think that every byte you write to the drive is hastening its demise.

I worried even more after I ran “iotop” to look at per-process disk usage, and saw that Firefox was writing a lot of data. It writes several things to disk on a regular basis — cached web content, known malware/phishing URLs, and crash recovery data — and that can add up to several MB per minute, or several GB per day.
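If you’d like to see the same per-process breakdown on your own machine, iotop makes it easy; this shows only the processes that are actually doing IO, with totals accumulated since it was started:

sudo iotop -o -a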

To see if this really was a problem or not, I used iostat to capture per-minute disk usage stats across a typical day. I did all my usual things — I left Firefox, Chrome, Thunderbird, and Steam running the whole time, I spent my work hours working, and then I toyed with some music stuff in the evening. The results are graphed below:

Per-minute write activity across a typical day (chart made with ChartBoot)
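For the record, the per-minute stats came from something along these lines, left running all day (iostat is part of the sysstat package; adjust the device name to suit your system):

iostat -d -m /dev/sda 60 > day-writes.log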

There’s one hefty spike in the evening, when I copied 3.6GB of guitar samples from my hard drive to my SSD (maybe this wasn’t an entirely typical day!), but for the most part, I was writing about 5-15MB per minute to the SSD. The total for the day was 15GB.

That sounds like a lot, but it’s nothing my SSD can’t handle. It’s rated for 72TB of writes over its lifetime, and while that’s an approximate figure, it’s a useful baseline. Over a five-year lifespan, that works out to 40GB of writes a day, or 27.8MB per minute — that’s the red line on the graph above, which was well above my actual usage for almost the entire day.
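(For the record, that’s 72,000GB ÷ (5 × 365 days) ≈ 40GB per day, and 40GB ÷ 1,440 minutes ≈ 27.8MB per minute.)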

When you see a graph like this, it flips your perceptions. If I’m happy to accept a five-year lifespan for my SSD, then every minute I’m not writing 27.8MB to it is flash lifetime that’s going to waste! Smaller SSDs tend to have shorter lifetimes, as do cheaper SSDs, but with typical desktop usage, I don’t think there’s any reason to worry about the life of your SSD, especially if you’re not using your PC 10-12 hours a day or running it 24/7 like I often do.

SSD tuning

There are dozens of SSD tuning guides out there, but most of them spend a lot of time whipping you into a “don’t write all the things!” frenzy, so instead of linking to one of those, I’ll just reiterate two things that you should do to get the most from your SSD.

The first is to enable TRIM support. This lets the OS tell the SSD when disk blocks are no longer needed (because the files they contained were deleted, for instance); that gives the SSD more spare space to use, which helps reduce drive wear and increases write performance. To enable TRIM, add “discard” to the mount options on each filesystem on your SSD, like so:

/dev/mapper/ssd-ubuntu_root  /  ext4  discard,errors=remount-ro  0  1
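It’s worth checking that your drive actually supports TRIM before relying on it; hdparm will tell you (assuming, as in the examples below, that the SSD is /dev/sda):

sudo hdparm -I /dev/sda | grep -i trim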

If you’re using LVM, like I am, then you’ll also have to edit the “/etc/lvm/lvm.conf” file, and add the line “issue_discards = 1” to the “devices” section, to make sure that LVM passes the TRIM commands through to the SSD.
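That line lives inside the existing “devices” block, so the relevant part of the file ends up looking something like this:

devices {
    issue_discards = 1
}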

The second is to select an appropriate IO scheduler. IO schedulers are bits of code within the Linux kernel that arrange read and write operations into an appropriate order before they’re sent to the disk. The default scheduler, “CFQ”, is designed with desktop loads on regular hard drives in mind, but its efforts are wasted on SSDs, where seek times are so much lower.

For SSDs, you’re better off with the “deadline” scheduler, which is designed for high throughput on servers, where disks tend to be faster, or you can even use the “noop” scheduler, which does no reordering at all. To set the scheduler on boot, add this to your “/etc/rc.local” file (most Linux distros have one of these):

echo deadline >/sys/block/sda/queue/scheduler
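You can check which scheduler a drive is currently using at any time — the output lists the available schedulers, with the active one shown in square brackets (e.g. “noop [deadline] cfq”):

cat /sys/block/sda/queue/scheduler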

To be honest, the choice of IO scheduler probably won’t make much difference — it just improves performance a little (it won’t have any impact on lifespan), but your SSD is going to be so fast regardless that I doubt you’d ever notice. It’s an easy fix, though, so it’s worth the 10 seconds it’ll take to perform.

So go forth, buy an SSD, make a couple of minor tweaks, and then don’t be afraid to enjoy it!

2011 macbook air linux update

As I mentioned previously, I’ve been playing with Ubuntu on my 2011 Macbook Air, and I’m happy to report that it’s now much more usable than when I first installed it. There’s a kernel module hack that fixes the display issues, allowing the Intel driver to run at the panel’s full 1440×900.

Having the Intel driver running instead of the fbdev driver means that OpenGL and visual effects (and Unity, if you’re in to that sort of thing) work, as does brightness adjustment, and I suspect it’s the reason that suspend and resume now work, too. Patching the kernel manually would be a pain, but the (updated) setup script from the Ubuntu forums now takes care of this for you, along with the keyboard and trackpad driver patches.

Ubuntu 11.04 on the Macbook Air, with all the important stuff working

I also had a chance to test the Ubuntu 11.10 beta. I haven’t tested it with the video fix above (though it is supposed to work), but I did notice that my 5GHz 802.11n network worked with it, so it seems like the 5GHz issues I’ve been having with 11.04 have been fixed.

All of the important stuff is working, then, at least for my needs. There are some minor keyboard niggles — I haven’t been able to adjust the keyboard backlight brightness, and the volume keys are incorrectly mapped — but the biggest issue is with the trackpad. It works, including two-finger scrolling and two- and three-finger taps and clicks, but it doesn’t feel quite right, particularly when scrolling.

It seems like a minor thing, but the trackpad is central to the user experience, and when basics like button presses and scrolling rely to a degree on gesture recognition, it matters a lot that they’re detected reliably and respond appropriately. The multitouch driver is under active development, though, so I have no doubt it’ll improve.

I really enjoy benchmarking this thing, just to see how much power has been crammed into it. Compiling Ardour 3 from SVN seemed like a good test of overall system performance: it managed it in 14 minutes and 50 seconds, just under two minutes faster than my 3GHz Core 2 Duo desktop. It’s definitely no slouch!

switching back: the 2011 macbook air

UPDATE: I’ve just posted some updates on the state of Ubuntu on the 2011 Macbook Air.

With my old Dell laptop starting to suffer some physical wear and tear, I figured it was time for an upgrade. I couldn’t find a solid PC laptop that fit my needs, particularly in terms of portability and battery life, so I made a potentially controversial decision — I chose the brand-new 13″ Macbook Air. I won’t be using it for music-making, but after using it for work over the last week, I’m definitely happy with my choice.

I had sworn off Mac laptops for a few reasons: Apple’s power supplies and slot-loading DVD drives have always given me trouble, and my Macbook Pro ran very hot at times. Thankfully, the new power supply design seems less fragile, the Air has no DVD slot to worry about, and while it does howl a bit when working hard, that’s preferable to getting super-hot.

It’s also surprisingly quick — its 1.7GHz i5 CPU outpaces even my 3GHz Core 2 Duo desktop, and the SSD makes everything feel snappy. The Intel video isn’t brilliant, but it’s fast enough for most indie games, and even for a bit of Civilization IV or Left 4 Dead 2 on low-quality settings.

The Air’s fixed hardware is definitely a departure from my easily-serviceable old Dell, but it does help it to fit both a powerful system and a lot of battery into a very light and slender frame. I wouldn’t want it to be my only computer, but it’s great as a portable extension of my desktop and home network. I’m sure I’ll have to give up the whole machine if it ever needs repairs, but with Time Machine backups configured (using my Ubuntu file server), I don’t really have to worry about losing data.

Mac OS X is, well… it’s Mac OS X. It has its advantages: it’s very well tuned to the hardware, making the most of the multi-touch trackpad, resuming from suspend in a second or so, and lasting a good seven hours on battery with a light load. It’s also great to have access to things like Steam. On the other hand, it’s still a bit annoying as a UNIX compared to Ubuntu, the Mac App Store is a shambles, and having to hack the OS just to stop it opening iTunes when I press my keyboard’s “play” key is completely asinine.

However, the reality is that I spend 99% of my working day using Firefox, Chrome, Thunderbird, a text editor, and a bunch of terminals, and Mac OS X meets those needs just fine. (For the record, I’ve been using TextWrangler and iTerm2.)

Ubuntu on the 2011 MBA

Ubuntu running, in a fashion, on the 13" 2011 Macbook Air

The Air can run Linux, too, though it’s not terribly usable yet. The trackpad works in multi-touch mode after some hacking, but there’s no power management, and the Intel driver doesn’t work with the built-in display, so you’re stuck with unaccelerated 1024×768 video. The wireless works, too, which makes it unique among current Mac laptops, though only in 2.4GHz mode.

I generally think it’s a bad idea to buy a Mac to run Linux, since the hardware is odd enough to cause these kinds of problems, but it’s always nice to know that I can run it if I need to. There’s a thread on the Ubuntu forums with all the details, and one post in particular that has a script to install patched keyboard and trackpad drivers.

farewell old router, hello new router

For about the last seven years our home network connection has been served by a Linksys WRT54GS, the slightly-upgraded version of the iconic WRT54G that began the custom router firmware craze. Thanks to the excellent Tomato firmware I’d been hesitant to upgrade it, despite having a house full of 802.11n laptops and a gigabit Ethernet desktop, but it had been flaky of late, so it was time to jump ship.

Linksys WRT54GS

My dusty old WRT54GS, with one missing antenna, has seen better days

My chosen replacement is the Netgear WNDR3700. With dual-band 802.11n and gigabit Ethernet it’s a major upgrade — I can easily get 60-70MB/s between my desktop PC and HTPC/file server (maxing out the disk), and about 12MB/s over the wireless from my laptop. There’s also a USB port, though I’m not sure if I’ll do anything with that, yet.

The stock firmware lacked some features that I’m used to having, such as DNS hosting for the local domain, so I soon switched to DD-WRT. Installing it was more of an ordeal than I expected, though; the version linked from, of all places, the DD-WRT wiki entry for the WNDR3700 caused the router to sit there rebooting in a loop. After much frustration I found an older build that I had better luck with, and by working through the Atheros tuning guide I managed to get a little more speed from the wireless network.

DD-WRT is a far cry from the elegance and simplicity of Tomato, but it definitely has a wealth of features. It’ll take me some time to dig through it all, but for now, it’s doing everything I need.

new studio toys

In the last few weeks I’ve added two great bits of gear to my home studio. The first, which I actually received for Christmas, is the Korg nanoKONTROL (Amazon link), a brilliant little MIDI controller that I think just about everyone could find a use for.

Korg nanoKONTROL

Korg's nanoKONTROL is a brilliant, affordable MIDI controller

The nanoKONTROL is part of Korg’s nano series of tiny, laptop-friendly controllers which also includes the nanoPAD, with 12 drum pads and an X/Y touch controller, and the nanoKEY, a 25-key keyboard (of sorts). While I don’t think much of the nanoKEY — Akai’s LPK25 (Amazon link), while slightly larger, looks far more practical — the nanoPAD looks good, but I still think the nanoKONTROL is the pick of the bunch.

Its layout, with nine faders, nine knobs, and eighteen buttons, along with a set of transport controls, certainly lends itself to DAW mixer control, but it’s flexible enough to control just about anything. It did a fine job of handling synth parameters on PHASEX, for instance — using PHASEX’s MIDI learn features (just right-click on a control and move the appropriate MIDI controller) I was quickly able to set up the nanoKONTROL’s faders to configure the amp and filter envelopes, and the knobs to control filter cutoff, resonance, and envelope amount, among other things. It’s also brilliant as a SooperLooper controller, letting you pan, fade, and mute individual loops on-the-fly.

As a class-compliant USB MIDI device, it goes without saying that it works perfectly under Linux, but I’ll say it anyway — the nanoKONTROL works perfectly under Linux, with true plug-and-play simplicity. If you want to reconfigure the device, to change the MIDI messages that each controller sends, there’s a native app for that, called Nano-Basket, but Korg’s official app runs flawlessly under Wine, too.
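If you want to confirm that it has been detected, listing the ALSA sequencer ports is enough — the nanoKONTROL should show up as its own client, ready to be connected to whatever you like:

aconnect -l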

Korg has announced updated versions of its nano controllers, but there’s no hard word on when they’ll be available yet. The nanoKONTROL2 adds a third set of buttons but loses one fader and knob, so I’m glad to have the original.

The Saffire PRO 40 has 8 inputs with preamps, 8 line outs, and ADAT expandability

The other new addition is somewhat bigger: it’s a Focusrite Saffire PRO 40 (Amazon link), a Firewire audio interface with eight channels of analogue I/O. Each input is a combo XLR/TRS jack with a preamp and phantom power, so it can handle up to eight condenser mics, but it’s just as happy handling line inputs from synths. In addition to the analogue I/O, there are S/PDIF and ADAT ports, which can add up to another 10 inputs and outputs.

As a sysadmin I’m quite familiar with how big standard 19″ rackmounted gear is, but for some reason, I was still surprised when I got it home — this thing is big! Now that I’ve made room for it, though, it’s fine, and because it’s replacing not just my old PCI sound card, but also my Behringer mixer, it doesn’t actually take up much more space than my old setup did. Having to run just a single Firewire cable down to the PC is great — I certainly won’t miss running 3.5mm audio cables between my mixer and my PC’s back panel.

Like all supported Firewire audio devices, the PRO 40 uses drivers from the FFADO project, but support for the PRO 40 (as well as the smaller PRO 24, and some competing devices that use the same DICE chipset) is only available in the development FFADO code from Subversion. The current FFADO build in Ubuntu 10.10 is actually a Subversion build that’s recent enough to handle the PRO 40, but before I realised that I’d already installed the drivers manually. It wasn’t exactly plug-and-play, but once I switched to the old Firewire stack (playback doesn’t work on DICE devices with the new stack right now), and got the PRO 40 talking to my Firewire controller successfully (annoyingly, turning everything off and on again helped with this), getting it running with JACK was actually fairly straightforward.

So far, the performance has been fantastic. I haven’t given its preamps a good test with my mic yet, but recordings of my Blofeld via line-in were very clean and noise-free. Even my analogue delay pedal, which I know is a bit noisy, sounds much quieter than before, and with eight ins and outs on the one device, it’s very easy to hook up that delay pedal, send audio to it from Ardour, and then receive the output back into Ardour. Even with Ubuntu 10.10’s stock generic kernel, I’m running pretty solidly at 8ms latency, which is low enough for my needs.
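To put that 8ms figure in context: one combination that gives 8ms at 48kHz is 128 frames per period with the three periods that Firewire devices typically want (128 × 3 ÷ 48,000 = 8ms). A jackd invocation using settings along those lines with the FFADO backend would look something like this:

jackd -d firewire -r 48000 -p 128 -n 3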

cheap bleeps: meeblip, shruthi-1, and monotron

Not that long ago people were predicting the death of hardware synths, and with good reason — software synths promised far greater convenience and flexibility at a lower price. There’s something uniquely compelling and immediate about working with hardware, though, so I’m glad to see that hardware synths live on. In fact, I’d say they’re thriving, if the new breed of cheap, quirky synths is any indication. These devices deliver unique sounds, hands-on control, and highly hackable designs, all for less than the cost of many soft-synths.

linux music tutorial: seq24, part 2

In the first part of my seq24 tutorial series, I looked at creating patterns in the pattern editor, and then triggering those patterns in real-time from the QWERTY keyboard. In part 2, I go into more detail on both features. This video covers:

  • Advanced pattern triggering techniques: queuing and snapshots
  • Basic note editing: copying/pasting notes and changing velocities
  • MIDI CC automation
  • Background patterns
  • MIDI note entry (step-sequencing) and MIDI recording

It’s a little longer than I’d have liked, but there’s a lot in there! If you’d prefer smaller, shorter tutorials in future, feel free to leave a comment and let me know.

For downloaders, there’s also a 720p WebM version available (107MB).

linux music tutorial: seq24, part 1

I promised I’d make an introductory tutorial to seq24, and now, I’ve delivered! If you’ve tried seq24 in the past and been confused by it, hopefully this will clear up some of the mysteries; if you’ve never tried it, this might just encourage you to give it a go!

There’s an unspoken “step zero” here — get yourself a working copy of seq24. I’m not sure about other distributions, but on Ubuntu, especially 64-bit, the packaged version seems very unstable. The best thing to do is to grab the 0.9.1 version from the seq24 Launchpad and install that — this new release includes a bunch of bug-fixes, and a few new features, too.

The original plan was for a straight screencast, like my earlier synth tutorials, but I was so impressed by Kdenlive that I decided to have a bit of fun with it — hopefully the fun I had comes through in the finished product.

For downloaders, there’s also a 720p WebM version available.

new blog URL!

After a couple of years of running at blag.linuxgamers.net, I’ve decided to move my blog to a new, dedicated URL. If you’re not already there, you’ll now find my blog at:

http://wootangent.net/

Why the change? Well, the old URL made it seem like a blog attached to the old linuxgamers.net site, which it definitely wasn’t — the only reason I put it there to begin with was because I already had that domain set up. I thought about just ditching the “blag.”, but this blog has very little to do with Linux gaming, so running it at linuxgamers.net wouldn’t make sense.

It may take me a day or two to sort out all of the links, so if you find anything broken, or anything that still links back to the old domain, please let me know!