
Meet Ze Ripper - Part 3

The process flow


Once each album side has been recorded as one 88.2/96kHz, 24 bit file, my first processing step is click removal. I know that many people find vinyl clicks and pops virtually innocuous. After all, they are a fact of life in this game. But my stance is that it would be a shame not to remove at least the worst offenders, now that it is actually feasible to do so. Half of my record collection is second-hand, but even the LPs I bought new are all too often marred by clicks and pops. And yes, I've always treated them well and kept them clean, graduating through several record cleaning methods including B&O, Knowin/Knosti, Allsop, and since 2004 a Nitty Gritty RCM, and no, not even the RCM is a panacea. In fact it never made any previously-cleaned record sound any better at all!

Click and pop removal

Digital click removal then. The safest option is to do it manually: play the track in an audio editor, stop when a click is heard, zoom in on the waveform, and manually reduce the click's amplitude or simply cut it out. A valid method, but soooo tedious and long-winded as to be practically impossible.

There exists software that does the job fully automatically, such as the declicker in Audition. Some of these tools work reasonably well, but in my experience, and I test-drove a lot of them, none do their job without significant damage to the sound. The solution I finally settled on (after starting to develop my own click detection and repair algorithms in Matlab!) is a semi-automatic method using ClickRepair.

[Click repair]

ClickRepair is a shareware application for PC and Mac, written and maintained by Australian retired mathematics professor Brian Davies. For me it is the only sonically transparent tool that still allows a reasonably fast workflow. ClickRepair has a programmable detection threshold, and lets you have automatic repairs done to clicks up to a specified duration or number of consecutive samples. If the presumed click exceeds that size, the program stops, displays the waveform, and asks you to decide whether it is a click or music (clicking Accept then makes the repair, while hitting the return key skips to the next click). Alternatively, it allows you to edit the repair manually.

There is a preview/pre-listen feature, but I never got to grips with it; I found after only a short while that many real clicks are visually distinct from real sound, obviating the need for listening! In those cases where it is too hard to see directly what it is, I found that false positives often follow the music's rhythm, i.e. are repetitive: when I see such an artefact twice or thrice in a couple of seconds, I know it is the music itself and I skip the correction.

Using this flow (automatic repair for clicks up to 10-20 samples wide on a hi-res recording, with the limit at 10 samples for 44.1kHz, and manual/visual handling above that), combined with a low detection sensitivity (5 to 10), I can process an album side in 10 minutes or so while remaining confident in the sonic outcome. Still, music from the likes of Kraftwerk remains a challenge to declick!
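
For the curious, here is a minimal sketch in Python of this kind of detect-and-repair step; it is emphatically not ClickRepair's actual algorithm, and the threshold and maximum click width are only illustrative. Samples whose second difference jumps well above the track's normal roughness are flagged, and short runs of flagged samples are patched by interpolating across them.

    import numpy as np

    def declick(x, threshold=8.0, max_width=20):
        """x: mono float array; repairs suspected clicks up to max_width samples wide."""
        d2 = np.abs(np.diff(x, n=2))              # the second difference reacts strongly to spikes
        sigma = np.median(d2) + 1e-12             # robust estimate of the signal's normal roughness
        suspect = np.where(d2 > threshold * sigma)[0] + 1
        y = x.copy()
        i = 0
        while i < len(suspect):
            start = suspect[i]
            j = i
            while j + 1 < len(suspect) and suspect[j + 1] - suspect[j] <= 2:
                j += 1                            # merge neighbouring flagged samples into one click
            end = suspect[j]
            width = end - start + 1
            if width <= max_width and start > 0 and end < len(x) - 1:
                # patch the damaged span by interpolating between the good samples on either side
                y[start:end + 1] = np.linspace(x[start - 1], x[end + 1], width + 2)[1:-1]
            i = j + 1
        return y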

ClickRepair reads files in any .wav format, and outputs them in the same format; in my case that is 24 bit each time. After the time invested in declicking it is always wise to make a backup to an independent disk. The original files from the Tascam can now be deleted from the computer system (assuming the DVD+RW got archived).

Housekeeping and track splitting

For the next step the files are read into Audition, while Audition itself is configured for internal 32 bit / floating point accuracy. From now on all output files shall be written in the 32 bit float format, as they are subject to linear processing and the accumulation of rounding errors must be minimised. So we stay firmly in the 32 bit domain until targeting the final delivery format (16 or 24 bit, typically).
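
Outside Audition the same policy is trivial to follow. As a sketch, assuming the Python soundfile library and made-up file names, reading the 24 bit file straight into floats and writing every intermediate result as 32 bit float looks like this:

    import soundfile as sf

    data, rate = sf.read("side_a_declicked.wav", dtype="float32")  # 24 bit integers become floats in [-1, 1)
    # ... all linear processing (trims, fades, EQ, gain) happens on this float data ...
    sf.write("side_a_work.wav", data, rate, subtype="FLOAT")       # written back as 32 bit float WAV, no rounding yet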

Now it is time for some simple housekeeping. The start and end of the file get trimmed, i.e. the empty head and tail are deleted, with a quick fade-in at the beginning of the first song and a slow fade-out at the end of the last song. If desired the inter-track 'silence' can be made quieter too, but I almost never do this as it detracts too much from the listening-to-LP experience.
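
In code terms that housekeeping amounts to something like the following sketch (mono data, with the trim points and fade lengths as placeholders; in practice they are set by eye and ear in the editor):

    import numpy as np

    def trim_and_fade(x, rate, start, end, fade_in_s=0.05, fade_out_s=3.0):
        """x: mono float array (run per channel for stereo); start/end are sample indices chosen by eye."""
        y = x[start:end].copy()                        # drop the empty head and tail
        n_in = int(fade_in_s * rate)
        n_out = int(fade_out_s * rate)
        y[:n_in] *= np.linspace(0.0, 1.0, n_in)        # quick fade-in before the first song
        y[-n_out:] *= np.linspace(1.0, 0.0, n_out)     # slow fade-out after the last song
        return y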

After this the individual tracks are demarcated in the editor. In Audition this is very easy, with cue marks. These are metadata added to the .wav file and carried through all further processing. (Note: to enable writing cue marks to files you have to tick "save extra non-audio information" in Audition's save/save-as menu. Further, before writing to CD or DVD the cue marks have to be removed again; this is important as their unintended presence will foul up disc authoring!) To the cue marks I add the track number and title.
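
For those splitting outside Audition, a sketch of the operation, assuming the cue marks have already been read out of the file's standard 'cue ' chunk as sample offsets (extracting the chunk itself is left out here, and the names are made up):

    import soundfile as sf

    def split_tracks(infile, cue_samples, titles):
        """cue_samples: track start offsets in samples, as recovered from the .wav 'cue ' chunk."""
        data, rate = sf.read(infile, dtype="float32")
        bounds = list(cue_samples) + [len(data)]
        for i, title in enumerate(titles):
            track = data[bounds[i]:bounds[i + 1]]
            sf.write(f"{i + 1:02d} {title}.wav", track, rate, subtype="FLOAT")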

Noise reduction

Broadband noise reduction is often offered in audio editing software. Conceptually this is an extremely powerful tool, promising the eradication of continuous-time noise, hiss, and even vinyl roar. In practice these tools are often very destructive to sound quality and they should only be used sparingly, when really needed, and when the user has attained proficiency with them. I only very recently started using noise reduction, and I don't have any salient tips or tricks to relate yet.

Equalisation

Equalisation. I admit being guilty of it. In my experience, there are not too many records that are good enough not to benefit from a bit of tonal massaging. A lot of my records stem from the 80s, and all too often I find them lethally dull and flat sounding. Perhaps some of this can be attributed to teenage-me playing them at 3g VTF with a cheap conical stylus, but then again, I always was wise enough to first tape the album and then never play it again (until I got myself a better deck or record player, of course). No, my pet theory is that small markets like Belgium were at the bottom of the food chain when it came to distributing the offspring of the presses:

Stampers wear, and the difference in treble content and dynamics between the first and the very last records coming off the same stamper must be significant, especially when time to market dictated that stampers remain employed well past their 'use by' dates to eke out another 1000, 2000, or 4000 frisbees. But even further up in the flow similar things could happen. World-wide releases of mega acts were often done by issuing copies of copies of copies of master tapes to cutting and pressing plants all over the globe. In the case of analogue copies the higher generations suffered treble and dynamics losses too. Early digital recorders were adopted to circumvent this, bringing their own flavour of corruption. So most of my 80s international pop and rock collection sounds bad, whereas albums from local artists sound remarkably better. Given the immense possibilities of digital processing it makes no sense now not to try and improve on obviously sub-par discs.

Equalisation can be thorny business. The first LP I ever converted was my pristine yet bad-sounding copy of Peter Gabriel's first. This is a record with a wild variety of music styles, and I spent literally weeks giving each song individually what it deserved. This was clearly not the way to do it, and nowadays I run EQ for a whole album or for a whole side at once. Only in very rare and extreme cases do I try to compensate for end-of-side deterioration, with one EQ setting for the first three tracks and a more trebly one for the remainder of the side.

Like audio components, software equalisers tend to have their own sound. The parametric in Audition sounds a bit grainy and coarse to my ears, yet I often use it as it allows me to zoom in relatively quickly onto the sound I have in mind. I've tried demo versions of more advanced EQs, including linear phase ones, which often indeed sounded cleaner. But I dropped them because I just found them less accessible than good old Audition. What's the value of superior sonics if you never arrive at the desired result?

There is also another class of EQs I often use. They are a bit oddball in that they are home-brew. Using Matlab software at the company I generated (and stored in small .wav files) linear phase impulse responses for spectral tilt in a number of increments, thus emulating the tilt controls found on Quad and old Luxman preamplifiers. Often a record just needs some freshening up, and the application of a bit of upwards tilt then does the job as transparently as is feasible. This is then implemented using the convolution function of Audition, a mathematical operation that convolves the music file with the filter's impulse response file in the time domain, i.e. without suspect trickery like bouncing into and out of the frequency domain.
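
To give an idea of the principle (the pivot frequency, slope and tap count here are illustrative picks, not my actual Matlab values), such a linear-phase tilt filter can be sketched in Python like this:

    import numpy as np
    from scipy.signal import firwin2, fftconvolve

    def tilt_filter(slope_db_per_octave=0.5, pivot_hz=1000.0, fs=96000, numtaps=4095):
        """Linear-phase FIR whose gain rises by slope_db_per_octave above the pivot and falls below it."""
        freqs = np.linspace(0.0, fs / 2.0, 512)
        gain_db = slope_db_per_octave * np.log2(np.maximum(freqs, 20.0) / pivot_hz)
        return firwin2(numtaps, freqs, 10.0 ** (gain_db / 20.0), fs=fs)   # odd tap count -> linear phase

    def apply_tilt(x, h):
        # mathematically this is plain time-domain convolution; the FFT is only used to speed it up
        return fftconvolve(x, h, mode="same")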

At any rate, a good strategy when messing with the sound of a record is to first make it to taste, and then back off a bit. Over-doing things is never wise in the long term, and any corrections exceeding 3dB should be done with utter care. Keep it subtle: this is about record preservation, not about making a remix!

When comparing before and after, keep in mind that the ear/brain mechanism initially prefers the louder sound. So when the EQ is boosting a sizeable range, do reduce overall gain a bit to compensate for the increased loudness. Otherwise you'll just be fooling yourself.
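
A tiny sketch of that compensation, using RMS as a rough stand-in for loudness (a proper loudness measure would be better still, but this already removes most of the bias):

    import numpy as np

    def match_rms(original, processed):
        """Scale the EQ'd version so its RMS level matches the original before comparing."""
        rms = lambda s: np.sqrt(np.mean(np.square(s)))
        return processed * (rms(original) / rms(processed))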

Gain and balance

Often the source file retrieved from the recorder or ADC will be a bit low in level, as during the capture overload and clipping of the ADC were avoided. In a world where everything seems to get louder by the day it seems pointless to leave a file at a needlessly low level, so the final step in the LP restoration is to bring its overall level up. How high up? The obvious answer is to bring the two or three highest peaks of an album up to 0dBFS (full scale). This obviously requires looking at both sides first before deciding on the desired amount of gain, but this way the original song-to-song dynamics of the LP will be preserved, as the artist (hopefully) intended.
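
In sketch form, with placeholder file names, the whole gain step is no more than this:

    import numpy as np
    import soundfile as sf

    sides = ["side_a.wav", "side_b.wav"]                 # placeholder names
    peak = max(np.max(np.abs(sf.read(f, dtype="float32")[0])) for f in sides)
    gain = 1.0 / peak                                    # one common gain brings the album's highest peak to 0dBFS
    for f in sides:
        data, rate = sf.read(f, dtype="float32")
        sf.write(f.replace(".wav", "_norm.wav"), data * gain, rate, subtype="FLOAT")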

Some people will argue that the 0dBFS level should never be hit, because this may cause overload during replay, in the DAC's digital reconstruction filter. While this is undeniably correct, listening tests conducted by UK journalist Keith Howard have indicated that such excursions, when they occur only rarely, are totally inaudible.

Likewise, when a record is overall very quiet, and has only one or two massive peaks, you may find that the judicious application of limiting or even hard clipping on these peaks is remarkably inaudible, too. It is the relentless use of continuous-time limiting on modern records (the so-called shredding) that makes them so tiring to listen to. The same things occurring once or twice on a record can easily be forgiven. Indeed, I know a classical CD of excellent sound quality that has significant clipping, but applied where it doesn't detract from the sound. Stating that all clipping is always bad strikes me as a bit dogmatic. You don't have to believe me now, but give it a try: one or two records in your collection may benefit from it.
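
If you want to experiment, a sketch of the brute-force variant looks like this; the extra 1.5dB of gain is an arbitrary example, not a recommendation:

    import numpy as np

    def clip_rare_peaks(x, extra_gain_db=1.5):
        """Push the level up a little further and hard-clip whatever still overshoots full scale."""
        y = x * 10.0 ** (extra_gain_db / 20.0)
        clipped_samples = int(np.sum(np.abs(y) > 1.0))   # sanity check: this should stay a tiny number
        return np.clip(y, -1.0, 1.0), clipped_samples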

And on to the final product ...

Once all editing and processing is done I typically have one file for each album side, each file in 32 bit floating point, double sample rate, and with cue marks.

The first step towards CD and data-reduced file formats is sample rate conversion to 44.1kHz. I do this with Audition (quality ‘999', pre-filter OFF) or with external tools such as the slightly superior SoX and r8brain (I'd love to try iZotope SRC). The result is a downsampled floating point file with cue marks.
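
As a rough stand-in for those dedicated resamplers, just to show the operation itself, a polyphase sketch in Python could look like this:

    from scipy.signal import resample_poly

    def to_44k1(x, rate):
        if rate == 88200:
            return resample_poly(x, 1, 2, axis=0)        # clean divide-by-two
        if rate == 96000:
            return resample_poly(x, 147, 320, axis=0)    # 96000 * 147 / 320 = 44100
        raise ValueError("unexpected source rate")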

This is then read back into Audition, where it is converted to 16 bit with spectrally-flat 1 bit TPDF dither. Spectrally-shaped dither (with more noise at higher, less audible frequencies) is not used, due to the nature of a vinyl record's innate noise floor. Noise shaping techniques are avoided altogether since these tend to be less robust than dither with respect to any later digital signal processing, such as in room equalisers and digital crossovers. (Am I secretly contemplating a switch-over to Meridian loudspeakers?) Finally, Audition's batch mode is used to spit out individual 44.1k/16b tracks, as indicated by the cue markers.
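
One common recipe for that word-length reduction, sketched in Python (this shows the general technique, not Audition's exact implementation):

    import numpy as np

    def dither_to_16bit(x, rng=None):
        """Flat TPDF dither (two summed rectangular sources of half an LSB each) before rounding to 16 bit."""
        rng = np.random.default_rng() if rng is None else rng
        scaled = x * 32767.0
        tpdf = rng.uniform(-0.5, 0.5, scaled.shape) + rng.uniform(-0.5, 0.5, scaled.shape)
        return np.clip(np.round(scaled + tpdf), -32768, 32767).astype(np.int16)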

These tracks are directly used in CD authoring (too grand a word for what amounts to drag and drop). I also pull them into iTunes, which then converts them to a reduced format. Normally I use 128kbps MP3, with Joint Stereo coding enabled. Contrary to popular belief Joint Stereo is not a destructive process, but rather a more efficient coding scheme that represents the Left/Right stereo signal as Mid/Side, cashing in on the redundancy between the two channels (while L and R have a lot in common, M and S are largely independent) and on the generally less informationally dense Side channel to attain better coding. Roughly speaking, 128k MP3 with JS about halves the quality gap between plain 128k MP3 and 128k AAC (which uses Joint Stereo by default). So even while I only use iTunes and various iPods my choice is 'vile' MP3, purely for reasons of compatibility. MP3 will always play on any machine, which is worth something in the greater scheme of sound archival. Yet, I see with some pleasure that our newest car's B&O entertainment system sports an in-dash hard disk with MP3 and AAC compatibility!
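
A toy illustration of why Mid/Side coding helps rather than hurts: the transform is fully reversible, and for typical stereo material the side signal carries far less energy than either channel, so it needs far fewer bits.

    import numpy as np

    def mid_side_energy_db(left, right):
        """Energy (dB) of the mid and side signals; on most mixes the side figure is far lower."""
        mid, side = 0.5 * (left + right), 0.5 * (left - right)   # reversible: L = M + S, R = M - S
        energy = lambda s: 10.0 * np.log10(np.mean(np.square(s)) + 1e-20)
        return energy(mid), energy(side)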


Now back to the mother files, high-res 32 bit. In the past I used to filter these with one of Peter Craven's "apodising" low pass filters (coefficients can be found on Keith Howard's website), but lately I stopped bothering with this additional step, as it brought no tangible benefit. Audition is used once more for streaming out individual tracks, now at 24 bit, made by truncating the 32 bit source data. I don't believe in dither at this low level, the more so as the music itself must already be self-dithering by nature.

From these tracks I can make DVD-As, using Tascam's edition of Minnetonka discWelder Bronze, or audio-only DVD-Vs, using Audio DVD Creator, a cheap tool that works ... sort of. There are, of course, many other tools available for PC, Mac, and Linux platforms, such as Cirlinca's.

The final step is presentation. I feel that in general things only invite use if they are attractive enough. Sickened by the cheap appearance of generic CD-Rs with hand-written titles I invested the massive sum of 100 Euro in a Canon inkjet printer with CD compatibility (and emphatically not a dedicated CD label printer). Relevant artwork can readily be downloaded from the internet, or failing this, a simple title and track list is typed in, and so reasonably pro-looking CDs and DVDs can be made in a matter of minutes. Add jewel case, inlay card (printed on thick paper), and ... done.

[Forward to Part IV] | [Back to Part I] | [Back to Part II]

© Copyright 2010 Werner Ogiers - werner@tnt-audio.com - www.tnt-audio.com
