Another classification, referring especially but not only to digital communications, distinguishes two types of jitter: deterministic jitter and random jitter. Deterministic jitter includes all jitter components that are normally bounded and to some extent "known".
Random jitter, instead, includes all jitter components which do not fall into the previous class. This jitter has no upper bound, but it normally has a Gaussian distribution and therefore follows well-defined statistical rules.
Why is this classification so important? First of all, if jitter is measurable and predictable, then, using very sophisticated digital signal processing algorithms, it is at least in theory possible to adopt countermeasures that reduce the bit error rate. If instead jitter is random, and hence not predictable, no such solution is available.
Moreover, the effects of the two types of jitter, and the methods used to measure them, are different.
A further, less precise classification is based on the probable cause of jitter.
Typical components of jitter can be related to data, power supply, or other causes.
Data-related jitter is the component of jitter that depends on the data transmitted. It can be assessed by using specific test patterns (see below).
Power-supply-related jitter comprises the jitter component(s) whose frequencies (50 or 60 Hz and their harmonics) make it very likely that they are caused by modulation of the output signal by residual power supply ripple.
Obviously, there is a certain amount of jitter that can be related neither to data nor to the power supply: this is due in part to the phase noise of the clocks, and in part to other non-linear phenomena, but in general it is not possible to pinpoint the real source.
This classification is often used together with spectral analysis of the analog output, to give a tentative explanation of the spectrum lines.
The RMS error, or noise, introduced by random jitter is shown in the screenshot hereafter: an FFT of a mathematical simulation of a sinusoidal signal with 11.025kHz frequency and 0dB level, sampled at 44.1kSps and 16 bits, respectively with (red) and without (green) a Gaussian jitter of 1 ns RMS.
With jitter, the noise floor rises by 10dB. Do not think this rise is irrelevant: the best converters today sport at most 115dB of dynamic range, and here we are at far lower levels. The noise floor appears very low only because of the very long sample (1M points), but the rise is real.
This simulation shows very clearly that random jitter must be kept (far) below 1 ns in order to achieve optimal performance.
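For readers who want to reproduce this, here is a minimal sketch of the simulation in Python with NumPy (the original article does not specify the tool; the tone, sample count and 1 ns jitter figure are the ones given above):

```python
import numpy as np

FS = 44100.0      # sampling rate (samples/s)
F0 = 11025.0      # FS/4 test tone at 0 dB, as in the simulation above
N = 1 << 20       # about 1M points, as above
J_RMS = 1e-9      # 1 ns RMS Gaussian sampling jitter

rng = np.random.default_rng(42)
t = np.arange(N) / FS

ideal = np.sin(2 * np.pi * F0 * t)                    # perfect sampling instants
jittered = np.sin(2 * np.pi * F0 * (t + rng.normal(0.0, J_RMS, N)))

# Jitter-induced error and the resulting signal-to-noise ratio
err = jittered - ideal
snr = 10 * np.log10(np.mean(ideal ** 2) / np.mean(err ** 2))

# Small-error prediction: err ~ 2*pi*F0*eps*cos(...), hence SNR ~ -20*log10(w*J)
snr_pred = -20 * np.log10(2 * np.pi * F0 * J_RMS)

print(f"SNR limited by 1 ns RMS jitter: {snr:.1f} dB (predicted {snr_pred:.1f} dB)")
```

The result comes out around 83 dB, well below the roughly 98 dB theoretical floor of a 16-bit conversion, which is exactly why sub-nanosecond jitter is needed.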
In fact, approximate calculations () show that the RMS error, or noise, introduced by jitter (of any type) amounts, in the worst case of a 20kHz signal and a 16-bit conversion, to about 4 LSB per ns of jitter: clearly, (widely) sub-nanosecond jitter is required for optimal performance.
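The 4 LSB per ns figure can be checked directly: the worst-case error equals the maximum slope of the signal times the timing error, and a sine's slope peaks at its zero crossings (a sketch; the 2^15 LSB amplitude corresponds to a full-scale 16-bit sine):

```python
import math

F = 20000.0            # worst-case audio frequency (Hz)
A_LSB = 2 ** 15        # amplitude of a full-scale 16-bit sine, in LSB

# Maximum slope of A*sin(2*pi*F*t) is 2*pi*F*A (at the zero crossings),
# so a timing error of 1 ns produces at most this many LSB of error:
err_lsb_per_ns = 2 * math.pi * F * A_LSB * 1e-9
print(f"worst-case error: {err_lsb_per_ns:.2f} LSB per ns of jitter")  # about 4.12
```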
The effects of sinusoidal jitter at a fixed frequency, applied to a fixed-frequency sinusoidal signal, have also been studied in detail.
The results were published in , and can be summarized by saying that, from the technical point of view, the effect of jitter is a modulation of the audio program.
In practice, sinusoidal jitter with peak value J (in seconds) applied to a sinusoidal signal of frequency w (in Hz) generates two sidebands at a spacing equivalent to the jitter frequency, each with an amplitude relative to the signal of

20·log10(pi·w·J/2)  [dBr]
This is how the spectrum looks in the presence of purely sinusoidal jitter (from a simulation). In practice you will never get such a clean spectrum, as different forms of jitter are normally present at the same time.
As another example, a sinusoidal jitter with 300ps peak value applied to a 10kHz sinusoidal tone creates two sidebands with a level of -106dBr. Note that with a 100Hz tone the sidebands would drop to -146dBr, while with a 20kHz tone they would rise to -100dBr. This behaviour is very important, as it gives us one of the simplest ways to measure jitter.
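These figures are easy to verify numerically. The sketch below assumes the small-modulation approximation in which each sideband sits at 20·log10(pi·f·J/2) dB relative to the signal, which matches the levels quoted above to within rounding:

```python
import math

def sideband_dbr(f_signal, j_peak):
    """Level of each sideband relative to the signal (dBr) for sinusoidal
    jitter of peak value j_peak (seconds) on a sine of frequency f_signal (Hz)."""
    return 20 * math.log10(math.pi * f_signal * j_peak / 2)

for f in (100.0, 10_000.0, 20_000.0):
    print(f"{f:7.0f} Hz tone, 300 ps peak jitter: {sideband_dbr(f, 300e-12):.1f} dBr")
```

Note the linear dependence on the signal frequency: dividing the tone frequency by 100 lowers the sidebands by exactly 40 dB, doubling it raises them by 6 dB.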
In case of oversampling, an interesting effect takes place. The sampling jitter bandwidth can extend up to half the sampling frequency, while the error induced by a given amount of jitter is independent of the sampling frequency. From the audio perspective, however, we are interested only in the audio-band noise, so in the case of uniformly distributed jitter the noise power in the audio band is reduced by the oversampling factor. Unfortunately, jitter is often concentrated at the lower frequencies, and the effect is not so marked.
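Under the assumption of uniformly distributed (white) jitter, the in-band benefit is easy to quantify (a sketch only; as noted above, real jitter spectra are rarely white, so these numbers are an upper bound):

```python
import math

# If jitter noise were spread uniformly up to Fs/2, raising Fs would leave
# the total noise unchanged but push most of it out of the audio band:
for osf in (1, 2, 4, 8, 16):
    reduction_db = 10 * math.log10(osf)   # in-band noise power drops by 1/osf
    print(f"{osf:2d}x oversampling: audio-band jitter noise down {reduction_db:.1f} dB")
```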
In the case of single-bit converters with noise shaping, instead, the noise moved to the upper frequencies can reach very high levels, and is present even with very low level signals. This noise intermodulates with jitter, so that a relatively small amount of jitter can cause a significant rise of the noise floor. However, commercial converters use various techniques to avoid this issue, such as filtering out the high frequency noise before sampling, or using multi-bit converters.
Far less clear are the effects of jitter on human hearing, and its tolerability.
Studies seem to demonstrate that the lowest amount of sampling jitter that makes an audible difference is 10ns RMS with a 17kHz pure tone; with music, the minimum amount seems to be 20ns RMS.
However, sensitivity seems to depend heavily on jitter frequency. Very low frequency jitter, up to 100Hz, causes sidebands right next to the main signal and seems to be generally masked by the modulated signal itself: here the threshold seems to be as high as 100ns. The threshold drops steeply between 100 and 500Hz, reaching 1ns peak. From this point the threshold continues to decrease at 6dB/octave, reaching 10ps (peak) at 24kHz at very high sound levels. However, later studies seem to show that this last sensitivity figure is rather overestimated.
Others point out that most people can perceive tones buried 25dB or more below white noise; therefore it would be necessary to keep the jitter sidebands below this level, which in the case of the most recent DACs is really very low. Note that if this is not achieved, the jitter may or may not be masked, depending on the situation. However, that a tone buried 25dB below the noise floor of a CD player, which for a 16-bit conversion means somewhere below -120dBFS, can be detected in a normal listening environment and situation is, in my personal view, still entirely to be demonstrated.
In my personal experience, and I would dare say in common understanding, there is a huge difference between the sound of low- and high-jitter systems. When the jitter amount is very high, as in very low cost CD players (2ns), the result is somewhat similar to wow and flutter, the well known problem that typically affected compact cassettes (and, far less evidently, turntables) and was caused by the not perfectly constant speed of the tape. The effect is similar, but here the variations have a far higher frequency and for this reason are less easy to perceive, yet equally annoying. Very often in these cases the rhythmic message, the pace of the most complex musical plots, is partially or completely lost; music is dull, scarcely involving and apparently meaningless. Add harshness, and you have the typical "digital" sound, in a word.
In lower amounts, the effect above is difficult to perceive, but jitter is still able to cause problems: reduction of the soundstage width and/or depth, lack of focus, sometimes a veil over the music. These effects are however far more difficult to trace back to jitter, as they can be caused by many other factors.
I also remember someone reporting that he had obtained a very pleasant sound at a certain point while tweaking his own player, only to discover, to his own surprise, that he had by mistake been injecting high-level 600Hz jitter into the clock... The sound was reported as soft compared with the original player, so it is possible that the jitter added some kind of veil that reduced the "digitality" of the sound.
I have recently tested the Weiss Medea, one of the lowest-jitter DACs in existence, and its precision and detail are absolutely outstanding. I cannot be completely sure, but I think low jitter plays a part in this.
However, I have two other systems whose jitter levels are similar (to each other... the Medea is out of reach at reasonable prices), and their sound is definitely not in the same league: the one with higher jitter has a far more defined and precise sound. So low jitter is important, but it is not the only important thing.
From what you have read so far, you might think that the only relevant form of jitter in audio is sampling jitter, and that it makes hardly any sense to measure any other kind of jitter.
This is not completely true. In fact, it is often important to be able to assess the jitter of the various clocks too, in order to make them as jitter-free as possible.
How jitter is measured depends essentially on what kind of jitter you need to measure: clock jitter, digital data flow jitter or analog audio output jitter.
The most precise test would require measuring with precision the timing error between different signal edges, and analyzing how that timing error is distributed over time.
The analysis cannot be limited to the jitter of a single clock cycle: a very small amount of jitter between two adjacent edges gives no information about what happens between two edges far away from each other, as the jitter components of each cycle in between can add up or cancel out. The distribution of the jitter, or timing error, is far more important than its peak or average value. This holds for both digital transmission and digital audio jitter.
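To see why, consider a toy model (pure illustration, not any real clock): per-edge timing errors that follow a random walk, so that adjacent edges differ very little while edges far apart drift much more:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000
# Toy clock model: each edge shifts by a tiny Gaussian step (1 ps RMS)
# relative to the previous one -- a random-walk timing error.
phase_err = np.cumsum(rng.normal(0.0, 1e-12, N))

adjacent = np.std(np.diff(phase_err))   # edge-to-edge jitter: about 1 ps RMS
long_term = np.ptp(phase_err)           # error between far-apart edges

print(f"adjacent edges: {adjacent * 1e12:.2f} ps RMS")
print(f"over the whole run: {long_term * 1e12:.0f} ps peak-to-peak")
```

Adjacent edges look nearly perfect here, yet over the full run the timing error accumulates to hundreds of picoseconds: this is exactly why the error distribution matters more than a single-cycle figure.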
In high speed digital transmission, jitter is a problem because it can cause transmission errors. Obviously, if the peak timing error is large enough to potentially cause transmission errors, some error correction system is required, but this by itself is not an issue: the real issue arises when transmission errors become so frequent as to seriously degrade the line performance, and this depends on the timing error distribution, not on its peak value.
In digital audio, instead, it is important to discriminate periodic from random jitter, and to identify the frequency of periodic jitter, as this can give a very precise indication of its source.
In the audio field, however, this approach is very hard, if not impossible, to apply, as a precision of the order of a few picoseconds (1E-12 seconds, the time in which a light beam covers 0.3mm - 1/80") is required while dealing with signals whose periods are in the range of 100 microseconds (100E-6 seconds) at best. Instruments capable of this performance probably exist today, but their cost is definitely very high.
This solution is without doubt the ideal one for clocks and digital data flows. However, as we have seen, it is not possible to assess all the jitter components at the output of a digital system just by looking at the internal digital signals, and on the other side it is not practical, or even possible, to evaluate jitter in the time domain with the precision required in the audio frequency range.
So, for conversion jitter, it is normally necessary to approach the problem in a less direct way.
The method commonly used is, as we have anticipated, based on the evaluation of the sidebands generated by jitter on both sides of an audio signal. In this way, it is possible to assess jitter just by analyzing the spectrum of the analog output of a system driven by a very pure digital signal.
It is just necessary to perform a spectrum analysis of the analog output and look for symmetrical sidebands. Given the relation above, the higher the signal frequency, the higher the sidebands, so a frequency of Fc/4 (11025Hz, when Fc=44100Hz) is normally used. By inverting the previous equation, it is possible to calculate the RMS jitter contribution of each pair of sidebands, and to obtain the total RMS jitter by adding up the contributions (on a power basis).
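A hedged sketch of this procedure follows. The sideband levels are invented for illustration; the inversion assumes each sideband lies at 20·log10(pi·f·Jpeak/2) dBr, consistent with the figures earlier in the article, and the contributions are combined as the root of the sum of squares:

```python
import math

F_TONE = 11025.0    # Fc/4 test tone, as above

def pair_rms_jitter(level_dbr, f_tone=F_TONE):
    """RMS jitter implied by one symmetrical sideband pair, inverting
    level = 20*log10(pi * f * J_peak / 2); RMS of a sinusoid = J_peak / sqrt(2)."""
    j_peak = 2 * 10 ** (level_dbr / 20) / (math.pi * f_tone)
    return j_peak / math.sqrt(2)

# Hypothetical measured levels of each sideband pair, in dBr
measured = [-110.0, -115.0, -120.0]
# Independent contributions add on a power (sum-of-squares) basis
total = math.sqrt(sum(pair_rms_jitter(lv) ** 2 for lv in measured))
print(f"total sampling jitter: {total * 1e12:.0f} ps RMS")
```

With these invented levels the result lands in the 150 ps region, i.e. the order of magnitude of a good commercial player as discussed below.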
This methodology tends to include in the output jitter any phenomenon that causes sidebands: these can also derive, for example, from intermodulation with residual harmonics of the power supply. However, given the expected very low level of these harmonics, and the equally low level of intermodulation distortion expected in a modern low power analog circuit, the intermodulation component, if any, should be only a minor part of the analog spectrum. So we can safely assume that the sidebands essentially depend on the various kinds of jitter.
A further enhancement of this technique consists in adding a low frequency toggling of the LSB (least significant bit) of the driving digital signal. Usually the toggling, which is synchronized with the high level signal, has a cycle of 192 samples, so that its frequency, in case of Fc=44.1kHz, is 229.6875Hz.
This is in practice a very low level square wave, which adds to the spectrum a sequence of lines at 229.6875Hz and all its odd harmonics. This is useful because it introduces into the digital SPDIF flow a fixed, repetitive pattern, whose interaction with the clock (through the intersymbol interference mechanism, ) causes the odd harmonics of 229.6875Hz around Fc/4 to assume an anomalous level. By measuring this level it is possible to evaluate the data-related jitter sensitivity.
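A sketch of how such a test signal could be generated (Python/NumPy; the main-tone level of half full scale and the total length are illustrative assumptions, while the Fc/4 tone and 192-sample toggle period are the ones described above):

```python
import numpy as np

FS = 44100
N = 192 * 256              # a whole number of toggle cycles

n = np.arange(N)
# Main tone at Fc/4; the half-full-scale level is an arbitrary choice here
tone = np.round(0.5 * 32767 * np.sin(2 * np.pi * (FS / 4) * n / FS)).astype(np.int64)

# LSB toggling: square wave with a 192-sample period = 229.6875 Hz at 44.1 kHz
lsb = ((n // 96) % 2).astype(np.int64)
test_signal = tone + lsb

print(f"toggle frequency: {FS / 192} Hz")   # prints 229.6875 Hz
```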
So, is there any hope for the poor man interested in measuring sampling jitter? Well, there are a few not-too-expensive solutions; all the ones I know of are based on qualitative FFT spectrum analysis.
As far as output audio jitter is concerned, a good 24-bit sound card is mandatory. However, both the FFT accuracy and the generation of the test signal are serious issues. Unfortunately, they can be solved in an immediate way only with relatively expensive audio spectrum analyzer software: in fact, we are talking about maintaining at least -140dB of dynamics throughout the whole FFT process, so word size and rounding effects must be taken into adequate consideration.
The best unit I have ever measured, the Weiss Medea, sported a measured sampling jitter of just over 100ps (see diagram above). In good quality but still reasonably priced players, the measured jitter can be as low as 200ps. It is not so difficult to keep it under 250-300ps in DIY units without taking special precautions (as in the TNT 1541, for example).
As a matter of fact, I have never measured jitter higher than 1700ps in any unit (and that was in an... "audiophile" brand 24/96 DIY card, by the way), even though I must admit I have never measured really basic entry level units.
A last note: the different diagrams are directly comparable, because they are derived from the same CD and the diagram scales are in decibels relative to the central peak (that is, the central peak in all cases reaches the 0dB level).
© Copyright 2005 Giorgio Pozzoli - www.tnt-audio.com