Spectral Deconvolution algorithm
Technical memo
Norm O'Neill, Tom Eck, Alexander Smirnov, Brent Holben, S. Thulasiraman
Mar. 28, 2005
revised June 22, 2005
revised April 26, 2006
Version 4.0
    Problems to be resolved
    Resolution of Version 3.0 problems in Version 4.0
Version 3.0
    Problems to be resolved in Version 2.0
    Resolution of Version 2.0 problems in Version 3.0
    Remaining problem in Version 3.0
    Some comparative results
    QA issues for Version 3.0
        Dynamic QA
        Pass / no-pass type of QA filter
        Relationship with Dubovik inversion outputs
Version 2.0
    Problems resolved with respect to the Version 1.0 algorithm
    Resolution of Version 1.0 problems in Version 2.0
Version 1.0
References
In this technical memo we presume that the reader has a copy of the original defining
papers in hand and accordingly describe only the changes from the algorithm described in
those papers (O'Neill et al., 2001 and O'Neill et al., 2003). The history of the algorithm¹,
in terms of Matlab program version numbers, is described below.
Version 4.0
Problems to be resolved
A problem with the Version 3.0 MOE algorithm was an over-sensitivity to the estimated
error bars in αf (and in consequence in η, τf and τc). It will be recalled that, in the MOE
algorithm, the estimated error bars in αf were employed to achieve a smooth transition in
the forcing condition (η = 1). However, it became apparent when processing large-AOD
smoke data over the Mongu site (Tom Eck) that this (a) created a situation where η
values rarely got close to unity, because the stochastic error estimates are typically quite
large, and (b) induced a ceiling effect in η, with the consequence that a strong correlation
was created between τf and τc (which wasn't a problem before the MOE type of error-bar
forcing). Figure 1 shows a particularly illustrative example of this effect in the Mongu
data of 2004.
A second, more minor, problem was that the empirically developed stochastic expression
for the rms error in α' was found to be more complicated than was merited (a much
simpler expression was found to reproduce, about equally well, the empirical results of
the stochastic simulations of the processing of an ensemble of noisy AOD spectra). This
new rms expression is

Δα' = 10 σ(τa) / τa

where σ(τa) is the rms error in the polynomial-fitted AOD.
¹ Written originally as a Matlab program called tauf_tauc.m but converted, for example, to C++ for
AERONET.
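As a rough illustration, the simplified rms expression can be evaluated directly from a polynomial fit of the AOD spectrum. This is a Python sketch, not the actual tauf_tauc.m Matlab code; the 2nd-order fit in log-log space and the approximation of the relative rms error by the ln-space residual are my assumptions.

```python
import numpy as np

def rms_alpha_prime_error(wavelengths_um, tau_a):
    """Simplified estimate of the rms error in the spectral derivative:
        delta_alpha' = 10 * sigma(tau_a) / tau_a,
    where sigma(tau_a) is the rms error of the polynomial-fitted AOD.
    Sketch only: a 2nd-order fit of ln(tau_a) vs ln(lambda) is assumed,
    and sigma(tau_a)/tau_a is approximated by the rms residual in
    ln(tau_a) space.
    """
    x = np.log(np.asarray(wavelengths_um, dtype=float))
    y = np.log(np.asarray(tau_a, dtype=float))
    coeffs = np.polyfit(x, y, 2)                  # 2nd-order spectral polynomial
    residuals = y - np.polyval(coeffs, x)
    sigma_rel = np.sqrt(np.mean(residuals ** 2))  # ~ sigma(tau_a)/tau_a
    return 10.0 * sigma_rel
```

A perfectly power-law AOD spectrum yields a near-zero error estimate, while spectral distortion (noise or channel artifacts) drives it up.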
Figure 1 - Artificial correlation between τc and τf and ceiling effect in η induced by the
Version 3.0 (MOE) physical forcing when η is near unity.
² A similar approach was taken when η → 0.
Resolution of Version 3.0 problems in Version 4.0
A more general representation of the type of weighted averaging which occurs when η →
1 (and which includes the MOE case) is²

αf(αf(1)) = w(αf(1)) α + [1 − w(αf(1))] (αf(1) + Δαf)

where αf(1) is the uncorrected estimate of αf, αf(αf(1)) is the corrected estimate, w(αf(1)) is
a weighting function and Δαf is the estimated rms error in αf. The pragmatic approach to
eliminating the problem discussed above is to weight the recomputed αf mean more
towards α (towards η = 1) than towards αf(1) + Δαf, rather than taking a straight
unweighted mean of the same two quantities as was done for the MOE (Mean Of
Extrema) method of Versions 2 and 3 (the same idea applies to the η = 0 forcing). This
means w(αf(1)) > 0.5 (where w(αf(1)) = 0.5 for the MOE approach). The justification is
that the part of the normal curve below αf(1) = α should have some influence on the
corrected αf value (as opposed to none at all in the MOE). The details of this correction,
in terms of the analytical development of the quadratic expression for
w(αf(1)) = b0 + b1 αf(1) + b2 [αf(1)]² and the 3rd-order expression for αf(αf(1)), are
available from Norm O'Neill. Figure 2 shows how the correlation between τc and τf and
the ceiling effect in η are significantly reduced with the application of the Version 4.0
algorithm. This result is very similar to turning physical forcing off (without the
infringements of the η = [0, 1] limits which plague the case of no physical forcing); in
other words, virtually no new correlation is induced by the algorithm. The residual
correlation could well be real (coarse-mode smoke being generated at the same time as
fine-mode smoke particles).
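A minimal numerical sketch of the weighted averaging described above (in Python rather than the original Matlab); the constant weight used for the Version 4.0 case is a stand-in assumption, since the memo does not reproduce the quadratic expression for the weighting function:

```python
def forced_alpha_f(alpha_f1, alpha, d_alpha_f, w=0.5):
    """Weighted-average physical forcing applied when the error bar of the
    uncorrected alpha_f(1) overlaps alpha (i.e. eta approaching 1).
    w = 0.5 reproduces the Version 3.0 MOE (straight mean of alpha and
    alpha_f(1) + delta_alpha_f); Version 4.0 uses w > 0.5 to pull the
    corrected value toward alpha (toward eta = 1).
    """
    return w * alpha + (1.0 - w) * (alpha_f1 + d_alpha_f)

# MOE (Version 3.0) vs. a Version 4.0-style weight
moe = forced_alpha_f(1.0, 1.4, 0.2)         # 0.5*1.4 + 0.5*1.2 = 1.30
v4 = forced_alpha_f(1.0, 1.4, 0.2, w=0.7)   # 0.7*1.4 + 0.3*1.2 = 1.34
```

Because w > 0.5 weights the corrected mean toward α, the Version 4.0 result lies closer to the η = 1 limit than the MOE result, as intended.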
Figure 3 shows a schematic of the Version 3.0 and Version 4.0 averaging schemes.
Figure 2 - Significantly reduced correlation between τc and τf using the Version 4.0
algorithm.
Figure 3 - Conceptual physical-forcing illustration for the Version 3 and Version 4
algorithms. The solid vertical lines are the error bars in αf. The dotted red line is the
Version 3.0 (MOE) solution while the dashed bold red line is the Version 4.0 (quadratic
weighting) solution. The superscript (1) refers to the uncorrected solution. The bold solid
red line is the brute-force correction without any error smoothing, while the (non-bold)
solid red line (mostly hidden by the bold solid red line) is the uncorrected solution.
[Figure 3 annotations: α; αf,max (Mie theoretical); αf(1) − Δαf, αf(1), αf(1) + Δαf;
η(1) > 1; η(1) < 1; MOE Version 3: αf = α/2 + (αf(1) + Δαf)/2; Version 4 compromise;
η(1) = 0]
Version 3.0
In March of 2005, Version 3.0 of tauf_tauc.m, to be implemented in the new
AERONET processing system (called Version 2.0), was delivered. The problems which
motivated a new version and the solutions effected are detailed below.
Problems to be resolved in Version 2.0
(i) Version 2.0 still produced anomalous values for very large input AOD errors
(discovered when the algorithm was applied to airborne AOD data whose nominal
AOD errors were >> nominal CIMEL AOD errors): for very large AOD errors the "mean
of extrema" correction was appropriately limited at the lower bound, but there simply
was no analogous upper bound when the uncertainty limits of αf were up in the
stratosphere (induced by overly large AOD errors), and thus the corrected value of αf was
excessively large (and in consequence the η values were too small).
(ii) The αf' = f(αf) polynomial was moderately biased because it didn't include
sufficiently small fine-mode PSD standard-deviation cases in its envelope of uncertainty,
and because the original relationship (equation (7) of O'Neill et al. [2001]) was not
wavelength dependent when clearly it should be.
Resolution of Version 2.0 problems in Version 3.0
Version 3.0 of the spectral deconvolution algorithm differed from the Sept. 8
(Version 2.0) algorithm in the following ways:
(i) physical forcing was rendered "symmetrical" in Version 3.0 by applying it to the
upper as well as the lower physical bound of αf (the upper bound, αf,max,theoretical, being
spectrally dependent and ~ 3.5 as determined by Mie considerations).
(ii) new spectrally dependent coefficients of the parabola in equation (7) of O'Neill et al.
[2001] were employed. These are

a_upper = −0.22, b_upper = 10^(−0.2388) λ^(1.0275), c_upper = 10^(0.2633) λ^(−0.4683)
a_lower = −0.3, b_lower = 0.8, c_lower = 0.63
a = (a_lower + a_upper)/2, b = (b_lower + b_upper)/2 and c = (c_lower + c_upper)/2

where the indices "upper" and "lower" refer to the uncertainties in the coefficients (due to
uncertainty in the actual fine-mode model). The new uncertainty in αf' (which propagates
into the uncertainty in αf, η, etc.) follows from these expressions, viz.

Δαf' = [(a_upper − a_lower)/2] αf² + [(b_upper − b_lower)/2] αf + (c_upper − c_lower)/2.
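For concreteness, the coefficient expressions above can be coded directly. This is a Python sketch (not the AERONET C++ or the tauf_tauc.m Matlab implementation), and the interpretation of λ as being expressed in micrometres is my assumption, inferred from the 0.55 µm examples elsewhere in the memo.

```python
def parabola_coeffs(lam_um):
    """Spectrally dependent coefficients of the Version 3.0 replacement for
    equation (7) of O'Neill et al. [2001] (alpha_f' = a*alpha_f**2 +
    b*alpha_f + c), with their upper/lower uncertainty envelope.
    lam_um: wavelength, assumed to be in micrometres.
    """
    upper = (-0.22,
             10 ** -0.2388 * lam_um ** 1.0275,
             10 ** 0.2633 * lam_um ** -0.4683)
    lower = (-0.3, 0.8, 0.63)
    # nominal coefficients are the mid-points of the envelope
    mean = tuple((lo + up) / 2 for lo, up in zip(lower, upper))
    return mean, upper, lower

def delta_alpha_f_prime(alpha_f, lam_um):
    """Uncertainty in alpha_f' implied by the coefficient envelope."""
    _, (a_up, b_up, c_up), (a_lo, b_lo, c_lo) = parabola_coeffs(lam_um)
    return ((a_up - a_lo) / 2 * alpha_f ** 2
            + (b_up - b_lo) / 2 * alpha_f
            + (c_up - c_lo) / 2)
```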
Remaining problem in Version 3.0
The Version 3.0 algorithm does not account for rare cases where α > αf,max,theoretical
(usually associated with a serious artifact in one of the AOD channels). The solution
would mean forcing α to be = αf,max,theoretical. Rather than changing measurement values
(up to this point only inverted values have been modified), it was decided to simply accept
the infrequent occurrence of this situation (for which η > 1).
Figure 4 - Version 2.0 and Version 3.0 spectral deconvolution results for Egbert, Ontario,
Canada (λ = 500 nm). These results are for the same input data employed to produce
Figure 4 of O'Neill et al. [2003].
Some comparative results
Figure 4 above shows a comparison between Version 2.0 and Version 3.0 results. In
this case the changes are very small (as they also are relative to the Version 1.0 results
shown in Figure 4 of O'Neill et al. [2003]). Figure 5 below also shows a case with only
very small changes between the algorithms.
Figure 5 - Version 2.0 and Version 3.0 spectral deconvolution results for the Lanai,
Hawaii, site, Aug.-Sep. 2001, λ = 550 nm (daily averages).
Figure 6 below shows a case where there are slightly more significant changes
between the algorithms. Comparison with the Version 1.0 results of Figure 8 in O'Neill et
al. [2003] shows that the anomalous AOD and Angstrom results of that figure (where τf >
τa and αf < α) have appropriately disappeared. The modelled stochastic errors increase
moderately from Version 2.0 to Version 3.0, while the nominal αf and η values decrease
and increase respectively by a small amount. Both versions demonstrate the progressively
larger errors which one obtains as αf decreases towards unity (i.e. as one approaches
large fine-mode particles).
Figure 6 - Version 2.0 and Version 3.0 spectral deconvolution results during the first day
of the Québec smoke event of July 2002 at Egbert, Ontario, Canada (λ = 500 nm). These
results are for the same input data employed to produce Figure 8 of O'Neill et al. [2003].
It was found that only extreme cases were significantly affected by the limit of
αf,max,theoretical on αf. Figure 7 shows Version 2.0 versus Version 3.0 results for a case of
very large AOD input error (precisely the situation in which one has problems with
extremely large αf values induced by very large uncertainty bars). It can be seen that the
Version 3.0 αf values are much more stable (which is in itself a positive thing) but that
the η values do not change much except when the errors in αf are quite excessive
(measurements 1, 16 and 17, which are associated with anomalous artifacts in certain
bands).
Figure 7 - Version 2.0 and Version 3.0 spectral deconvolution results for some airborne
data provided by Santiago Grasso. The nominal input AOD error is 0.039 at a
wavelength of λ = 0.55 µm. The x axis represents a measurement-number index.
QA issues for Version 3.0
Data processing protocols typically include two or more levels which range from raw
data to averaged value-added products. One could view the QA issue in the case of the
spectral deconvolution algorithm as a choice between dynamic QA (where no points are
eliminated but an error estimate is given for all points) and "pass / no-pass" type of
filtering such as cloud screening. Below are some thoughts on each type.
Dynamic QA
The algorithm should be applied to level 1.0, 1.5 and 2.0 AOD data since it is intended to
discriminate coarse-mode from fine-mode AODs. In a very real sense it "rides" on the
AERONET QA already in place; one could certainly speak of level 1.0, 1.5 and 2.0
values of τf, τc and η.
It's not clear that a complementary QA process in the sense of a pass/no-pass filter is
necessary; the algorithm already provides a dynamic estimation of the stochastic error
(Δτf, Δτc, Δη) for every single AOD spectrum. There is not yet an analogue to this in
Oleg's inversion processing because there is no provision for a dynamic error estimate of
the derived products (and of course it is much more complicated to do). As well, the
physical-forcing modifications of Version 3.0 have eliminated virtually all cases of
non-physical values.
Pass / no-pass type of QA filter
If it is deemed essential to have a pass / no-pass type of QA filter, then possible candidates
would be a combination of the two conditions below. The filter thresholds selected below
represent a fairly liberal constraint while ensuring that extreme anomalies are eliminated:
- a threshold on the estimated stochastic error Δτf, Δτc or Δη (Δη < 0.5 would be a
reasonable choice);
- a threshold on the AOD polynomial regression error Δτa / τa (an indicator of how
distorted the AOD spectrum is); Δτa / τa < 0.3 would be a liberal choice.
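The two candidate conditions amount to a one-line filter; the following Python sketch uses the memo's suggested thresholds as defaults (the function and argument names are illustrative, not part of any AERONET code):

```python
def passes_qa(d_eta, d_tau_a_over_tau_a, eta_err_max=0.5, fit_err_max=0.3):
    """Candidate pass / no-pass QA filter:
    - estimated stochastic error in eta below a threshold
      (delta_eta < 0.5 suggested as a reasonable choice), and
    - AOD polynomial regression error delta_tau_a / tau_a < 0.3
      (an indicator of how distorted the AOD spectrum is).
    A spectrum passes only if both conditions hold.
    """
    return d_eta < eta_err_max and d_tau_a_over_tau_a < fit_err_max
```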
Relationship with Dubovik inversion outputs
The differences between the spectral deconvolution algorithm and the fine-mode /
coarse-mode optical depths from the Dubovik inversion are not an issue, since the current
Dubovik output is the equivalent of what the community calls SMF (sub-micron fraction),
as opposed to the spectral deconvolution algorithm output, which is essentially an FMF
(fine mode fraction) type of discrimination³. The former is a purely mechanical cutoff in
radius (which is fundamentally how mechanical discriminators work) while the latter is a
total-mode discrimination (arguably more physically fundamental in that the different
modes represent different physical origins). This means that τc,SM < τc,FM, so that τf,SM >
τf,FM and hence SMF > FMF (ηSM > ηFM). The new Dubovik inversion, which will base
the fine-mode / coarse-mode division on the minimum value of the (volume) particle size
distribution rather than the current 0.6 µm cutoff, will be more analogous to an FMF
type of division.
³ The spectral deconvolution approach is really spectral in nature (one assumes a priori properties of the
coarse-mode spectrum). This spectral approach is much more closely tied to the FMF than the SMF. In
terms of the notation in O'Neill et al. (2003), FMF = η.
Version 2.0
On or about Sept. 8, 2004, a new tauf_tauc.m version with a "physical_forcing" option
for eliminating η > 1 problems was delivered to AERONET. Details are given below.
Problems resolved with respect to the Version 1.0 algorithm
Under certain conditions the value of the ("monochromatic") Angstrom exponent (α)
exceeded the maximum value of αf permitted by equation (7) of O'Neill et al. (2001).
This automatically created a non-physical situation where the fine mode fraction (η) was
greater than unity (and the spectral derivative α' was, as a consequence, greater than αf').
These conditions usually corresponded to cases of thick, aged (large-particle) smoke for
which α' was large and α was small. The problem was fixed in a smoothly varying
fashion by implementing the "physical forcing" option described in the section
immediately below.
Resolution of Version 1.0 problems in Version 2.0
If any portion of the uncertainty bar of αf (computed from the stochastic error estimate
described in O'Neill et al., 2001) was lower than α, then a new value of αf was computed
as the mean of the upper extremum of the estimated αf uncertainty and α. This "mean of
extrema" (MOE) modification is represented by the dotted line in Figure 3.
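As a minimal sketch of the MOE rule just described (a Python stand-in for the Matlab logic; names are illustrative):

```python
def moe_forcing(alpha_f1, d_alpha_f, alpha):
    """Version 2.0 "mean of extrema" (MOE) physical forcing: if any
    portion of the alpha_f error bar lies below alpha (i.e.
    alpha_f(1) - delta_alpha_f < alpha), replace alpha_f by the mean of
    the upper extremum of its uncertainty bar and alpha. Sketch of the
    memo's description, not the tauf_tauc.m source.
    """
    if alpha_f1 - d_alpha_f < alpha:
        return 0.5 * ((alpha_f1 + d_alpha_f) + alpha)
    return alpha_f1
```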
Version 1.0
This is basically the algorithm described in O'Neill et al. (2001) and O'Neill et al. (2003),
and it was the first algorithm delivered to AERONET (to Ilya Slutsker).
References
1. O'Neill, N. T., O. Dubovik, and T. F. Eck, A modified Angstrom exponent for the
characterization of sub-micron aerosols, Appl. Opt., Vol. 40, No. 15, pp. 2368-2374, 2001.
2. O'Neill, N. T., T. F. Eck, A. Smirnov, B. N. Holben, and S. Thulasiraman, Spectral
discrimination of coarse and fine mode optical depth, J. Geophys. Res., Vol. 108, No.
D17, 4559, doi:10.1029/2002JD002975, 2003.