The following is a collection of correspondence and notes that we've had with the CERES science
team members over the past few years. Regular users of CERES data may find it useful.


*************On comparing ERBE and CERES fluxes and on known ERBE biases**********


In Section 5 of the attached paper, we compare fluxes using the same CERES data processed with
two algorithms: the ERBE-like (ES-4) algorithms (same as those used on ERBE) and those used to
produce the new SRBAVG product. Fig. 13 shows the seasonal cycle of SW and LW TOA flux for clear
and all-sky conditions. In the all-sky SW flux case, the amplitudes of the seasonal cycles are
quite similar (although there is a 1.8 Wm-2 difference).

For clear sky, the seasonal cycle in CERES SRBAVG SW flux is much more pronounced than in the
ERBE-like results: SRBAVG fluxes show a smoother variation with season than ES-4 fluxes, and SW TOA
flux maxima in November-December and April-May (associated with higher albedos in the Antarctic and
Arctic) are clearly evident in the SRBAVG results but hardly noticeable in the ERBE-like results.
Hope this is helpful.



...Comparing CERES ERBE-Like fluxes to the ERBE scanner fluxes is the only direct way to compare
since the algorithms are the same. The only other potential
issue might be orbit differences between NOAA-9 (is this what you used for ERBE?) vs Terra, but I'm
not sure if anyone has looked at the importance of this.  Seiji Kato in our group has been looking
at the poles a lot, but mostly with the newer CERES improved cloud/snow/ice determination (e.g. SSF,
CRS, SRBAVG products).  But he and Norm might have some relevant thoughts.
If you compare CERES SRBAVG SW flux with new ADMs, directional models, clear/cloud detection vs the
old ERBE SW fluxes, you have apples and oranges and all bets are off.



Now that we are ready to release SRBAVG Edition 2D in the next month, which solves the largest of
our diurnal cycle problems with geo data and the difficulties of constraining geo calibration with
CERES, we are working on pulling together an overall error budget that will compare ERBE and CERES.
The answer to your question depends strongly on the time/space scale at which you ask it.

For instantaneous fluxes: yes, ERBE ADM errors dominate for both LW and SW.
For daily mean fluxes: ADMs still dominate for SW, but are comparable to time sampling errors
(i.e., diurnal cycle biases) for LW.
For monthly mean regional results it depends on the region of interest: polar regions are dominated
by the ERBE-like inability to distinguish cloud from snow/ice, and by ADMs; stratus deck regions
are dominated by diurnal sampling errors.
For zonal means, ADM differences dominate in SW fluxes and diurnal sampling in LW fluxes.
For simple global differences over decades, the biggest single factor will be absolute calibration
(this is why overlapping data is so critical).

As you can see, there is no simple answer to your question. We will write all this up and give an
overall summary table by error source and time/space scale, but please don't expect a simple
generic answer to work: it won't.

************Motivations of merging geostationary 3 hourly data with the polar orbiters**********

We are also getting some very interesting new results on merging narrowband geostationary 3-hourly
data (to handle diurnal cycles more rigorously) with CERES broadband data. In all past geo data
sets on clouds and radiation, an EOF analysis at monthly to interannual time scales shows large
effects of the geo sampling patterns: the geo satellite rings show up as major EOFs. Sun-synchronous
orbits like CERES's, while they have no such geo EOF patterns, have always been suspect for diurnal
cycle biases and for climate change signals that shift because diurnal cycles change. By merging
geo and CERES data we hoped to eliminate both sets of problems: geo calibration, narrowband, and
systematic viewing angle aliasing, as well as the sun-synchronous orbit's systematic diurnal
sampling biases. While CERES had already successfully managed this merging for LW fluxes in the
current SRBAVG Edition 2 data product, SW had eluded us. We finally achieved the SW merging goal
this fall, and the CERES science team approved the new product for data production in November:
it should be available by Feb. 2006.
In our offline tests, the new merged geo/CERES data passed the following critical tests:
a) Addition of a 5% gain error in the geo visible channel calibration causes only a 0.1% change in
the final TOA SW flux in the merged CERES/geo data set.
b) EOFs of SW reflected flux patterns for the first 3 years of Terra data were compared with and
without the geostationary data correcting diurnal sampling biases. The EOF patterns and variance
explained looked identical for the first 10 EOFs; we did not see any evidence of the geo rings in
the merged geo/CERES data products. Additionally, at least from a qualitative examination of the
EOFs, we didn't see any difference between the CERES-only and CERES/geo EOF patterns or magnitudes.
This implies that while systematic diurnal cycles are important for the mean fields, they do not
appear to be very relevant to seasonal-to-interannual climate change. It may be that there are
some very subtle effects that we can look for, but the first 10 EOFs visually looked identical.
c) We also tested the merged geo/CERES fluxes by using one of the CERES satellites in the data
product (e.g. Terra at 10:30 LT) and the other satellite (e.g. Aqua at 1:30 LT) to test the
accuracy of the time-interpolated SW flux in the merged geo/CERES data product. The critical tests
were to look for biases as a function of cloud fraction, optical depth, solar zenith angle, etc.
The new product has eliminated the large biases we found in earlier attempts.

The next step is to write up papers on
- CERES/MODIS/MISR/SeaWiFS results: Norm Loeb will be leading this.
- the new SW merged geo/CERES 1 degree gridded data: Dave Doelling and Dave Young will be the leads
on this.
- a paper summarizing the CERES level 1 through level 3 data product accuracies, and comparing
ERBE/CERES capabilities: I will take the lead on that.
Please take note that the GEWEX Radiative Flux Assessment is underway. We have a new web site up
and running for data submissions/comparisons, and a second workshop will be held in Williamsburg
Feb 21-24 (a 2.5-day workshop; final dates will be set in about a week as the hotel negotiations
wrap up). Contact Laura Hinkelman (l.m.hinkelman@larc.nasa.gov) if you would like to get on the
flux assessment mailing list.

hope this is helpful!

**********ON THE IMPROVEMENTS OF EDITION 2D OVER EDITION 2C*****************
The new part is the public release of the SRBAVG Ed2D TOA and Surface fluxes (March 2000 through
Feb 2003 currently in the archive).  The data quality summaries will always give you the latest
details we have on all released data products: we don't release them until we have a data quality
summary to go along with each data product.   We are now working on extending this product out to
Oct 2005.  The main delay is wrestling 5 changing geostationary satellites into climate quality
records to help with the diurnal cycles.
The main advances in SRBAVG Ed2D over the earlier 2C include: new surface SW fluxes, constraining
the effect of geo calibration errors to less than 0.1% in the SW flux, and reducing biases as a
function of solar zenith and cloud fraction when merging the CERES and geostationary data sets.
Whenever you go to the ASDC data center and look at CERES data products, the Edition with the
highest number and then letter is our most recent major version. Edition 2 is a major improvement
over Edition 1, and less major changes show up in the letter increments. In general, we use the Edition
number to indicate a consistent family of data products.  The letters are to indicate improvements
in individual products within this family.  So you can always find the most recent version, as well
as its data quality summary, which will indicate the changes from previous versions as well as the
current understanding of accuracy.
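The version convention described above (Edition number for the product family, letter for improvements within it) lends itself to a simple programmatic check. This is only an illustrative sketch: the label format `Edition2D` is an assumption, so adjust the pattern to match the actual file or directory names in the archive.

```python
import re

def edition_key(label):
    """Parse a hypothetical label like 'Edition2D' into (2, 'D') so that
    tuples sort by Edition number first, then by letter increment."""
    m = re.match(r"Edition(\d+)([A-Z]?)$", label)
    if not m:
        raise ValueError(f"unrecognized edition label: {label!r}")
    return (int(m.group(1)), m.group(2))

# Highest Edition number wins first, then the highest letter within it.
editions = ["Edition1", "Edition2A", "Edition2C", "Edition2D"]
latest = max(editions, key=edition_key)  # -> "Edition2D"
```

An edition with no letter (e.g. "Edition1") parses to an empty string, which sorts before any letter, matching the convention that lettered releases supersede the bare edition.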
Hope this helps.


John et al:
Sorry we haven't been back to you sooner. We are in the middle of improving our CERES calibration
to adjust for some in-orbit contamination we discovered about a year ago. Our Rev 1 adjustment to
SW fluxes was step one in this process (see the CERES data quality summaries online with the data
products), and we are now working on a more first-principles correction than this simple
clear-ocean and all-sky SW correction factor. As we dig deeper into this, we are not expecting
large changes in the CERES Edition 2 Terra FM1 SW TOA fluxes, but we may get about a 0.5 W/m^2 LW
flux increase for LW(Jan 2004 thru Dec 2005) minus LW(Mar 2000 thru Dec 2003). This is currently
showing up as a daytime-only decrease in the current Edition 2 CERES TOA LW flux (it doesn't affect
nighttime). Note that the change in daytime-only flux for these two time periods is order 1 W/m^2,
but when you do the day/night average for global mean conditions this is reduced to ~ 0.5 W/m^2.
Looking at the Lyman, Willis, and Johnson paper, it looks to me like the best comparison metric may
be the global net flux change between Net(Jan04-Dec05) for the cooling period and Net(Mar00-Dec03)
for the previous warming period. We will have to use monthly deseasonalized fluxes to make sure we
don't alias seasonal cycle error in from the missing Jan/Feb of 2000. From the paper's figures 1
and 3, and its discussion of errors, I estimated that it would predict a global net flux change of
Global Net TOA Flux (2004-2005) minus Global Net TOA Flux (2000-2003) = -1.55 W/m^2 with a 95%
confidence of +/-0.56 W/m^2. So the 95% confidence range for the change in TOA net flux would be
-1.0 to -2.1 W/m^2. Note the uncertainty is somewhat larger than in the Lyman et al. paper. I
determined it using the sigma of OHCA in Figure 3 for the start and end of each time interval, and
assumed independence, so that the error in a difference of beginning/end OHCA is just
sqrt(variance of OHCA at time t2 plus variance of OHCA at time t1). Then, since we are taking a
radiative flux anomaly in two different time periods, the difference of fluxes also adds variance
in a similar fashion. Note that the uncertainty I got for the flux in each time period was
0.45 W/m^2, even though the standard error of OHCA is larger for 2000-2003: this is because you
divide by a 3-year period instead of 2, and because the 2003 error is the same for both time
periods.
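A minimal sketch of that error propagation, assuming (as above) independent OHCA errors at the endpoints of each period. The sigma values below are placeholders, not the numbers read off Lyman et al.'s Figure 3:

```python
import math

SECONDS_PER_YEAR = 3.15e7   # seconds in a year
EARTH_AREA = 5.1e14         # m^2 of Earth surface area

def period_flux_sigma(sigma_ohca_t1, sigma_ohca_t2, years):
    """1-sigma uncertainty (W/m^2) of the mean net flux over a period,
    from independent OHCA errors (Joules) at the period's endpoints."""
    sigma_joules = math.sqrt(sigma_ohca_t1**2 + sigma_ohca_t2**2)
    return sigma_joules / (years * SECONDS_PER_YEAR * EARTH_AREA)

# Placeholder endpoint uncertainties in Joules (hypothetical values):
s2000, s2003, s2005 = 1.5e22, 1.0e22, 1.0e22

sigma_warming = period_flux_sigma(s2000, s2003, years=3)  # 2000-2003 period
sigma_cooling = period_flux_sigma(s2003, s2005, years=2)  # 2004-2005 period
# Differencing the two period fluxes adds variance once more:
sigma_change = math.sqrt(sigma_warming**2 + sigma_cooling**2)
```

Dividing by 3 years instead of 2 is why the longer period can end up with a comparable flux uncertainty despite larger endpoint OHCA errors.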
I got the flux difference and error bound using figures 1 and 3 in Lyman et al. as follows:
A) Global Net Flux Change: Net(05-03) minus Net(03-00)
The global net radiative flux needed to provide the 2003 to 2005 cooling is as follows:
- where, as in the paper, OHCA = Ocean Heat Content Anomaly in 10^22 Joules.
- Also note that there are 5.1x10^14 square meters of Earth surface area and 3.15x10^7 seconds in
a year.
So the -3.2 x 10^22 J OHCA change from 2003 to 2005 gets converted to an equivalent radiative flux
of -3.2 x 10^22 J / 2 yrs / 5.1x10^14 m^2 / 3.15x10^7 sec per yr = -1.0 W/m^2, as in the Lyman
et al. paper.
For the change from 2000 to 2003 (a time separation of mid-2000 to mid-2003, or 3 years), from
Figure 1, the equivalent calculation is
Radiative net flux = +3.5 x 10^22 J / 3 yrs / 5.1x10^14 m^2 / 3.15x10^7 sec per yr = +0.7 W/m^2.
Then the difference in global net flux between these two periods becomes:
Net(05-03) - Net(03-00) = -1.0 - 0.72 = -1.7 W/m^2
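As a sanity check, the arithmetic above can be reproduced directly; this minimal sketch just restates the unit conversion in code, using the OHCA changes quoted from the paper's figures:

```python
SECONDS_PER_YEAR = 3.15e7   # seconds in a year
EARTH_AREA = 5.1e14         # m^2 of Earth surface area

def ohca_to_flux(delta_ohca_joules, years):
    """Convert an ocean heat content change (J) over a period (years)
    into an equivalent global-mean radiative flux (W/m^2)."""
    return delta_ohca_joules / (years * SECONDS_PER_YEAR * EARTH_AREA)

cooling = ohca_to_flux(-3.2e22, 2)   # 2003 -> 2005: about -1.0 W/m^2
warming = ohca_to_flux(+3.5e22, 3)   # 2000 -> 2003: about +0.7 W/m^2
net_change = cooling - warming       # about -1.7 W/m^2
```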
There is a discretization subtlety here, however, that we need to be very careful about. The ocean
heat storage is averaged over a year to reduce sampling noise and to eliminate any issues with the
seasonal cycle. So the cooling is quoted as the difference between the annual average heat content
for 2005 minus the annual average heat content for 2003. I need some further details on how the
time averaging is done. Here is an example of why this is critical:
Ideally, to get heat storage for 2005, we would measure the temp/salinity throughout the entire
ocean at 12am Jan 1, 2005.  Then we would do it again at 12pm Dec 31, 2005.  We would difference
the OHCA between these two times, divide by area of the earth, and number of seconds in the year
2005, and then we have exactly the annual average net flux of heat into the ocean that occurred
during the entire year of 2005.  We would not make any other measurements of OHCA during the year:
only at the beginning and the end. Note that this is the opposite of measuring net radiation at the
TOA, where measuring only the beginning and end is useless: this is simply the dramatic difference
between a flux measurement and a heat content change between two times.
But we don't have a perfect ocean observation at time t2 and t1.  So to average out ocean eddies,
sampling noise, instrument noise, etc, time compositing must be done.  What I don't know is the
detail of how this is done.  Josh and John: hopefully you can illuminate me.
Here is my sense of the problem.  Imagine that the global net radiation for the first 6 months of
2005 is a constant + 2 W/m^2. And that the global net radiation for the last 6 months of 2005 is
exactly - 2 W/m^2.  i.e. we have a switch of sign half way through the year.  In this case, the
annual mean net radiation = 0.  The OHCA anomaly from end of the year minus beginning of the year
is exactly zero, and the perfect ocean observing system would predict that net radiation is zero.  
But now imagine that I had taken temperatures every 10 days and done an annual average of
temperature over the year. In this case, the ocean will warm steadily for the first 6 months and
then cool back to the starting temperature over the next 6 months. The average ocean temperature
for the year will be anomalously high. But there was NO NET FLUX OF HEAT INTO THE OCEAN IN 2005,
even though the average ocean temperature was high all year. Note that if we took ocean data every
10 days and, instead of averaging temperature, determined a flux into the ocean for each 10 days
using OHCA(t+10 days) - OHCA(t), then when we averaged the 10-day ocean heat fluxes over the year,
we would get the correct answer: zero flux into the ocean. But this is a problem, because the
error in OHCA at any time t is a constant (at least if we imagine an ARGO array where all floats
take simultaneous profiles). Call this noise sigma(OHCA). The error in flux from this noise comes
from the difference of two such OHCA values, and statistics tells us we have an error of
sqrt(2) sigma(OHCA) / earth surface area / seconds in 10 days. If we now average 36 such 10-day
fluxes to get the annual mean, we have an error in the annual mean net flux into the ocean of
sqrt(2) sigma(OHCA) / earth surface area / seconds in 10 days / sqrt(36), or 1/6th the error in a
single 10-day period. But if instead we take just the last 10-day period and the first 10-day
period and difference their ocean heat content, the flux error drops by a factor of 36, not
sqrt(36). So I'm
assuming that the data in figures 1, 2, and 3 only used single fields at the beginning and end of
the periods being averaged over to get anomalies or changes in ocean heat content.  Is that true?  
If so, then I think the comparison is relatively straightforward, and the time interval over which
we should average radiative fluxes is exactly the same as OHCA(t2) - OHCA(t1). If not, we have
some potential problems, as indicated below:
a) The seasonal cycle could alias into the observing system's sampling changes over time. In
particular, as ARGO improves its sampling dramatically from Jan 2003 to Dec 2005, the first part of
each seasonal cycle in those years (Jan-Mar) will have far fewer observations than the last part of
each seasonal cycle (Oct-Dec). Averaging heat storage then has the potential to in effect alias the
seasonal cycle into the annual mean. Note that this would not occur if the observing system
sampling were uniform in time. (This is a bit like, in the satellite business, adding more
satellites over time at different times of the diurnal cycle of radiation: it would also cause
aliasing unless we explicitly account for the diurnal cycle in each region of the earth and month
of the year.) I thought at first that the anomaly map in figure 2 would rule this possibility out:
the regional anomalies don't look like seasonal-cycle hemispheric patterns. BUT: the color scale is
so coarse (to cover large 50 W/m^2 regional anomalies) that systematic 2 W/m^2 seasonal effects
won't show up. One should also plot this with -5 to +5 or -2 to +2 color scales as a quick and
dirty check, but better yet, plot a pdf for the northern hemisphere vs southern hemisphere ocean.
Another way to check this might be to look at the altimeter sampling errors for systematic errors,
not just standard error.
b) Note that if the net flux into or out of the ocean were constant for an entire year, then
simple averaging of temperatures would be OK for determining changes in OHCA from year to year.
But the radiation data tells us that this is not the case. This suggests that we may want to
determine a mean ocean heat storage seasonal cycle, and determine the potential sampling errors
that way, if the altimeter cannot sort it all out.
c) So under what conditions would averaging temperatures over the whole year give the right annual
mean OHCA? Under the conditions of (i) constant net radiation during the year and (ii) a symmetric
seasonal cycle of ocean heat storage, so that warming biases during one part of the year negate
cooling biases in the other part. Potential asymmetries here include both the earth-sun distance
change over the year and the asymmetric NH and SH ocean mass.
d) So how would you get a mean seasonal cycle? Average all ocean heat content data for Jan of all
years, then for Feb of all years, etc., but only pick years where sampling doesn't change too
greatly from month to month; then determine monthly fluxes. Seasonal time resolution will probably
be too large a time-averaging period for accuracy.
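The endpoint-difference versus flux-averaging argument in the 10-day example above can be checked numerically. A sketch, treating sigma(OHCA) as an arbitrary constant and (as in the text) treating the 36 10-day fluxes as independent:

```python
import math

sigma_ohca = 1.0            # OHCA noise at any instant (arbitrary units)
seconds_10d = 10 * 86400    # seconds in one 10-day period
n = 36                      # ~one year of 10-day periods

# Error of a single 10-day flux: the difference of two noisy OHCA values.
err_10day = math.sqrt(2) * sigma_ohca / seconds_10d

# Averaging 36 such fluxes beats the error down by sqrt(36) = 6 ...
err_avg_fluxes = err_10day / math.sqrt(n)

# ... but differencing only the year's endpoints spans 36x the time,
# so the error drops by the full factor of 36.
err_endpoints = math.sqrt(2) * sigma_ohca / (n * seconds_10d)
```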
Tak should be back sometime next week, and in a week or two we should have a better look at what we
think the CERES fluxes are telling us, once we determine the best time intervals to compare to.
hope this helps move the discussion along,

***************ON Global Albedo and inconsistencies with Earthshine fields******************
... The consistency of CERES/SeaWiFS/MODIS cloud property anomalies in 2000-2005 shows that a
consistent story is beginning to emerge on SW albedo at monthly to interannual time scales, and
that Earthshine shows no correlation with it.  We are also now
at the point of starting to determine the requirements for record length and accuracy/stability to
determine decadal-scale trends in albedo from the SeaWiFS/CERES comparison. MODIS still has
calibration shifts that are too large (order 1%), and ISCCP appears to have about 1% calibration
variations which, while too large for decadal change, are much smaller than the original estimate
of 5% uncertainty for trends. SeaWiFS uses satellite-pitchover lunar views to constrain the
stability of its 0.4 to 0.7 micron ocean color channels to an estimated 0.1% over the mission life
(currently 1998 to 2005). MODIS cannot do as well because we don't pitch the Terra satellite over
to observe the moon under the same instrument observing conditions.
I can also have Norm Loeb send you a summary he put together for the Earthshine author Palle on the
recent results I will show today: Norm led this work.

Enric,
We have several new results which I think you will find very interesting. I will be presenting
these during the poster session at AGU on Thursday. Bruce Wielicki will also show some of these
and other results during his invited oral presentation on Friday. I've taken the liberty of copying
this email to others whom I think might also be interested.
The attached pdf shows deseasonalized monthly anomalies from CERES Terra, CERES Aqua, SeaWiFS, MODIS
and ISCCP from 2000 through June 2005. A deseasonalized monthly anomaly is determined by
differencing the average in a given month from the average of all years of the same month. The last
slide shows results of direct radiance comparisons between CERES, MISR and MODIS during the first
five years of Terra.
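The deseasonalizing step described above can be sketched as follows, assuming a monthly time series keyed by (year, month); the values below are toy numbers, not CERES data:

```python
from collections import defaultdict

def deseasonalize(monthly):
    """monthly: dict mapping (year, month) -> value.  Returns anomalies:
    each month's value minus the multi-year mean of that calendar month."""
    by_month = defaultdict(list)
    for (year, month), value in monthly.items():
        by_month[month].append(value)
    climo = {m: sum(vals) / len(vals) for m, vals in by_month.items()}
    return {(y, m): v - climo[m] for (y, m), v in monthly.items()}

# Toy example: two Januaries at 100 and 102 give anomalies -1 and +1.
anoms = deseasonalize({(2000, 1): 100.0, (2001, 1): 102.0})
```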

        Results from all instruments are consistent with CERES Terra. The excellent agreement
between CERES Terra and SeaWiFS is especially noteworthy: monthly anomalies are consistent to
0.3 W m-2 (1σ), annual anomalies are consistent to 0.13 W m-2 (1σ), and “trends” (i.e., regression
slopes) are consistent to 0.205 +/- 0.47 W m-2 per decade. SeaWiFS is the best calibrated earth-viewing
instrument available that I know of. Interestingly, none of the instruments see the Earthshine and
GOME trends shown in the paper you sent.

Slide 1
Tropical (30S-30N) ocean monthly anomalies of CERES Terra shortwave (SW) radiative flux and SeaWiFS
Photosynthetically Active Radiation (PAR) reaching the ocean surface for March 2000 through June
2005. PAR is defined as the quantum energy flux from the Sun in the spectral range 400-700 nm,
expressed in Einstein/m2/day. SeaWiFS is an eight-band visible and near-infrared scanning radiometer
designed to have high radiometric sensitivity over oceans. The SeaWiFS team uses monthly lunar
calibrations to monitor its on-orbit radiometric stability. SeaWiFS top-of-the-atmosphere radiances
were stable to better than 0.07% during the first six years of the mission (Eplee et al. 2004:
“SeaWiFS Lunar Calibration Methodology”, Earth Observing Systems IX, edited by William L. Barnes,
James J. Butler, Proc. of SPIE Vol. 5542 (SPIE, Bellingham, WA, 2004) · 0277-786X/04/$15 ·
doi: 10.1117/12.556408).
PAR is an estimate of the 400-700 nm radiation reaching the surface, and CERES SW flux is an
estimate of the outgoing flux. For most changes in cloudiness the two should be related: increasing
broadband reflectance (more cloud) means less PAR. The two are anti-correlated, as shown in the
figure. When plotted against one another (bottom figure), the correlation is excellent (r-square of
0.93, correlation coefficient of 0.964).

Slide 2
Tropical (30S-30N) ocean monthly anomalies of CERES Terra SW flux and SeaWiFS PAR after scaling the
PAR anomalies by -6.575, the slope of the SW flux vs PAR anomaly regression in Slide 1 (bottom).
Scaling the PAR anomalies in this manner essentially puts them on the same scale as the CERES SW
flux anomalies. As indicated in the figure, monthly anomalies from these two records agree to
0.3 W m-2 (1σ), a factor of 3.7 smaller than the variability in the monthly anomalies. Neither
record shows a significant trend during the 5 years. The “trends” are consistent to
0.205 +/- 0.47 W m-2 per decade.
The agreement between CERES and SeaWiFS is really quite spectacular. To my knowledge, this is the
closest two earth-reflected radiation data sets have ever come.
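The slope scaling used in Slide 2 can be sketched as follows. The data here are toy anti-correlated anomalies, not the actual CERES/SeaWiFS records; the real slope (-6.575) comes from the regression on those records:

```python
def linear_fit(x, y):
    """Least-squares slope and intercept of y versus x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Toy anti-correlated anomalies standing in for PAR and SW flux:
par_anom = [1.0, -0.5, 0.3, -0.8, 0.0]
sw_anom = [-6.5, 3.2, -2.0, 5.3, 0.1]

slope, intercept = linear_fit(par_anom, sw_anom)
# Put the PAR anomalies on the SW-flux scale, as done in Slide 2:
scaled_par = [slope * p for p in par_anom]
```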

Slide 3
Annual anomalies in CERES Terra SW flux and SeaWiFS PAR (in W m-2) determined by averaging the
monthly anomalies in Slide 2 by year. Again, there is excellent agreement between the two datasets.
The annual CERES and SeaWiFS anomalies are consistent to 0.13 W m-2 (1σ). As with Earthshine, the
largest anomaly occurs in 2003. However, CERES and SeaWiFS observe a negative anomaly of
0.55 W m-2, whereas Earthshine sees a positive anomaly of approximately 4 W m-2 in 2003.
Slides 4 and 5
        Tropical (Slide 4) and global (Slide 5) monthly anomalies in SW TOA flux for CERES Terra
and CERES Aqua for August 2002 through March 2005. CERES Terra and Aqua monthly anomalies agree to
0.4 W m-2 (1σ). Curiously, the agreement between CERES and SeaWiFS is somewhat better than between
CERES Terra and CERES Aqua.

Slide 6
Tropical and global monthly anomalies in SW TOA flux for CERES Terra and ISCCP between March 2000
and December 2004. ISCCP monthly anomalies are much noisier than CERES, particularly between 2000
and 2002. CERES Terra and ISCCP monthly anomalies agree to 1 W m-2 (1σ), roughly the same magnitude
as the variability in the monthly anomalies. Agreement is much better from 2002 onwards. None of
the records shows a significant trend (the increase in ISCCP global anomalies is not significant
at the 95% confidence level). The difference between the CERES and ISCCP slopes is significant for the
global results (2.5 W m-2 per decade), but not for the tropical results (1.4 W m-2 per decade).
Compare these differences with the CERES-SeaWiFS slope difference of 0.205 W m-2 per decade.
Note that ISCCP relies heavily on Geostationary instruments which are primarily designed for
weather applications and therefore do not provide climate accuracy like SeaWiFS, MODIS, MISR or
CERES. Therefore, the differences in Slide 6 are not surprising. When I showed these results to
Zhang (of ISCCP) he was actually quite encouraged. Rossow was aware of the problems in the earlier
part of the record. He noted that it may have something to do with changing sampling, but he didn’t

Slide 7
        Tropical and global annual anomalies in CERES Terra and ISCCP SW flux determined by
averaging the monthly anomalies in Slide 6 by year. As with CERES and SeaWiFS, ISCCP does not show
a huge increase in 2003 like Earthshine.

Slide 8
        Monthly anomalies in SW TOA flux from CERES and cloud fraction from MODIS for tropics and
global. These two records track each other extremely well. Cloud fraction explains most of the
variance in SW TOA flux (60%-85%).

Slide 9
        Radiometric stability of CERES and MISR relative to MODIS. This figure shows that the
relative calibration of CERES, MODIS and MISR Terra has been stable to 1% during the first 5 years.
That is, none of the instruments showed any calibration changes greater than 1% relative to one
another during this period. These results are based on direct coincident radiance comparisons
between the three instruments (which are all on the Terra spacecraft).
See you at AGU,

Other Comments on Data