Atmospheric variability as a function of scale has been divided into various dynamical regimes with alternating increasing and decreasing fluctuations: weather, macroweather, climate, macroclimate, and megaclimate. Although a vast amount of data is available at small scales, the larger picture is not well constrained due to the scarcity and low resolution of long paleoclimatic time series. Using statistical techniques originally developed for the study of turbulence, we analyse the fluctuations of a centimetric-resolution dust flux time series from the EPICA Dome C ice core in Antarctica that spans the past 800 000 years. The temporal resolution ranges from annual at the top of the core to 25 years at the bottom, enabling the detailed statistical analysis and comparison of eight glaciation cycles and the subdivision of each cycle into eight consecutive phases. The unique span and resolution of the dataset allow us to analyse the macroweather and climate scales in detail.

We find that the interglacial and glacial maximum phases of each cycle showed particularly large macroweather–climate transition scale

We hypothesize that dust variability at larger (climate) scales is predominantly driven by slow changes in glaciers and vegetation cover, whereas at small (macroweather) scales atmospheric processes and changes in the hydrological cycle are the main drivers.

For each phase, we quantified the drift, intermittency, amplitude, and extremeness of the variability. Phases close to the interglacials (1, 2, 8) show low drift, moderate intermittency, and strong extremes, while the “glacial” middle phases 3–7 display strong drift, weak intermittency, and weaker extremes. In other words, our results suggest that glacial maxima, interglacials, and glacial inceptions were characterized by relatively stable atmospheric conditions but punctuated by frequent and severe droughts, whereas the mid-glacial climate was inherently more unstable.

Over the late Pleistocene, surface temperature variability is strongly modulated by insolation, both at orbital (Jouzel et al., 2007) and daily timescales. In between these two scales, temperature variability has been shown to scale according to power law relationships, thus evidencing a continuum of variability at all frequencies (Huybers and Curry, 2006). However, although a vast amount of high-resolution data exist for modern conditions, our knowledge of climatic variability at glacial–interglacial timescales is usually limited by the lower resolution of paleoclimatic archive records, thus restricting high-frequency analyses during older time sections. Previous analyses using marine and terrestrial temperature proxies from both hemispheres suggest a generally stormier and more variable atmosphere during glacial times than during interglacials (Ditlevsen et al., 1996; Rehfeld et al., 2018).
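The power-law (scaling) behaviour invoked here can be checked, at least crudely, by fitting a single series' spectrum on log–log axes. The sketch below is only an illustration on synthetic data, with our own function name; it is not the analysis pipeline of this paper, and single-series periodograms are in any case very noisy, a point discussed later.

```python
import numpy as np

def spectral_exponent(series, dt=1.0):
    """Estimate the spectral exponent beta, assuming E(f) ~ f^(-beta).

    Least-squares fit of log E(f) against log f over the positive
    frequencies of the periodogram.  Illustrative only: real analyses
    average spectra over many segments or realizations to tame the
    large scatter of single-series periodograms.
    """
    series = np.asarray(series, dtype=float)
    spec = np.abs(np.fft.rfft(series - series.mean())) ** 2
    freq = np.fft.rfftfreq(len(series), d=dt)
    f, e = freq[1:], spec[1:]          # drop the zero frequency
    slope, _ = np.polyfit(np.log(f), np.log(e), 1)
    return -slope

# For Gaussian white noise the spectrum is flat, so the estimate
# should come out near 0.
rng = np.random.default_rng(0)
beta = spectral_exponent(rng.standard_normal(4096))
```

For a strongly persistent signal such as a random walk, the estimate is much larger, illustrating how the exponent separates regimes with growing versus cancelling fluctuations.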

One of the difficulties in characterizing climate variability is that ice
core paleotemperature reconstructions rapidly lose resolution as we
move to the bottom of the ice column. Figure 1 shows this visually for the
EPICA Dome C Antarctic ice core temperature proxy (5787 measurements in
all); the curve becomes noticeably smoother as we move back in time. In
terms of data points, the most recent 100 kyr period has more than 3000
points (

Temperature (blue) and dust flux (red) from the EPICA Dome C ice core (Jouzel et al., 2007; Lambert et al., 2012a). The dust flux time series has 32 000 regularly spaced points (25-year resolution); the temperature series has 5752 points. The temperature data are irregularly spaced and lose resolution as we go back into the past (number of temperature data points in successive ice ages: 3022, 1117, 521, 267, 199, 331, 134, 146). In both cases we can make out the glacial cycles, but they are at best only quasi-periodic.

Fluctuation analysis (Lovejoy, 2017; Lovejoy and Schertzer, 2013; Nilsen et al., 2016) gives a relatively simple picture of atmospheric temperature variability (Fig. 2). The figure shows a series of regimes each with variability alternately increasing and decreasing with scale. From left to right we see weather-scale variability, in which fluctuations tend to persist, building up with scale (they are unstable) and increasing up to the lifetime of planetary structures (about 10 d). This is followed by a macroweather regime with fluctuations tending to cancel each other out, decreasing with scale and displaying stable behaviour. In the last century, anthropogenically forced temperature changes (mostly from greenhouse gases) dominate the natural (internal macroweather) variability at scales longer than about 10–20 years. The figure shows that in pre-industrial periods, the lower-frequency climate regime starts somewhere between 100 and 1000 years (the macroweather–climate transition scale

A composite showing root mean square (rms) Haar fluctuations (

We focus on the EPICA Dome C dust flux record, which has a 55 times higher resolution than the deuterium record, including high resolution over even the oldest cycle (Lambert et al., 2012a, Fig. 1). Antarctic dust fluxes are well correlated with temperature at orbital frequencies (Lambert et al., 2008; Ridgwell, 2003). But the fluxes are also affected by climatic conditions at the source and during transport (Lambert et al., 2008; Maher et al., 2010). The dust data used here can therefore be thought of as a more “holistic” climatic parameter that includes not only temperature changes but describes atmospheric variability as a whole (including wind strength and patterns and the hydrological cycle).

In order to proceed to a further quantitative analysis of the types of statistical variability and of the macroweather–climate transition scale, we need to make some definitions. A commonly used way of quantifying fluctuations is Fourier analysis, which quantifies the contribution of each frequency range to the total variance of the process. However, the interpretation of the spectrum is neither intuitive nor straightforward (Sect. 2.3). The highly non-Gaussian spikiness of both the dust flux and its logarithm (e.g. Fig. 3b, c) implies strong – but stochastic – Fourier space spikes. Indeed, Lovejoy (2018) found that the probability distributions of spectral amplitudes can themselves be power laws. This has important implications for interpreting spectra, especially those estimated from single series (“periodograms”): if the spectral amplitudes are highly non-Gaussian, then we will typically see strong spectral spikes whose origin is purely random. This makes it very tempting to attribute quasi-oscillatory processes to what are in fact random spectral peaks. It therefore makes sense to consider the real-space (rather than Fourier space) variability, i.e. the fluctuations. The problem here is that the spectrum is a second-order statistical moment (the spectrum is the Fourier transform of the autocorrelation function). While second-order moments are sufficient for characterizing the variability of Gaussian processes, in the more general and usual case – especially with the highly variable dust fluxes – we need to quantify statistics of higher orders, in particular the higher-order statistics that characterize the extremes. Here, we will use two simple concepts to describe the variability and intermittency (or spikiness) of the data.
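The real-space alternative can be illustrated with a minimal sketch: the fluctuation at a given scale is taken, Haar-style, as the difference between the means of the second and first halves of an interval, and comparing the root mean square and the mean of the absolute fluctuations gives a simple intermittency indicator (for Gaussian fluctuations the ratio is √(π/2) ≈ 1.25, and it grows with spikiness). The function names and details are our own illustrative choices:

```python
import numpy as np

def haar_fluctuations(x, scale):
    """Absolute Haar fluctuations of a series at an (even) scale,
    measured in samples.

    Each fluctuation is the mean over the second half of a window
    minus the mean over the first half; a sketch of the real-space
    fluctuation analysis described in the text.
    """
    x = np.asarray(x, dtype=float)
    half = scale // 2
    n = len(x) // scale
    windows = x[:n * scale].reshape(n, scale)
    return np.abs(windows[:, half:].mean(axis=1)
                  - windows[:, :half].mean(axis=1))

def intermittency_ratio(x, scale):
    """rms / mean of the absolute Haar fluctuations: ~ sqrt(pi/2)
    (about 1.25) for Gaussian fluctuations, larger for spiky data."""
    f = haar_fluctuations(x, scale)
    return np.sqrt((f ** 2).mean()) / f.mean()
```

For white noise the ratio stays near the Gaussian value at every scale; a ratio that remains well above it signals intermittency that second-order (spectral) statistics alone cannot capture.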

The theoretical framework that we use in this paper is that of scaling, multifractals, and the outcome of decades of research attempting to understand turbulent intermittency. Intermittent, spiky transitions – characterized by different scaling exponents for different statistical moments – turn out to be the generic consequence of turbulent cascade processes. Although the cascades are multiplicative, the extreme probabilities generally turn out to be power laws (Mandelbrot, 1974; Schertzer and Lovejoy, 1987), not log-normals (as was originally proposed by Kolmogorov, 1962). The analyses are based on scaling regimes and their statistical characteristics. Because scaling is a symmetry (in this case invariance of exponents under dilations in time), in a dynamical regime in which two different components – such as temperature and dust – are strongly coupled parts of the system, each may have different scaling properties but both should respect the scale symmetry including the transition scale at which the symmetry breaks down. Therefore, the broad conclusions of our dust flux analyses – scaling regimes and their break points and stability or instability – are expected to be valid for the more usual climate parameters including the temperature. Although it is beyond our present scope, we will explore the scale-by-scale relationship between EPICA dust fluxes and temperatures in a future publication.
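As a toy illustration of how multiplicative cascades generate spiky, intermittent fields, consider the following minimal discrete cascade. The log-normal weights and the parameter values are our own illustrative choices (chosen so the field is conserved on average), not a model fitted to the dust data:

```python
import numpy as np

def multiplicative_cascade(levels, rng=None):
    """Discrete multiplicative cascade: repeatedly halve each interval
    and multiply the two halves by independent random weights.

    The log-normal weights have mean 1 (mu = -sigma^2/2), so the field
    is conserved on average while developing strong spikes; a toy
    illustration of the turbulent cascade processes cited in the text.
    """
    rng = rng or np.random.default_rng()
    field = np.ones(1)
    for _ in range(levels):
        # one weight per child interval at this level
        weights = np.exp(rng.normal(-0.05, np.sqrt(0.1), 2 * len(field)))
        field = np.repeat(field, 2) * weights
    return field
```

After a dozen or so cascade levels, the resulting series is dominated by a few extreme spikes, qualitatively resembling the dust flux record in Fig. 3.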

The basic tool we use to characterize variability in real space of a series

We can characterize the fluctuations by their statistics. For example, by
analysing the whole dataset using intervals of various lengths, we can thus
define the variability as a function of scale (i.e. interval length). If
over a range of timescales

More generally, we can consider other statistical moments of the
fluctuations, the “generalized structure functions”,

A simple way to quantify the intermittency is thus to compare the mean and root mean square (rms) Haar fluctuations:

For theoretical reasons (Lovejoy and Schertzer, 2013; Schertzer and Lovejoy, 1987), it turns out that the intermittency near the mean (

While the mean-to-rms ratio is an intuitive statistic, it does not give a
direct estimate of

Although spectra may be familiar, their physical interpretations are
nontrivial, a fact that was underscored in Lovejoy (2015). In a scaling regime – a good approximation to the macroweather and climate regimes discussed here – the spectrum has a power-law form (Eq. 5), where the spectral exponent

But what does low-frequency or high-frequency “dominance” mean physically?
For this, it is easier to consider the situation in real space using
fluctuations; the simplest relevant fluctuations are the Haar fluctuations

The dust flux data used in this study are based on a linear combination of insoluble particles, calcium, and non-sea-salt calcium concentrations (Lambert et al., 2012a). Because missing-data gaps in the three original datasets were linearly interpolated prior to the PCA (principal component analysis), high-frequency variability can sometimes be underestimated in short sections that feature a gap in one of the three original datasets. This occurs in about 25 % of all dust flux data points, although half of those are concentrated in the first 760 m of the core (0–43 kyr BP), when an older, less reliable dust-measuring device was used. Below 760 m these occurrences are evenly distributed and do not affect our analysis. Due to the sometimes slightly underestimated variability, the analysis shown here is a conservative estimate (Lambert et al., 2012a).
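The preprocessing described above – linear interpolation across gaps followed by a PCA of the three records – can be sketched as follows. This is our own minimal illustration, not the published procedure of Lambert et al. (2012a); in particular, the standardization step and function names are assumptions:

```python
import numpy as np

def first_component(records):
    """First principal component of several aligned proxy records.

    `records` is a (n_series, n_samples) array that may contain NaN
    gaps; gaps are linearly interpolated before the PCA, mirroring the
    preprocessing described for the dust flux stack (illustrative
    sketch only).
    """
    filled = []
    for r in np.asarray(records, dtype=float):
        idx = np.arange(len(r))
        good = ~np.isnan(r)
        filled.append(np.interp(idx, idx[good], r[good]))
    z = np.array(filled)
    # standardize each record (an assumption, not the published recipe)
    z = (z - z.mean(axis=1, keepdims=True)) / z.std(axis=1, keepdims=True)
    # project onto the eigenvector with the largest eigenvalue
    vals, vecs = np.linalg.eigh(np.cov(z))
    return vecs[:, -1] @ z
```

The sketch also makes the caveat in the text concrete: wherever one record's gap is interpolated, the first component is smoother than the underlying signal, so high-frequency variability in such sections is underestimated.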

Unlike water isotopes that diffuse and lose their temporal resolution in the
bottom section of an ice core at high pressures and densities, the
relatively large dust particles diffuse much less and have been used to
estimate the dust flux over every centimetre of the 3.2 km long EPICA core
(298 203 valid data points; Lambert et al., 2012a). The temporal resolution of this series varies from 0.81 to 11.1 years (the averages over the most recent and the most ancient 100 kyr respectively). The worst temporal resolution of 25 yr cm

Unlike water isotopes, which primarily record temperature, polar dust flux records cannot be assigned to one particular atmospheric variable. At any given moment, the amount of dust deposited in East Antarctica will depend on the size and vegetation cover of the source region (mostly Patagonia for East Antarctic dust; Delmonte et al., 2008), on the amount of dust available in the source region (which can depend on the presence of glaciers), on the strength of the prevailing winds between South America and Antarctica, and on the strength of the hydrological cycle (more precipitation will wash out more dust from the atmosphere; Lambert et al., 2008). Over large scales it is thought that temperature-driven moisture condensation may be the major process driving low-frequency variability (Markle et al., 2018), although that may not be true everywhere (Schüpbach et al., 2018). High- and low-frequency variability in the dust flux record is likely driven by different processes. For example, dust source conditions related to glaciers and vegetation cover may not have influenced high-frequency variability due to their relatively slow rate of change. On the other hand, volcanic eruptions or extreme events related to the hydrological cycle may produce high-frequency signals in the record. A single dust peak within a low background may therefore reflect a short-term atmospheric disturbance like an eruption or drought over South America or low precipitation over the Southern Ocean. The analysis presented here focuses heavily on the occurrence of dust fluctuations, the physical interpretation of which will depend on the scale of the phenomenon.

Figure 3a shows a succession of 10 factor-of-2 “blow downs” (upper left to
lower right at 11 different resolutions). In order to avoid smoothing, the
data were “zoomed” in depth rather than time, but the point is clear: the
signal is very roughly scale invariant; at no stage is there any sign of
obvious smoothing, and the quasi-periodic 100 kyr oscillations are the only
obvious timescale (we quantify this below). In comparison with more common
paleoclimate signals, such as temperature proxies – which are apparently
smoother but with spiky transitions – the dust flux itself is already quite
spiky. Moreover, it also displays spiky transitions. In Fig. 3b we show the
absolute change in dust flux, and one can visually see the strong spikiness
associated with strongly non-Gaussian variability: the intermittency. At
each resolution, the solid line indicates the maximum spike expected if the
process were Gaussian, and the upper dashed lines show the expected level for a
(Gaussian) spike with probability

Taking the logarithm of the dust flux is common practice since it reduces
the extremes and makes the signal closer to the temperature and other more
familiar atmospheric parameters. We therefore show the corresponding spike
plot for the log-transformed data (Fig. 3c). Although the extreme spikes are
indeed less extreme (see also Fig. 6a, b), we see that the transformation has not qualitatively changed the situation, with spikes still regularly exceeding (log-) Gaussian probability levels of

Figure 4 shows various spectral analyses (for the corresponding fluctuation
analyses, see Fig. 5). There is a clear periodicity at about (100 kyr)

Log–log plot of the Fourier spectrum of the (25 year)

The Haar fluctuation analysis of the entire 800 kyr dust
flux dataset (thin lines). The dashed black and solid pink lines (top pair)
represent rms fluctuations for dimensional and non-dimensional time respectively. The solid black and blue curves are the same but for the mean
absolute (

Since this is a log–log plot, power laws appear as straight lines. We show
in the figure the fits to the bi-scaling function

The variability shown in Fig. 4 can be interpreted broadly or in detail. A
clear feature is the spectral maximum at around (100 kyr)

The overall conclusion is that the background represents between 85 % and 96 % of the total variance.

Figure 5 shows the Haar fluctuations comparing their statistics for both
dimensional and non-dimensional cycles as well as for the mean and rms
fluctuations (bottom and top set of curves respectively). To start, let us consider the direct interpretations of the fluctuations in terms of the
variability of the dust flux. Recall that when the fluctuations increase
with scale, they represent typical differences, whereas when they decrease
with scale, they represent typical anomalies (deviations from long-term mean
values). For example, typical variations over a glacial–interglacial cycle
(half cycle

The macroweather, climate, and macroclimate regimes noted in Fig. 4 are also
clearly visible. In Fig. 5, we can clearly see the short regime with

Beyond confirming the results of the spectral analysis and allowing for direct interpretations of the fluctuation values in terms of typical fluxes, Haar analysis also quantifies the intermittency from the convergence of the rms and mean statistics at larger and larger timescales (see the clear difference in slopes shown in the climate regime: 0.38 versus 0.33). This underlines the limitation of spectral analysis discussed earlier: the fact that it is a second-order statistic that is only a partial characterization of the variability. Finally, the figure also shows that regardless of whether the cycles are defined in dimensional or in non-dimensional time, statistical characterizations (including the exponents) are virtually unaffected.

Figure 6a shows the fluctuation probabilities of the entire 800 kyr series at a 25-year resolution (here the fluctuations are simply taken as absolute
differences at a 25-year resolution). We see that the large-fluctuation (tail)
part of the distribution is indeed quite linear on a log–log plot, with
exponents

While the dust fluxes are always positive and so cannot be Gaussians, the
increments analysed here could easily be approximately so. Nevertheless, a
common way of trying to tame the spikes is by making a log transformation of
fluxes. Figure 4 already showed that this did not alter the spectrum very
much; here it similarly has only a marginal effect. For example, Fig. 6b
shows that the extreme tail of the log dust flux distribution has

We must mention the problem of estimating the uncertainties in the exponents. In the familiar case, one tests a deterministic model, and uncertainty estimates are based on a stochastic model of the errors, which are often assumed to be independent Gaussian random variables. In our case, the basic model is a stochastic one, and therefore one needs a stochastic model of the underlying process from which one can draw random time series. While our paper aims to provide a basis for the formulation of such a model, it is beyond our present scope. In order to obtain robust conclusions, we instead rely primarily on cycle-to-cycle comparisons, two different definitions of time (dimensional and non-dimensional), and a diversity of analysis techniques (spectral, fluctuation analysis, probability distributions). We should also mention that the use of fluxes (the product of 1 cm concentrations and 55 cm accumulation rate) introduces an additional source of uncertainty due to the different time ranges contained in these sections at various depths. However, we prefer using the fluxes because they are more directly representative of climatic changes than concentrations.

However, there are some results that are worth mentioning. For example,
Lovejoy and Schertzer (2012) performed a numerical analysis of the uncertainties in first- and second-order exponent estimates obtained from Haar fluctuations of a universal multifractal model with

Finally, for the problem of estimating probability tail exponents (

Scaling is a statistical symmetry, a consequence of a time and space scaling
symmetry of the underlying dynamics. Being statistical means that

This already illustrates the general problem: in order to obtain robust
statistics we need to average over numerous realizations, and since here
we have a single series, the best we can do is to break the series into
disjoint segments and average the statistics over them, assuming that the
major underlying processes were constant over the last 800 000 years. Yet at the same time, in order to see the wide-range scaling picture (which also
helps to more accurately estimate the scaling properties or exponents), we need
segments that are as long as possible. The compromise that we chose between
numerous short segments and a small number of long ones was to break the
series into eight glacial–interglacial cycles and each cycle into eight successive
phases. As a first approximation, we defined eight successive 100 kyr periods (hereafter called “segments”; Fig. 7a), corresponding fairly closely to the main periodicity of the series. As we discussed, the spectral peak is broad, implying that the duration of each cycle is variable – the cycles are only “quasi-periodic”. It is therefore of interest to consider an additional, somewhat flexible definition of cycles as the period from one interglacial to the next (hereafter called “cycles”; Fig. 7b). The break points were taken at interglacial optima: 0.4,
128.5, 243.5, 336, 407.5, 490, 614, 700, and 789 kyr BP, i.e.

Panel

With either of these definitions, we have eight segments or cycles, each with eight phases. Note that in our nomenclature, phases 1 and 8 are the youngest and oldest phases respectively and that time flows from phase 8 to phase 1. Figure 8 shows the phase-by-phase information summarized by the average flux over each cycle including the dispersion of each cycle about the mean (for the segments in panel a and the cycles in panel b). We see that the variability is highest in the middle of a cycle and lowest at the ends.
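The segmentation into cycles and equal-duration phases, and the per-phase means and dispersions of the kind summarized in Fig. 8, can be sketched as follows (an illustrative reimplementation with our own function names, not the code used in this study):

```python
import numpy as np

def phase_statistics(flux, ages, breaks, n_phases=8):
    """Mean and standard deviation of the flux in each phase of each cycle.

    `breaks` are the cycle boundaries in the same units as `ages`
    (here, interglacial optima in kyr BP); each cycle is divided into
    `n_phases` equal-duration phases, following the segmentation
    described in the text.  Returns two (n_cycles, n_phases) arrays.
    """
    flux, ages = np.asarray(flux, float), np.asarray(ages, float)
    means = np.full((len(breaks) - 1, n_phases), np.nan)
    stds = np.full_like(means, np.nan)
    for c in range(len(breaks) - 1):
        edges = np.linspace(breaks[c], breaks[c + 1], n_phases + 1)
        for p in range(n_phases):
            sel = (ages >= edges[p]) & (ages < edges[p + 1])
            if sel.any():
                means[c, p] = flux[sel].mean()
                stds[c, p] = flux[sel].std()
    return means, stds
```

Passing equally spaced 100 kyr break points yields the fixed “segments”, while passing the interglacial-optima ages listed above yields the flexible “cycles”.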

Panel

The spectra showed that there were wide-scale ranges that are on average
scale-invariant power laws, and Fig. 4 quantifies the glacial–interglacial cycle. We are thus interested in characterizing the scaling
properties over the different phases of the cycle; for this we turn to real-space statistics. In Fig. 9 we compare the statistics averaged over cycles with the statistics averaged over phases. The figure shows that the phase-to-phase differences are much more important than the cycle-to-cycle differences, in particular for the average fluctuations

Panels

From the global statistics (e.g. Figs. 4, 5), it is clear that in each
glacial–interglacial cycle there are two regimes, so that before characterizing the structure functions by their exponents (e.g.

One way of estimating the transition scale

The figure shows that our results are robust, since they are not very
different using dimensional and non-dimensional time (segments and cycles).
Comparing the blue and black curves, we see that in all cases the late
phases have much larger

The transition scale

Alternatively, rather than fixing a phase and determining the variation in
the mean fluctuation and intermittency function (Fig. 9), we can consider
the variation in the Haar fluctuations at fixed timescales and see how they
vary from phase to phase (Fig. 11). The figure shows the phase-to-phase
variation in Haar fluctuations at 50, 100, 200, 400, 800, 3300, and 7000 years
scales (bottom to top; the dashed and solid lines alternate to demarcate the
different curves; they are not uncertainties). Over the macroweather regime
(up to about 400–800 years), the fluctuations tend to cancel so that the
variability is nearly independent of timescale. In contrast, once we reach
the longer scales in the climate regime (up to 7000 years), the fluctuations
increase noticeably as the time interval

Using non-dimensional time, the amplitude of the Haar fluctuations is averaged over all the cycles. The curves from bottom to top are for timescales of

Finally, we describe for each phase the drift tendency (

The fluctuation and intermittency exponents

If

Another useful characterization of the phases is to directly consider the
flux variability at a fixed reference scale, taken here as the 25-year
resolution; quantifying the amplitude of the variability of each segment by
its standard deviation

Whereas

An attractive aspect of dust fluxes is that they are paleo-indicators with
unparalleled resolutions over huge ranges of temporal scales. However, they
come with two difficulties. First, their dynamical interpretation is not
unambiguous: they depend on temperature, wind, and precipitation; dust flux
variability is hard to attribute to a specific process, and it is a holistic
climate indicator. Second, their appearance as a sequence of strong spikes
is unlike that of any of the familiar proxies. Indeed, we argue that their
highly spiky (intermittent) nature (i.e. with

Due to the dominance of the continuum (spectral background) variability,
physical interpretations must be based on an understanding of climate
variability as a function of scale. We first consider overall analyses over
the whole dust flux series and then focus on the phases. The spectral
analysis (Fig. 4) is the most familiar, and for the dust fluxes, it is
qualitatively similar to previous results obtained with temperature data,
although temperature spectra with anything approaching the resolution of
Fig. 4 are only possible over the most recent glacial cycle. The most
striking spectral feature is the peak over the background at 100 kyr
periodicity. The broadness of this peak already indicates the irregularity
of the Earth system response to the eccentricity-forced orbital cycles. The
(near-) absence of obliquity frequencies at 41 kyr is notable and is
consistent with the corresponding analysis of paleotemperatures. Although
there is definitely power in that frequency range, it is barely larger than
the background continuum, suggesting a low response to that forcing.
Finally, our high-resolution data allow us to discern two different
power law regimes: one at low frequencies with an exponent

In Sect. 2.3, we discussed some of the difficulties inherent in
interpreting spectra and showed that the exponent of the integrated spectrum

To interpret the analysis by the phase of the dust record (Fig. 12), one must
understand the significance of

The exponents

The exponents characterize the variability of the dust signal over a wide
range of scales. To understand the two scaling regimes, it may be helpful to
recall that the ice core dust signal depends on both the variability of the
dust source and that of the overall climate system. For example, a spike in
the dust source and a fast change in the system state (e.g.
Dansgaard–Oeschger – DO – events in the NH) could both produce a similar
signal. However, fast changes in system state – such as the DO events in
the NH – apparently do not occur in the SH where the corresponding signals
are more triangular and gradual in shape. High-frequency variations in dust
deposition (at scales in the macroweather regime) are thus likely to be
dominated by dust source dynamics rather than ice sheet changes, which have
generally longer reaction times. One hypothesis is that the transition timescale

Finally, we should mention volcanoes. Volcanic eruptions usually saturated the dust-measuring device and were mostly cut from the record. Using the sulfate record to identify eruptions is difficult because many large sulfate peaks do not have a corresponding dust peak; even when matching dust and sulfate peaks do occur, they could reflect either an eruption or a coincidence. Therefore, the influence of volcanic variability on the results cannot be completely eliminated, although our key results are fairly robust with respect to the phase of the cycle and are therefore unlikely to be influenced by volcanic eruptions.

Although the spikes occur at all scales (see Fig. 3), the most likely explanation for the (shorter) macroweather-scale dust spikes is disturbances in the atmosphere, involving either the winds or the hydrological cycle (or both at the same time). The obvious candidate for a perturbation that would lead to increased dust in the atmosphere is drought. We will therefore interpret macroweather dust spikes as multiannual to multidecadal or multicentennial drought events in southern South America. With this interpretation, we can conclude that glacial maxima, interglacials, and glacial inceptions were characterized by more frequent and more severe drought events than during the mid-glacial. During glacial maxima, such extreme dust events could have contributed to Southern Hemisphere deglaciation by significantly lowering ice sheet albedo at the beginning of the termination (Ganopolski and Calov, 2011). In contrast, more frequent dust events could have contributed to glacial inception through negative radiative forcing of the atmosphere.

Until now, a systematic comparison of the different glacial–interglacial cycles has been hindered by a limitation of the most common paleoclimate indicators: the low resolution of Pleistocene temperature reconstructions from ice or marine sediment cores. Due to this intrinsic characteristic, the older cycles are poorly discerned; we gave the example of the EPICA paleotemperatures, whose resolution in the most recent cycle was 25 times higher than in the oldest one. In this paper, we therefore took advantage of the unique EPICA Dome C dust flux dataset, measured at 1 cm resolution over the 3.2 km (320 000 cm) long core, whose worst temporal resolution is 25 years.

Dust fluxes are challenging not only because of their high resolution, but also because of their unusually high spikiness (intermittency) and their extreme transitions that occur over huge ranges of timescales. Standard statistical methodologies are inappropriate for analysing such data. They typically assume exponential decorrelations (e.g. autoregressive or moving average processes) that have variability confined to narrow ranges of scale. In addition, they assume that the variability is quasi-Gaussian, or at least that it can be reduced to quasi-Gaussian through a simple transformation of variables (e.g. by taking logarithms). In this paper, using standard spectral and probability distribution analyses, we showed that both the spectral and the probability tails were power laws rather than exponentials, thus requiring nonstandard approaches.
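A power-law probability tail of the form Pr(X > s) ∼ s^(−qD) can be diagnosed in a rough way by rank-ordering the sample and fitting the empirical survival function on log–log axes. The estimator below is a hedged sketch (our own function name and tail fraction), not the method used in our analysis:

```python
import numpy as np

def tail_exponent(x, tail_fraction=0.05):
    """Estimate the exponent qD of a power-law probability tail,
    Pr(X > s) ~ s^(-qD), from the largest `tail_fraction` of samples.

    Least-squares fit of the log empirical survival probability
    against the log of the rank-ordered values; illustrative only.
    """
    x = np.sort(np.asarray(x, dtype=float))[::-1]   # descending order
    k = max(int(len(x) * tail_fraction), 10)
    ranks = np.arange(1, k + 1)        # empirical Pr(X > x_i) ~ i / n
    slope, _ = np.polyfit(np.log(x[:k]), np.log(ranks / len(x)), 1)
    return -slope

# For a classic Pareto sample with exponent 3, the estimate should be
# near 3 (numpy's pareto() draws Lomax variates, hence the +1 shift).
rng = np.random.default_rng(1)
qd = tail_exponent(rng.pareto(3.0, 200_000) + 1.0)
```

An exponential tail, by contrast, would curve downward on the same log–log plot rather than follow a straight line, which is the qualitative distinction drawn in the text.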

The high resolution of the data allowed us to not only quantitatively compare glacial–interglacial cycles with each other, but also to subdivide each cycle into eight successive phases that could also be compared to one another. One of the key findings was that there was a great deal of statistical similarity between the different cycles and that within each cycle there were systematic variations in the statistical properties with phase. These conclusions would not have been possible with the corresponding much lower-resolution temperature proxy data.

Our variability analysis using real-space (Haar) fluctuations confirmed that
the majority of the variability was in the macroweather and climate scaling
regime backgrounds with an average transition scale

Using various techniques,

We addressed the task of statistically characterizing the cycles by
primarily characterizing the phases' variability exponents

We interpret the intermittency indicators as suggesting a higher frequency of drought events and more severe droughts during glacial inception, interglacials, and glacial maxima than during mid-glacial conditions. These short-term spikes in atmospheric dust could have helped trigger Southern Hemisphere deglaciation through albedo feedback of ice sheet surfaces or glacial inception through negative radiative forcing.

The results presented in this paper are largely empirical characterizations of a relatively little-known source of climate data: dust fluxes. Dust flux statistics defy standard models: they require new analysis techniques and better physical models for their explanation. For these reasons, our results may appear rough and approximate. Readers may nevertheless wonder why we did not provide standard uncertainty estimates. Meaningful uncertainties, however, can only be defined with respect to a theory, and we have become accustomed to theories that are deterministic, whose uncertainty is parametric and arises from measurement error. The present case is quite different: our basic theoretical framework is a stochastic one; it implicitly involves a stochastic “earth process” that produces an infinite number of statistically identical planet earths, of which we only have access to a single ensemble member. Unfortunately, we do not yet have a good stochastic process model from which we can infer sampling errors and uncertainties. In addition, from this single realization, we neglected measurement errors and estimated various exponents that characterize the statistical variability over wide ranges of timescale, realizing that the exponents themselves are statistically variable from one realization to the next. In place of an uncertainty analysis, we therefore quantified the spread of the exponents (which themselves quantify variability). In the absence of a precise stochastic model we cannot do much better.

This paper is an early attempt to understand this unique very high-resolution dataset. In future work, we will extend our methodology to the EPICA paleotemperatures and to the scale-by-scale statistical relationship between the latter and the dust fluxes.

The dust flux data are available here:

Both authors analysed the data and contributed to the writing.

The authors declare that they have no conflict of interest.

Shaun Lovejoy's contribution to this fundamental research was unfunded. Fabrice Lambert acknowledges support by CONICYT projects Fondap 15110009, ACT1410, Fondecyt 1171773, and 1191223 and the Millennium Nucleus Paleoclimate. Millennium Nucleus Paleoclimate is supported by the Millennium Scientific Initiative of the Ministry of Economy, Development and Tourism (Chile).

This research has been supported by CONICYT (FONDAP project nos. 15110009 and ACT1410), FONDECYT (project nos. 1171773 and 1191223), and the Millennium Nucleus Paleoclimate, which is supported by the Millennium Scientific Initiative of the Ministry of Economy, Development and Tourism (Chile).

This paper was edited by Carlo Barbante and reviewed by Michel Crucifix and two anonymous referees.