Review of “Implementation of counted layers for coherent ice core chronology” by B. Lemieux-Dudon et al.
The manuscript describes a methodological extension of the Datice model used to create the AICC2012 ice-core chronological framework. Although mainly of technical interest at this point, it is a potentially very significant methodological advance that addresses one of the major shortcomings of the previous model: the inability to include the results of annual layer counting in a way that respects the nature of layer counting. Please note that this review does not go into the details of the mathematics of section 2.2 and the appendices.
The revised manuscript has improved a lot since the first version, and the results are much more convincing (with one point of clarification needed; see the comment on Fig. 4 below).
I therefore believe the results are appropriate for publication in CP, provided that the issues mentioned below are fixed and the manuscript is improved with regard to language and clarity. I have not attempted to comprehensively mark up language errors or minor presentation issues, but I have spotted many grammatical errors, unclear syntax, and odd use of prepositions. Especially because the manuscript is already hard to read in places due to its technical nature, it would benefit greatly from language revision by a native speaker.
CP review checklist
1) Does the paper address relevant scientific questions within the scope of CP? Yes
2) Does the paper present novel concepts, ideas, tools, or data? Yes
3) Are substantial conclusions reached? Yes
4) Are the scientific methods and assumptions valid and clearly outlined? Some clarifications and language improvements are needed.
5) Are the results sufficient to support the interpretations and conclusions? Yes, provided that the authors can convincingly explain why the modelled chronologies are no longer critically dependent on the length of the counted sections used as constraints, as was the case in the first version of the manuscript.
6) Is the description of experiments and calculations sufficiently complete and precise to allow their reproduction by fellow scientists (traceability of results)? Yes
7) Do the authors give proper credit to related work and clearly indicate their own new/original contribution? Yes
8) Does the title clearly reflect the contents of the paper? Sort of … see suggestions below
9) Does the abstract provide a concise and complete summary? Yes
10) Is the overall presentation well structured and clear? Well structured: yes. Clear: some attention to language is needed.
11) Is the language fluent and precise? Not really
12) Are mathematical formulae, symbols, abbreviations, and units correctly defined and used? I think so.
13) Should any parts of the paper (text, formulae, figures, tables) be clarified, reduced, combined, or eliminated? As discussed below, the section around pages 12–13 could be shortened.
14) Are the number and quality of references appropriate? Yes
15) Is the amount and quality of supplementary material appropriate? N/A
What about “Implementation of layer counting in Bayesian ice core chronology modelling” or “Implementation of layer counting in modelling of coherent ice core chronologies”?
Line 5: Remove “s” in ice cores.
Line 9: I still do not think that “markers” is a good description, and “age-difference” is also still a misleading term to use here. A marker is related to either an event or an age, not to a certain number of years between two horizons. “Markers of age-difference” indicates that the age at some particular point is different from some other age estimate, which is not the case (and I believe it is an incorrect use of a hyphen, too, but that is a minor detail). You are yourselves using another meaning of “age difference” in figures 5-7, which illustrates the potential for confusion. If you are not willing to use “duration” (which is exactly what the number of counted layers between two horizons is, as you also say yourselves on page 5, line 15), what about “age span” or “layer-counted intervals”? Or find a smart way to replace “marker” with “constraint” or something similar.
Line 15: … which will still not be an “exercise”: when this paper is published, the method will be used to derive new chronologies, not to perform exercises. Thus, the use of “exercise” is appropriate in line 16, because no new chronology is released with this paper, but not in line 15.
Line 25: “1–8 years for counting of 20 annual layers” has been changed to “about 0–4 years for counting of 20 annual layers”, which is in line with the GICC05 data file. However, the average relative counting error seems more relevant than the absolute range of uncertain years per 20-year interval.
Line 26: “Since the layer counting is not independent from one interval to another, the final uncertainty on the GICC05 chronology cumulates the counting error (Maximum Counting Error (MCE)”. See the comments on the same sentence in the previous review – it is still not correct, and the grammar needs to be fixed. It may be better explained in section 2.3, but it needs to make sense here, too.
Line 12: “Permits” is not correct. The constraint forces it.
Line 6-7 and 9: “counted errors” -> “counting errors”
Figures 1 and 3:
Is there only supposed to be one orange line on Figure 1, and is it identical to the GICC05 curve?
I get the feeling that Figs. 1 and 3 show essentially the same thing, namely that you can feed Datice bad background scenarios and it still converges to GICC05. That is a lot of space to spend on that point.
The terms “analysed chronologies” and “analysed error” seem really strange to me, as they indicate that the background chronology and counted chronologies have not been analysed. Could you think of a more appropriate word? “Modelled”, “generated”, “output”, or similar …
What do you mean by “older than GICC05”?
And why is the “disymmetry” (rather: asymmetry) “of their probability distribution” expected?
This way of describing the annual layer counting is a bit confusing, as the summation takes place from annual cycle p to q, implying that the annual cycles are numbered (and thus dated) prior to the dating of the core. I can see what the authors are trying to do, but I do not think the highly complex nomenclature (which expresses exactly the same as the text at the top of page 12) adds to the clarity.
Line 3: “Over which time window should we sample the GICC05 ma(r)kers of age–difference?” Do you simply mean to ask how many years there should be in each counted interval? Not clear!
Similarly to the comment above on page 12, the lower part of page 13 is a very complicated way of writing up how the errors would sum in the situations of no or full correlation, neither of which is realistic. Essentially the same discussion is repeated on page 14 for 20-year intervals rather than on the annual scale, so the text could be shortened from page 12, line 4 (or 11), to page 14, line 14.
Alternatively, the link between the proposed formal definition (which is based on Gaussian distributions representing every single certain and uncertain year) and how the authors use the publicly available 20-year binned GICC05 depth–age relation and MCEs is needed.
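To make the two limiting cases concrete, the contrast between no correlation and full correlation of per-interval counting errors can be sketched numerically. This is a minimal illustration only; the 1-sigma values are invented for the example and do not come from the manuscript or the GICC05 data file:

```python
import math

# Hypothetical 1-sigma counting errors (in years) for five
# consecutive 20-year intervals; values are illustrative only.
sigmas = [1.0, 1.5, 0.5, 2.0, 1.0]

# No correlation between intervals: errors add in quadrature.
uncorrelated = math.sqrt(sum(s**2 for s in sigmas))

# Full correlation between intervals: errors add linearly,
# which is the way a maximum counting error accumulates.
correlated = sum(sigmas)

print(f"uncorrelated total: {uncorrelated:.2f} yr")   # ≈ 2.92 yr
print(f"fully correlated total: {correlated:.2f} yr") # 6.00 yr
```

The linear sum always dominates the quadrature sum, which is why the choice of correlation assumption (and of interval length) matters so much for the accumulated uncertainty discussed on pages 12–14.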
Line 13: “Consequently” is not correct. In the source papers, the assumption of setting MCE/2 = sigma was not directly linked to the error summation strategy.
Line 22: “1. Either we believe that the full error correlation assessed over the 20yrs time-window between annual layers cuts-off.” I simply do not understand what this means. This is a good example of why a language make-over preferably by a native English speaker is needed.
Line 11: “very distinct inputs” … do you mean “very different inputs”?
Line 17: See, this is a real model development and improvement. Thank you!
Eq. 26: Why this form?
Line 9: “exercises” … will not be exercises, but serious work.
It is really difficult to derive any useful information from this graph if it is reproduced at less than full page width, and even then the dashed lines are very hard to decipher.
Are there no constraints keeping the absolute age close to GICC05 in these runs? It seems amazing that the different runs agree within 20 years or so, especially when the “analysed errors” differ by a factor of 3 between the experiments. When comparing to figure 2 of the first version of the manuscript, the difference is striking. In the first version, the length of each counted interval had a direct and large influence on the modelled chronology (i.e. the 100 y and 200 y curves in figure 2 were more than 100 years apart at 1900 m), while in the present manuscript the difference between the 100 y and 200 y curves in figure 4 is an order of magnitude smaller. Only the modelled error bars are significantly different.
What has changed?
Last section: Another advantage is that all deep Greenland cores are available on the GICC05 timescale.
Line 18: “There is no unique way to define the correlation …” suggested alternative:
“There is no objective way to choose the best representation of the correlation …”
And please replace “exercises” with “experiments” or similar.
The WAIS Divide paper cited as “Members, 2014” needs fixing.