Articles | Volume 21, issue 11
https://doi.org/10.5194/cp-21-2485-2025
© Author(s) 2025. This work is distributed under the Creative Commons Attribution 4.0 License.
CYCLIM: a semi-automated cycle counting tool for generating age models and palaeoclimate reconstructions
- Final revised paper (published on 28 Nov 2025)
- Preprint (discussion started on 13 Aug 2025)
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
- RC1: 'Comment on egusphere-2025-3749', Anonymous Referee #1, 08 Oct 2025
- AC1: 'Reply on RC1', Edward Forman, 27 Oct 2025
- RC2: 'Comment on egusphere-2025-3749', Anonymous Referee #2, 09 Oct 2025
- AC2: 'Reply on RC2', Edward Forman, 27 Oct 2025
Peer review completion
AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
ED: Publish subject to minor revisions (review by editor) (29 Oct 2025) by Francesco Muschitiello
AR by Edward Forman on behalf of the Authors (31 Oct 2025)
ED: Publish as is (17 Nov 2025) by Francesco Muschitiello
AR by Edward Forman on behalf of the Authors (17 Nov 2025)
Review of 'CYCLIM: a semi-automated cycle counting tool for palaeoclimate reconstruction' by Forman and Baldini (egusphere-2025-3749)
In this paper the authors present a code that automatically counts isotope-geochemical growth bands. The aim is to provide a fast method for performing the annual layer counting task needed to establish a precise chronology. While some automated layer-counting algorithms are already available in the field, the new aspect of this approach is that it allows the user to refine the result after the automated counting procedure. That said, judging from the three provided examples, each of different complexity, even the counting algorithm alone performs quite well. With the authors' approach, the user can perform the counting task much faster than by counting all cycles manually.
The manuscript is well written and structured. The introduction in particular reads very nicely, and I also liked the section with the examples and the short discussion. What could be improved somewhat is the methods part, which is the most important section of the manuscript: I would like a bit more detail on the model parameters and their influence on the layer counting. Please find my (what I would call) moderate suggestions listed below. Pending improvements with respect to these suggestions, I recommend considering this manuscript for publication in CP. In my opinion the manuscript deserves to be published, as I can imagine that many researchers can and will make use of this approach.
Kind regards.
#################################################################################
Minor to moderate comments/suggestions:
L14 and L311-312: The figure '14.1 times faster' is very specific, and unfortunately the text does not explain how this number is calculated. At the moment I doubt it, especially since it is always 14.1 for all records, no matter how difficult or long they are. I suggest changing this to something broader, e.g. 'one order of magnitude faster'.
L75-78: This part is about the parameters and, in my opinion, it is rather thin. It does not describe how the parameters are derived by the code, what they are, what they control, or how they influence the result. In the results section, where the approach is tested against already published data, some text at least hints at how some of the parameters are derived. I suggest at least adding a table that lists the necessary parameters, briefly explains how they are determined, and gives typical values. A few additional sentences in the text or table describing them in more detail may also help.
It may also interest the reader what influence the choice of the mean cycle length has on the result, at least for the automated part. I guess the total influence on the final result is minor, as the manual part can change the result in an arbitrary way.
L78-79: What happens with gaps due to cuts of the samples, as they might occur during sample preparation? Are these also covered?
L80-83: This is a very helpful tracking procedure. Very nice idea.
L84-85: You count the possible types of anchor points here, i.e. only one of each type. However, I can imagine that, especially for U-Th ages, more than one dated depth could exist over the counted interval. Can CYCLIM cope with that as well? Including all available U-Th dated depths when placing the counted interval could really help to pin down the chronology.
L85-86: "The algorithm derives a median age model from 2,000 Monte Carlo realisations using piecewise cubic Hermite interpolating polynomial (PCHIP) interpolation, which …" Can you please elaborate a bit more on this? I don't understand why interpolation is needed here. And what kind of Monte Carlo realisations? What is being varied?
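To make my question concrete, here is a minimal sketch of what I would have expected such a procedure to look like. All depths, ages and errors below are invented, and the assumption that the anchor ages are the quantity being perturbed is mine, not taken from the manuscript:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

rng = np.random.default_rng(0)

# Hypothetical anchor points: depth (mm), age (yr) and 1-sigma age error (yr)
anchor_depth = np.array([0.0, 50.0, 120.0, 200.0])
anchor_age = np.array([0.0, 48.0, 119.0, 205.0])
anchor_err = np.array([0.5, 2.0, 3.0, 5.0])

eval_depth = np.linspace(0.0, 200.0, 201)
n_real = 2000

ages = np.empty((n_real, eval_depth.size))
for i in range(n_real):
    # My guess of what is varied: each anchor age is perturbed within
    # its uncertainty; sorting keeps the realisation monotonic in age
    perturbed = np.sort(anchor_age + rng.normal(0.0, anchor_err))
    ages[i] = PchipInterpolator(anchor_depth, perturbed)(eval_depth)

median_age = np.median(ages, axis=0)  # the median age model
```

If something along these lines is what CYCLIM does, simply stating which quantities are perturbed in each realisation would already answer my question.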
L86: "uses" → "used"
L88-L89: "Furthermore, CYCLIM can translate the proxy values onto a time-certain axis to convert age uncertainty into proxy uncertainty." This is not a comment specific to this paper, and you are free to ignore it if you like, but maybe you can help me out here.
I know this concept has already been proposed in earlier studies (e.g., Breitenbach et al., 2012). However, I personally do not really understand it; it does not appear meaningful to me. I always think about it the following way: just because a signal cannot be placed perfectly in time, the magnitude of an event is not smaller than measured. Or, more extreme: just because a clearly pronounced event in the measured proxy cannot be dated at all does not mean it is not there, as this approach would tend to suggest. At least this is my argument for not agreeing with the approach. But this is only my opinion; maybe there are arguments in favour of the procedure that I am not aware of.
So my question about this sentence would be: can this feature be deselected by the user?
L103-106: Could you elaborate a bit more on the choice of the width, w? I think this would help readers set their own w if they prefer to do so. What is the impact of a change in w? Can you perform a short sensitivity analysis, perhaps with one (or all) of your example data sets?
By default CYCLIM uses half the average annual cycle length. However, growth layers can change quite strongly throughout a record. What happens in phases where the cycles are much shorter than average, and in phases of very rapid growth? Does such behaviour result in under- or over-counting? I think this would be very interesting to the reader; at least it is to me.
Related to this, it might be worthwhile in a future version of CYCLIM to make the template width w depth-adaptive by using a wavelet. If over- and under-counting in periods of low and high growth rate is an issue, this method could improve the counting performance when changes from fast to slow growth occur. Just an idea.
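To illustrate the kind of sensitivity test I have in mind, here is a rough sketch on a synthetic record. I use a simple moving average as a stand-in for the actual template matching, which I do not know in detail, and all values are invented:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)

# Synthetic record: 20 cycles whose length shrinks along the record
# (a crude stand-in for a growth-rate change), plus white noise
n_cycles = 20
t = np.linspace(0.0, 1.0, 2000)
signal = np.sin(t ** 1.5 * n_cycles * 2 * np.pi) + 0.3 * rng.normal(size=t.size)

mean_cycle_len = t.size / n_cycles  # 100 samples per cycle on average

counts = {}
for w_frac in (0.25, 0.5, 1.0):  # width as a fraction of the mean cycle length
    w = int(w_frac * mean_cycle_len)
    smoothed = np.convolve(signal, np.ones(w) / w, mode="same")
    peaks, _ = find_peaks(smoothed, prominence=0.5)
    counts[w_frac] = peaks.size  # cycles found for this choice of w
```

In this toy example a width close to the full mean cycle length suppresses the short cycles at the end of the record and leads to under-counting, which is exactly the behaviour I am asking about.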
L119-221: "minimum prominence criterion": Please elaborate more on this. What is it, and how do you (or CYCLIM) define it? Have you tested the impact of this criterion? Is it the same quantity that you later, in sections 3.2.1 to 3.2.3, set to 0.04 permil (Houtman Abrolhos coral), 2.5 % (C09-2) and 503 ppm (BER-SWI-13)? You could test the impact of this value with these data sets, or at least one of them, to give the reader an idea of the sensitivity of the parameter choice: use the same time series with values equivalent to 1, 2.5, 5 and 10 % of the range in y (or other values) and look for changes in the result. The results may well differ with changes in growth rate, proxy amplitude or signal-to-noise ratio of the smoothed record.
Then again, as the approach is semi-automatic, the user can correct any over- or under-counting by the program. This should also be mentioned when doing the sensitivity analysis.
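A sketch of the sweep I am suggesting, on a synthetic record. I use the prominence argument of scipy's find_peaks as a stand-in for CYCLIM's minimum prominence criterion, whose exact definition I do not know; all values are invented:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)

# Synthetic smoothed proxy record with 30 known cycles
x = np.linspace(0.0, 30 * 2 * np.pi, 3000)
record = np.sin(x) + 0.2 * rng.normal(size=x.size)
record = np.convolve(record, np.ones(25) / 25, mode="same")

data_range = record.max() - record.min()
counts = {}
for frac in (0.01, 0.025, 0.05, 0.10):  # prominence as a fraction of the range
    peaks, _ = find_peaks(record, prominence=frac * data_range)
    counts[frac] = peaks.size  # cycles found at this prominence threshold
```

A table or small figure of such counts against the threshold would give the reader a direct feeling for how forgiving (or not) the parameter choice is.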
Fig. 2: I suggest putting the red line on its own y-axis (right-hand side?). Otherwise it is confusing with respect to equation 1.
Section 2.3: I agree that this is one option for estimating the uncertainty, but to me it reads as a rather unusual one, as it is not the time series/signal that is uncertain; it is the counting, through the choice of the parameter set.
Therefore, I would instead have tried to randomly vary the configuration of the parameter set that the user is free to choose (or the default values), at least within a certain range.
In this way the signal remains as measured, since it is most likely not the signal that is loaded with uncertainty, but rather the choice of parameter values.
I know I would be asking a lot if I requested a change of the uncertainty algorithm; this would probably require some (heavy?) recoding. So please decide for yourself how you would like to proceed. In any case, I also provide a few thoughts on the present algorithm.
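For clarity, a sketch of the alternative I am suggesting, again on a synthetic record. The smoothing width and peak prominence below are stand-ins for whatever parameters CYCLIM actually exposes, and all values are invented:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(3)

# Synthetic record with 25 cycles (100 samples per cycle)
x = np.linspace(0.0, 25 * 2 * np.pi, 2500)
record = np.sin(x) + 0.2 * rng.normal(size=x.size)

base_w, base_prom = 50, 0.5  # a user's chosen parameter values

counts = []
for _ in range(200):
    # Perturb the parameters rather than the signal, here by +/-50 %
    w = int(base_w * rng.uniform(0.5, 1.5))
    prom = base_prom * rng.uniform(0.5, 1.5)
    smoothed = np.convolve(record, np.ones(w) / w, mode="same")
    peaks, _ = find_peaks(smoothed, prominence=prom)
    counts.append(peaks.size)

spread = np.std(counts)  # counting spread attributable to the parameter choice
```

The spread of the resulting counts would then quantify how much of the counting uncertainty comes from the parameter choice, while the measured signal is left untouched.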
L142: "For multiple random seeds … in 1% increments (from 1% to 100%)." How many random seeds? Does this mean you tried, e.g., 1000 random seeds and increased the white noise from 1 to 100 % for each seed, so that you obtain some statistics for your F1 approach? Please also elaborate on what the '%' unit means in this context; I have no idea.
L145: "that are true": What is meant by this? 'True' as in 'matching what was found in the undisturbed signal' (which does not mean that it is really true)?
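To show how I currently read lines 142-145, here is a sketch of the F1 computation I have in mind. This is purely my interpretation: in particular, the reading of the '%' unit as noise standard deviation relative to the signal's, and the matching tolerance, are my assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

# Clean reference signal with 20 cycles (100 samples per cycle)
x = np.linspace(0.0, 20 * 2 * np.pi, 2000)
clean = np.sin(x)
ref_peaks, _ = find_peaks(clean)  # peaks of the undisturbed signal

def f1_at_noise(percent, seed, tol=25):
    # My reading of '%': noise std as a percentage of the signal's std.
    # A detected peak counts as 'true' if it lies within tol samples
    # of a peak of the undisturbed signal (a crude one-to-one matching).
    rng = np.random.default_rng(seed)
    noisy = clean + (percent / 100.0) * clean.std() * rng.normal(size=clean.size)
    smoothed = np.convolve(noisy, np.ones(25) / 25, mode="same")
    peaks, _ = find_peaks(smoothed, prominence=0.5)
    tp = sum(np.min(np.abs(peaks - rp)) <= tol for rp in ref_peaks) if peaks.size else 0
    fp = max(peaks.size - tp, 0)
    fn = len(ref_peaks) - tp
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Average over several seeds at each noise level
f1_low = np.mean([f1_at_noise(10, s) for s in range(20)])
f1_high = np.mean([f1_at_noise(100, s) for s in range(20)])
```

If this is roughly what is done in the manuscript, stating the number of seeds, the meaning of the '%' unit and the matching rule for 'true' minima would resolve both of my questions.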
Fig. 3d and lines 175-176: I understand your argument for the stepped nature of the error estimation, and that the way the age uncertainty is calculated produces floating-point values. But to my understanding the 'deviation from mean model years' can only take integer values: an identified minimum either counts or it does not. Would it be an option to at least round the values?
Fig. 7c (same in Figs. 9c and 11c): about the "temporal offset". How can the offset take floating-point values? I don't understand this; please explain. To me, showing floating-point values suggests that it is possible to differentiate seasons. Is that intended?
Fig. 8 (same in Figs. 10 and 12): At least for panel (a), the quality of the figure must be improved so that the differences in the ages remain visible when zooming in. In the present manuscript version the resolution is too poor. Maybe use vector graphics?
Line 313: "other method": Please name this other method. Is it mentioned in the methods section? I cannot find it.