Hi again,

Nathan and I had a good Skype discussion and we believe the best plan of attack is the following: we run the 100-member ensemble reconstructions for each method for 25 realizations of noise (each screened and unscreened). If the results are sufficiently different from our real-data reconstructions, this should hopefully satisfy the reviewer and editor.

Here's a step-by-step to-do list :-)

1. Get the noise proxy realizations for both the screened and unscreened versions (I'll send you a download link in a few minutes). Note that this includes 100 realizations of noise proxies each, so if your method is fast and you have enough disk space, you can also run all 100 to be prepared in case we need them... For the screened data, the zip folder contains one file with data and one with metadata for each realization, in the same format as usual. For the unscreened data, there is only one metadata file because the metadata are the same for each realization. Data filenames start with "AR-Noise_#", where # is the noise realization number. Screened metadata filenames start with "metadata_AR-Noise_#".

2. Run at least 25 100-member ensemble reconstructions for the unscreened and screened versions. Or better: one realization we already have from our last submission, so we only need 24 more. The one we have is noise realization #100, so please run realizations #76-99 as a minimum.

3. Put these 2x24 ensemble reconstructions on a server where I can download them, and I'll fill up all the disk space I can find :-) If possible, provide 1 file per reconstruction, ideally in netCDF format (with ensemble members in levels).

Juanjo, is this feasible for you to do? How much time would you expect this to take? Let me know if there are any questions / comments.

cheers
raph

email from: Jan 8, 2019
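[For reference, the naming and numbering conventions above can be sketched as a small helper. This is only an illustrative sketch: the function name, the shared unscreened metadata filename, and the file extensions are assumptions, not taken from the actual download; only the "AR-Noise_#" / "metadata_AR-Noise_#" stems and the #76-99 range come from the email.]

```python
# Sketch of the input-file conventions described in the email.
# Assumptions (not from the email): function name, the unscreened shared
# metadata filename "metadata_AR-Noise", and the absence of file extensions.

def noise_proxy_files(realization, screened):
    """Return (data_stem, metadata_stem) for one noise realization.

    Data filenames start with "AR-Noise_#" where # is the realization
    number. Screened runs have one metadata file per realization,
    starting with "metadata_AR-Noise_#"; unscreened runs share a single
    metadata file (its exact name is assumed here).
    """
    data = f"AR-Noise_{realization}"
    if screened:
        meta = f"metadata_AR-Noise_{realization}"
    else:
        meta = "metadata_AR-Noise"  # assumed name of the single shared file
    return data, meta

# Realization #100 already exists from the last submission, so #76-99
# are the minimum new runs (24 per version, screened and unscreened).
new_realizations = list(range(76, 100))
```

Looping `noise_proxy_files(r, screened)` over `new_realizations` for both `screened=True` and `screened=False` enumerates the 2x24 runs requested in step 3.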