cc: Dáithí Stone, Myles Allen, JKenyon, "Bamzai, Anjuli", Tim Barnett, Nathan, Phil Jones, David Karoly, knutti@ucar.edu, Tom Knutson, Toru Nozawa, Doug Nychka, Claudia Tebaldi, Ben Santer, Richard Smith, "Stott, Peter", Michael Wehner, Xuebin Zhang, Francis Zwiers, hvonstorch@web.de, "Amthor, Jeff", Chris Miller
date: Mon, 17 Sep 2007 16:48:10 +0100
from: peter.stott@metoffice.gov.uk
subject: Re: near term climate change
to: Gabi Hegerl

Hi Gabi,

I think it would be good somewhere to get on the table the importance of BCs and TCR, although maybe this is already there. The "sweet spot" paper of Cox and Stephenson seems to have got everybody's attention with the idea that initial-condition uncertainty dominates over the 30-year timescale of these runs, whereas the conclusion of our 2000 paper, as nicely summarised in Myles's University of Pennsylvania Law Review article (see www.penumbra.com/issues), was that external forcings control multi-decadal temperatures on large scales. It would be interesting to see somewhere a more detailed analysis of how the sweet spot breaks down by scale, although we could do more on this with the existing database.

If we go for advocating SST-forced timeslice experiments, I think we need to systematically explore the sensitivity of results to the attributable SST component, which implies we need to expand our attributable SST database from the four we currently have (AR4 Fig 9.9). Probably because I've not been paying enough attention, I'm not clear where AR5 is going with systematically exploring modelling uncertainty, as opposed to concentrating on initial-condition uncertainty and carbon cycle inversion uncertainty. The issue of interpretation of how the initialisation error evolves in time, which Doug points out, was also identified at an internal Hadley Centre discussion as a critical point.
Peter

On Mon, 2007-09-17 at 14:29 +0100, Gabi Hegerl wrote:
> Hi Myles, Daithi et al.,
>
> Well, the program calls exactly for these historic initial-condition
> runs, starting 1960, 1980, etc., so they are planning to take a critical
> look at the way the model performs, and my understanding is that this
> call for runs is suggesting that groups decide (over the near future)
> whether their best bet for good near-term predictions is using ICs or
> using BCs, and they are all called to do both if at all possible.
> I think having the runs available to our community would allow us to
> assess whether they predict the right change in "moderate extremes" (I
> know, weird term: things like exceedance of the 90th percentile, etc.).
> I agree with Myles that hopes are small that we can get decent
> statistics of the rare extremes, and even smaller that we can diagnose
> their changes.
>
> I agree about the volcanoes, and that's what the proposal is still a bit
> murky about - we need to worry about it some, and doing at least some
> test runs repeating the 1960s-90s volcanism into the future might be
> very useful (as a high case).
>
> I am less pessimistic about the usefulness of these runs, but maybe I am
> being naive? Myles's suggestion is still a great one, but I think if we
> can get even a few modelling centres to do something like that (without
> the WGCM seal, since that would be really hard to come by), then the
> results would be wonderful to compare against this set of runs. I just
> couldn't get the modellers at all excited about this... (Myles, you are
> more than welcome to try to talk John Mitchell into this - if you win
> over John you've won it, but I would bet that there is no hope.)
>
> Gabi
>
> Dáithí Stone wrote:
> > Gabi and co.,
> >
> > In my view there is little more than academic interest in doing
> > high-resolution initial-condition runs for the next couple of decades
> > unless they include volcanic aerosols.
> > The big volcanic eruptions appear (in models at least) to be quite
> > influential. The initial-condition ensemble *may* (as you say in the
> > attachment) give a more precise prediction than what has been produced
> > so far, but this will be misleading because a potentially big factor
> > is being omitted. When there is a ~50% chance of a big eruption in the
> > next couple of decades, saying "we didn't know when exactly" is a lame
> > excuse if you are going to all the trouble of including initial
> > conditions.
> >
> > From the attribution point of view, these initial-condition ensembles
> > will be pretty useless, I think, in the form that they will be done.
> > What we would need is a series of historical initial-condition
> > forecasts, like they have for seasonal forecasts, to determine drifts
> > and skill. I'm sure data from ARGO floats, etc. will be going into the
> > current forecasts, but that sub-surface ocean data does not exist for
> > the past, which I expect means past forecasts will have very little
> > skill. So are past forecasts useful for evaluating the current
> > forecast? The point is I'm not sure how useful current forecasts will
> > be (beyond academic) if we have no way of evaluating them.
> >
> > My 2p,
> > DA
> >
> > On Mon, 17 Sep 2007, Gabi Hegerl wrote:
> >
> >> Well, I tried to suggest the SST-forcing experiments but got very
> >> little enthusiasm. Many were interested only in talking about and
> >> synchronizing the coupled carbon cycle scenarios, so having anything
> >> systematically done and stored, targeted at shorter timescales,
> >> sounded better than just letting each group do whatever they want and
> >> wind up with no comparability at all and no data available outside
> >> the respective modelling groups.
> >> I am hoping (but couldn't get anybody willing to discuss details)
> >> that 20th-century runs will also be collected from the models used to
> >> 2100.
> >>
> >> We could try two things: add that we need the runs with 1960 start
> >> time with GHG-only as well as complete forcing (I admit I assumed
> >> that, but it's nowhere said directly - I just missed that), or add a
> >> caution/pull out...
> >>
> >> Gabi
> >>
> >> Quoting Myles Allen:
> >>
> >>> Hi Gabi,
> >>>
> >>> I know this is what the modelers want to do, but I'm still not clear
> >>> what relevance these experiments have either for attribution or for
> >>> predicting trends in extremes. The ensembles are too small, the
> >>> resolution is too low, and there is no provision for systematically
> >>> removing the impact of different forcings (apart from a nod towards
> >>> GHGs, but it is unclear to me how uncertainty in current
> >>> attributable warming is to be dealt with, particularly in the runs
> >>> initialized from observations).
> >>>
> >>> I totally appreciate the community interest in the design you
> >>> describe, and I don't blame you in the slightest. There is a lot of
> >>> inertia here. But if we want to attribute current risks or predict
> >>> trends in risk over the next couple of decades, then the best design
> >>> still looks to me like large-ensemble time-slice experiments with
> >>> SSTs either prescribed or relaxed with a time constant of only a few
> >>> weeks. This way you get the signal-to-noise up, you benchmark
> >>> against a good simulation of the present day, and you can run high
> >>> enough resolution and large enough ensembles actually to simulate
> >>> the events people care about. What can they say about 100-year
> >>> return-time events with 10-member ensembles?
> >>>
> >>> Wouldn't the best strategy be just to say straight out that, while
> >>> we support these runs being done, they aren't particularly
> >>> interesting for attribution, and certainly not for attribution of
> >>> changes in extremes, nor for the very closely related problem of
> >>> near-term prediction of trends in extremes?
> >>> For that we need a different set of experiments. If people care
> >>> about understanding and predicting changes in extremes, they can
> >>> allocate time accordingly. My concern is that if we let people think
> >>> of these runs (which will be very expensive) as "the attribution
> >>> experiments", they will (a) expect us to generate results from them
> >>> and (b) object to us asking for other experiments.
> >>>
> >>> Myles
> >>>
> >>> -----Original Message-----
> >>> From: Gabi Hegerl [mailto:gabi.hegerl@ed.ac.uk]
> >>> Sent: Friday, September 14, 2007 5:09 PM
> >>> To: JKenyon
> >>> Cc: Bamzai, Anjuli; Myles Allen; Tim Barnett; Nathan; Phil Jones;
> >>> David Karoly; knutti@ucar.edu; Tom Knutson; Toru Nozawa; Doug
> >>> Nychka; Claudia Tebaldi; Ben Santer; Richard Smith; Daithi Stone;
> >>> Stott, Peter; Michael Wehner; Xuebin Zhang; Francis Zwiers;
> >>> hvonstorch@web.de; Amthor, Jeff; Chris Miller
> >>> Subject: near term climate change
> >>>
> >>> Hi all,
> >>>
> >>> I was at the WGCM meeting last week, and the issue of saving 20th
> >>> century runs and high-resolution runs was only discussed marginally
> >>> amid the big worry about scenarios and the carbon cycle. However,
> >>> there seems to be a lot of momentum to do initial-value forced
> >>> predictions. I think it would be very good to get predictions for
> >>> AR5 based on various techniques, including attributable GHG and
> >>> initial values. So Tim Stockdale and I hammered out this proposal
> >>> (with some suggested edits by me, but those are still subject to
> >>> Tim's OK) - this may sound like it's going way too far down the
> >>> initial-value trail for our interests, but it tries to serve all
> >>> kinds of communities able to do some form of prediction.
> >>> Comments welcome. Peter has a colleague going to a meeting in the
> >>> Netherlands next week where this issue will be discussed further, so
> >>> having a view by, say, this weekend or Monday would be particularly
> >>> good.
> >>>
> >>> Gabi
> >>>
> >>> --
> >>> Dr Gabriele Hegerl
> >>> School of GeoSciences
> >>> The University of Edinburgh
> >>> Grant Institute, The King's Buildings
> >>> West Mains Road
> >>> EDINBURGH EH9 3JW
> >>> Phone: +44 (0) 131 6519092, FAX: +44 (0) 131 668 3184
> >>> Email: Gabi.Hegerl@ed.ac.uk
> >>
> >> --
> >> Gabriele Hegerl
> >> School of GeoSciences
> >> University of Edinburgh
> >> IN TRANSIT FROM DUKE, BOTH EMAILS WORK. Physically in Edinburgh Aug 10
> >>
> >> --
> >> The University of Edinburgh is a charitable body, registered in
> >> Scotland, with registration number SC005336.
> >
> > -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> > AOPP, Department of Physics, University of Oxford, U.K.
> > Tyndall Centre for Climate Change Research, U.K.
> > MAIL: Dáithí Stone, AOPP, Department of Physics, University of Oxford,
> > Clarendon Laboratory, Parks Road, Oxford OX1 3PU, United Kingdom
> > TELEPHONE: 44-1865-272342 FACSIMILE: 44-1865-272923
> > E-MAIL: stoned@atm.ox.ac.uk
> > WEBPAGE: http://www.atm.ox.ac.uk/user/stoned/
> > -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
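The two quantitative claims running through the thread - Myles's question about what 10-member ensembles can say about 100-year return-time events, and Dáithí's ~50% chance of a big eruption in the next couple of decades - can be checked with back-of-envelope arithmetic. The sketch below is illustrative only: the 20-year window and the one-eruption-per-~30-years rate are assumptions made here for the calculation, not figures taken from the emails.

```python
import math

def expected_events(members, years, return_period):
    """Expected count of return-period-T events in an ensemble of
    `members` runs each covering `years` years (independent years)."""
    return members * years / return_period

def prob_no_event(members, years, return_period):
    """Chance the ensemble samples no such event at all."""
    return (1.0 - 1.0 / return_period) ** (members * years)

# Myles's example: 10 members over an assumed ~20-year window,
# looking for 100-year events.
n_expected = expected_events(10, 20, 100)   # 2.0 events expected
p_none = prob_no_event(10, 20, 100)         # ~0.13: a 13% chance of none

# Dáithí's eruption figure: assuming big eruptions average one per
# ~30 years, the Poisson chance of at least one in 20 years is ~49%.
p_eruption = 1.0 - math.exp(-20.0 / 30.0)
```

With only ~2 events expected, and a sizeable chance of sampling none, such an ensemble cannot meaningfully constrain 100-year return periods; and a one-per-30-years rate does reproduce a roughly 50% eruption probability over two decades.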