date: Fri May 12 19:06:12 2000
from: Mike Hulme
subject: Re: climate scenarios Brazil
to: Maarten Krol

Maarten,

See answers below .........

At 15:23 05/05/00 +0100, you wrote:
>Dear Mike,
>
>Working in a project on the integrated modelling of water availability
>and socio-economic impacts in NE Brazil under, among other pressures,
>climate change, I am very interested in the paper 'Cenarios de
>alteracoes climaticas para o Brasil' of CRU/WWF, also because in our
>scenario work we plan a regional interpretation of the SRES scenarios.
>I have a few methodological questions on the paper and would appreciate
>your response.
>
>- in the MAGICC simulations in Fig. 3, the simulations appear to differ
>from about 1970 or 1975; moreover, A1 and B2 lie close together from
>the start, whereas from the CO2 concentrations a 'clustering' of A1+A2
>and B1+B2 would be expected (or do CH4 or sulphur make the difference
>here?)

The reason is that for A2 we used a climate sensitivity of 4.5C and for B1 a sensitivity of 1.5C. For A1 and B2 we used a 2.5C sensitivity. The aim was to span the range of possible outcomes better. This obviously causes the curves to diverge sooner, even though the CO2 concentrations may be similar.

>
>- in choosing a GCM for depicting plausible regional precipitation
>changes, one would expect to take the ability of the GCM to reproduce
>present conditions as the main criterion (as Declan Conway did in his
>Nile study, where I was involved in linking it with the IMAGE model),
>rather than taking a median from the full set of GCMs. For NE Brazil,
>e.g., it would be crucial that the GCM simulates semi-aridity. In our
>project (WAVES) the climate group studied one GCM run, a transient run
>of ECHAM4, in considering climate change in NE Brazil. This GCM
>simulates precipitation season and amount reasonably for the present
>climate, though not much better than reasonably, and shows a clear
>reduction in the scenario run. Of course, it would interest me which
>GCMs/runs are the ones appearing in Fig. 6.
>

There are lots of ways of deciding how to select, use or combine GCM results in scenarios. In our WWF work we used the median response of 7 GCMs on the IPCC DDC rather than a single model.
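By way of illustration, that combination step amounts to taking a gridpoint-wise median across the model anomaly patterns once they are on a common grid. A minimal sketch follows (the array shapes and data are invented for illustration, not taken from MAGICC/SCENGEN):

    import numpy as np

    # Hypothetical precipitation-change patterns (% change) from 7 GCMs,
    # already regridded to a common lat/lon grid: (n_models, nlat, nlon).
    rng = np.random.default_rng(0)
    patterns = rng.normal(0.0, 10.0, size=(7, 36, 72))

    # Gridpoint-wise median across the model dimension. The median is
    # less sensitive to a single outlying model than the mean.
    median_response = np.median(patterns, axis=0)

    print(median_response.shape)  # (36, 72)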
See the extract below from our chapter in the current IPCC report that discusses some of these issues (not to be cited).

____________________________

From IPCC TAR Chapter 13 WGI - NOT TO BE CITED

13.5.2 Approaches for Representing Uncertainties

There are different approaches for representing each of the above five generic sources of uncertainty when constructing climate scenarios. The cascade of uncertainties, and the multiple options for representing them at each of the five stages, can result in a wide range of climate outcomes in the finally constructed scenarios (Henderson-Sellers, 1996; Wigley, 1999; Visser et al., 2000). Multiple choices have most commonly been made at the stage of modelling the climate response to a given forcing, where it is common for a set of climate scenarios to use alternative GCM results. The alternative results are usually provided by different GCMs, but an ensemble of simulations from one model may also be used.

In practice, this sequential and conditional approach to representing uncertainty in climate scenarios has at least one severe limitation. Typically, at each stage of the cascade only a limited range of the conditional possibilities has been explicitly modelled. For example, GCM experiments have used one, or only a small number, of the concentration scenarios that are plausible (for example, the transient AOGCM experiments provided by the IPCC Data Distribution Centre have all been forced with a future concentration scenario of 1% per annum growth in greenhouse gas concentration). Similarly, regionalisation techniques have been applied to only a small number of the GCM experiments that have been conducted. These limitations restrict the choices that can be made in climate scenario construction and often mean that climate scenarios do not fully represent the uncertainties inherent in climate prediction.

In order to overcome some of these limitations, a range of techniques has been developed to allow more flexible treatment of the entire cascade of uncertainty. These techniques manipulate or combine different modelling results in a variety of ways. If we are truly to assess the risk of climate change being dangerous, then impacts and adaptation studies need scenarios that span a very substantial part of the possible range of future climates (Pittock, 1993; Parry et al., 1996; Risbey, 1998; Jones, 1999; Hulme and Carter, 2000). The remainder of this section therefore assesses four aspects of climate scenario development that originate from this concern about adequately representing uncertainty:

1. scaling climate response patterns across a range of forcing scenarios;
2. defining appropriate climate change signals;
3. risk assessment approaches;
4. annotation of climate scenarios to reflect more qualitative aspects of uncertainty.

13.5.2.1 Scaling climate model response patterns

Pattern-scaling involves the scaling of normalised climate change response patterns derived from GCMs by estimates of global-mean temperature change derived from simple climate models. This approach allows a wider range of possible future forcings (e.g., the IS92 or SRES emissions scenarios) and climate sensitivities (e.g., the 1.5°C to 4.5°C IPCC range) to be represented in climate scenarios than if only the direct results from GCM experiments were used. The response patterns are normalised by a denominator which is estimated using the simple climate model and which acts as the scalar. This is usually the global-mean temperature change, although in some cases zonal-mean temperature profiles have been used.

This pattern-scaling method was first suggested by Santer et al. (1990) and was employed in the IPCC First Assessment Report to generate climate scenarios for the year 2030 (Mitchell et al., 1990; pp. 155-158) using patterns from 2xCO2 GCM experiments. It has subsequently been widely adopted in Climate Scenario Generators (CSGs), and other climate scenario construction exercises, starting with ESCAPE (Rotmans et al., 1994), IMAGE-2 (Alcamo et al., 1994), SCENGEN (Hulme et al., 1995a,b) and COSMIC (Schlesinger et al., 1997). Other CSGs that use this technique include OZCLIM (CSIRO, 1996), CLIMPACTS (Warrick et al., 1996; Kenny et al., 2000), BDCLIM (Warrick et al., 1996) and PACCLIM (Kenny et al., 1999).
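The scaling operation itself is simple; a minimal sketch of it follows (variable names and numbers are ours, for illustration only; in practice the pattern would come from a GCM experiment and the scalar from a simple climate model such as MAGICC):

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical GCM temperature-change pattern (deg C), shape
    # (nlat, nlon), and the global-mean warming in that same experiment,
    # which acts as the normalising denominator.
    gcm_pattern = rng.normal(3.0, 1.0, size=(36, 72))
    gcm_global_mean = 2.8  # deg C, illustrative

    # Normalised (dimensionless) response pattern: local change per
    # deg C of global-mean change.
    norm_pattern = gcm_pattern / gcm_global_mean

    # Global-mean warming for some other forcing scenario and climate
    # sensitivity, as estimated by a simple climate model.
    simple_model_dT = 1.9  # deg C, illustrative

    # Scaled scenario pattern: the GCM's regional structure, rescaled
    # to the simple model's global-mean warming.
    scenario_pattern = simple_model_dT * norm_pattern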
Two fundamental assumptions of this technique are, first, that the defined response patterns adequately depict the climate "signal" under anthropogenic forcing (see section 13.5.2.2) and, second, that these response patterns are representative across a wide range of possible anthropogenic forcings. These assumptions have been explored by Mitchell et al. (1999), who examined the effect of scaling decadal, ensemble-mean temperature and precipitation patterns in the suite of HadCM2 experiments. Although their dimensionless response patterns were defined using only 10-year means, their use of four-member ensemble means improved the performance of the technique when applied to reconstructing climate response patterns in AOGCM experiments forced with alternative scenarios (see Figure 13.7). This confirms earlier work by Oglesby and Saltzman (1992), among others, who demonstrated that temperature response patterns derived from equilibrium GCMs were fairly uniform over a wide range of concentrations, scaling linearly with global-mean temperature. The main exception occurred in the regions of enhanced response near sea-ice and snow margins. Mitchell et al. (1999) concluded that the uncertainties introduced by scaling ensemble decadal-mean temperature patterns across different forcing scenarios are smaller than those due to the model's internal variability, although this conclusion may not hold for variables with high spatial variability such as precipitation.

One situation where the scaling technique may need more cautious treatment is the case of stabilisation-forcing scenarios (see Chapter 9). Whetton et al. (1998b) have shown that, for parts of the Southern Hemisphere, an AOGCM forced with a stabilisation scenario produced a highly non-linear regional rainfall response that could not easily be handled using a linear scaling technique.

[INSERT FIGURE 13.7 HERE]

Where the technique is applied to scenarios incorporating the effects of aerosol forcing, there is some evidence to suggest that separate greenhouse gas and aerosol response patterns can be assumed to be additive (Ramaswamy and Chen, 1997). Nevertheless, similar global-mean warmings can be associated with quite different regional patterns depending on the magnitude and pattern of the aerosol forcing. This concern was addressed by Schlesinger et al. (1997; 2000), who deconstructed the global climate response to aerosol forcing into six regional responses, each of which was then re-combined using global weights based on the unique regional pattern of aerosol forcing embedded in a given anthropogenic emissions scenario.

The above discussion demonstrates that pattern-scaling techniques provide a low-cost alternative to expensive AOGCM and RCM experiments for creating a range of climate scenarios that embrace uncertainties relating to different emissions, concentration and forcing scenarios and to different climate model responses. The technique almost certainly performs best in the case of surface air temperature and in cases where the response pattern has been constructed so as to maximise the signal-to-noise ratio. When climate scenarios are needed that include the effects of sulphate aerosol forcing, pattern-scaling methods may still be applied, but regionally differentiated patterns and scalars must be defined. It must be remembered, however, that while these approaches are a convenient way of handling several types of uncertainty simultaneously, they introduce an uncertainty of their own into climate scenarios that has not been thoroughly explored for a wide range of climate variables. Neither has much work been done on exploring whether patterns of change in interannual or inter-daily climate variability are amenable to scaling methods.
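A toy sketch of the additive-pattern idea described above follows. This is our own illustration of the general scheme, not Schlesinger et al.'s code; the six regions, weights and patterns are all invented:

    import numpy as np

    rng = np.random.default_rng(2)
    shape = (36, 72)  # illustrative lat/lon grid

    # Normalised greenhouse-gas response pattern, to be scaled by a
    # simple model's global-mean GHG-only warming (both invented).
    ghg_pattern = rng.normal(1.0, 0.3, size=shape)
    dT_ghg = 2.0  # deg C

    # Six regional aerosol response patterns and scenario-dependent
    # weights; the weights would reflect the regional aerosol forcing
    # embedded in a given emissions scenario.
    aero_patterns = rng.normal(-0.5, 0.2, size=(6, *shape))
    weights = np.array([0.3, 0.2, 0.15, 0.15, 0.1, 0.1])

    # Assuming additivity of GHG and aerosol responses (Ramaswamy and
    # Chen, 1997): total change = scaled GHG pattern + weighted sum of
    # the regional aerosol patterns.
    total_change = dT_ghg * ghg_pattern + np.einsum(
        "r,rij->ij", weights, aero_patterns
    )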
13.5.2.2 Defining climate change signals

The question of signal-to-noise ratios in climate model simulations was alluded to above, and has also been discussed in Chapters 9 and 12. In Chapter 9, the climate "signal" and climate "noise" received the designations Tf and T″, respectively. We adopt this usage here. The treatment of Tf and T″ in constructing climate scenarios is of great importance in interpreting the results of impact assessments that make use of these scenarios. If climate scenarios contain an unspecified combination of Tf plus T″, then it is important to recognise that the impact response to such scenarios will only partly be a response to anthropogenic climate change; an unspecified part of the impact response will be related to natural internal climate variability. However, if the objective is to specify the impacts of Tf alone, then there are two possible strategies of climate scenario construction:

- attempt to maximise Tf and minimise T″;
- do not try to disentangle Tf from T″, but supply impact assessments with climate scenarios containing both elements and also companion descriptions of future climate that contain only T″, thus allowing impact assessors to generate their own impact signal-to-noise ratios (Hulme et al., 1999a).

The relative strength of Tf/T″ can be demonstrated in a number of ways. Where response patterns are reasonably stable over time, Tf/T″ can be maximised by using long (30-year) averaging periods. Alternatively, regression or principal component techniques may be used to extract Tf from the model response, Tf plus T″ (Hennessy et al., 1998). A third technique is to use results from multi-member ensemble simulations, as first performed by Cubasch et al. (1994). Sampling theory shows that T″ is reduced by a factor of √n, where n is the ensemble size. Using results from the HadCM2 four-member ensemble experiments, Giorgi and Francisco (2000), for example, suggest that the uncertainty in future regional climate change associated with internal climate variability (T″) at sub-continental scales (10^7 km2) is generally smaller than the uncertainty associated with inter-model or forcing differences. This conclusion is scale- and variable-dependent, however (see Chapter 9, Figure 9.5; also Räisänen, 1999), and the inverse may apply at the smaller scales (10^4 to 10^5 km2) at which many impact assessments are conducted. Further work is needed to resolve this issue for climate scenario construction purposes.

A different way of maximising Tf is to combine the responses of single realisations from experiments completed using different models. If the error for different models is random with zero mean, then sampling theory shows that this model average will yield a better estimate of Tf than any single model realisation. This approach was first suggested in the context of climate scenarios by Santer et al. (1990). Treating different GCM simulations in this way, i.e., as members of a pseudo-ensemble, is one way of possibly defining a more robust climate change signal, either for use in pattern-scaling techniques or directly in constructing a climate scenario. The approach has been discussed by Räisänen (1997) and used recently by Wigley (1999), Hulme and Carter (1999b) (see Figure 13.8) and Carter et al. (2000) in providing regional characterisations of the SRES emissions scenarios. It should be noted, however, that this approach will not allow the full representation of model response uncertainty, since some of the model-to-model differences will be systematic rather than random.

[INSERT FIGURE 13.8 HERE]
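A small numerical illustration of the √n noise reduction follows (purely synthetic data, not output from any GCM; the signal and noise magnitudes are invented):

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic "ensemble": a common forced signal Tf plus independent
    # internal-variability noise T″ in each of n members.
    n = 4              # ensemble size (cf. the HadCM2 four-member runs)
    tf = 1.5           # deg C, the forced signal
    noise_sd = 0.5     # deg C, standard deviation of T″
    members = tf + rng.normal(0.0, noise_sd, size=(n, 10000))

    # The ensemble mean reduces the noise standard deviation by ~√n.
    ens_mean = members.mean(axis=0)
    print(ens_mean.std())         # ~ noise_sd / sqrt(n) = 0.25
    print(noise_sd / np.sqrt(n))  # 0.25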
The second strategy requires that the noise component (T″) be defined explicitly. This can be done relying either on observed climate data or on model-simulated natural climate variability (Hulme et al., 1999a; Carter et al., 2000). Neither approach is ideal. Observed climate data may often be of short duration and may therefore yield a biased estimate of T″. Multi-decadal internal climate variability can be extracted from multi-century unforced climate simulations such as those performed by a number of modelling groups (e.g., Stouffer et al., 1994; von Storch et al., 1997; Tett et al., 1997). In using AOGCM output in this way, it is important not only to demonstrate that these unforced simulations do not drift significantly (Osborn, 1996), but also to evaluate the extent to which model estimates of low-frequency variability are comparable to those estimated from measured climates (Osborn et al., 2000) or reconstructed palaeoclimates (Jones et al., 1998). Furthermore, anthropogenic forcing may alter the character of multi-decadal variability, and therefore T″ defined from model control simulations may not apply in the future.

_________________________

>My colleagues from the climate group at PIK involved in WAVES (Dr
>Gerstengarbe, Werner, Oesterle) prepared a downscaling of the ECHAM4
>results for our project region (the states of Ceara and Piaui) using
>their climate database (mainly precipitation) for NE Brazil.
>
>Recognizing your outstanding work on precipitation data, my colleagues
>from the climate group and I would like to invite you to visit PIK to
>discuss precipitation/climate issues.
>

I am hoping to visit PIK in June (21-22) in connection with the MIT/PIK seminar. If so, we could talk then.

Regards,

Mike