Data collection, analysis, and interpretation: Weather and climate



This material is excerpted from a teaching module on the Visionlearning website. To view this material in context, please visit Data: Analysis and Interpretation.

The weather has long been a subject of widespread data collection, analysis, and interpretation. Accurate measurements of air temperature became possible in the early 1700s with Gabriel Fahrenheit's invention of the first standardized mercury thermometer in 1714 (see our Temperature module). Air temperature, wind speed, and wind direction are all critical navigational information for sailors on the ocean, but in the late 1700s and early 1800s, as sailing expeditions became common, this information was not easy to come by. The lack of reliable data was of great concern to Matthew Fontaine Maury, the superintendent of the Depot of Charts and Instruments of the U.S. Navy. As a result, Maury organized the first international Maritime Conference, held in Brussels, Belgium, in 1853. At this meeting, international standards for taking weather measurements on ships were established, and a system for sharing this information between countries was founded. Defining uniform data collection standards was an important step in producing a truly global dataset of meteorological information, allowing data collected by many different people in different parts of the world to be gathered together into a single database. Maury's compilation of sailors' standardized data on wind and currents is shown in Figure 1 (see the Research links for the original text). This early international cooperation and investment in weather-related data collection has produced a valuable long-term record of air temperature that goes back to the 1850s.

Figure 1: Plate XV from Maury, Matthew F. 1858. The Winds. Chapter in: Explanations and Sailing Directions. Washington: Hon. Isaac Toucey.

This vast store of information is considered "raw" data: tables of numbers (dates and temperatures), descriptions (cloud cover), locations, etc. Raw data can be useful in and of itself – for example, if you wanted to know the air temperature in London on June 5th, 1801. But the data alone cannot tell you anything about how temperature has changed in London over the past two hundred years, or how that information is related to global-scale climate change. In order to see patterns and trends in the data, they must first be analyzed and interpreted. The analyzed and interpreted data may then be used as evidence in scientific arguments, to support a hypothesis or a theory.

Good data is a potential treasure trove – it can be mined by scientists at any time – and thus an important part of any scientific investigation is accurate and consistent recording of data and the methods used to collect that data. The weather data collected since the 1850s has been just such a treasure trove, based in part upon the standards established by Matthew Maury. These standards provided guidelines for data collection and recording that ensured consistency within the dataset. At the time, ship captains were able to use the data to determine the most reliable routes to sail across the oceans. Many modern scientists studying climate change have taken advantage of this same dataset to understand how global air temperatures have changed over the recent past. In neither case can one simply look at the table of numbers and observations and answer the question – which route to take, or how global climate has changed. Instead, both questions require analysis and interpretation of the data.

Though it may sound simple to take 150 years of air temperature data and describe how global climate has changed, the process of analyzing and interpreting that data is actually quite complex. Consider the range of temperatures around the world on any given day in January (see Figure 2): in Johannesburg, South Africa, where it is summer, the air temperature can reach 35° C (95° F), while in Fairbanks, Alaska, at that same time of year, it is the middle of winter and air temperatures might be -35° C (-31° F). Now consider that over huge expanses of the ocean, no consistent measurements are available at all. One could simply take an average of all of the available measurements for a single day to get a global air temperature average for that day, but that number would not take into account the natural variability within those measurements or their uneven distribution around the globe.

Figure 2: Satellite image composite of average air temperatures (in degrees Celsius) across the globe on January 2, 2008 (http://www.ssec.wisc.edu/data/).

image ©University of Wisconsin-Madison Space Science and Engineering Center
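
To see why a simple average of every available reading can mislead, consider a small worked example. The numbers below are invented for illustration, echoing the Johannesburg and Fairbanks contrast described above; a minimal Python sketch:

```python
# Hypothetical January readings (degrees C); values invented to echo the
# Johannesburg/Fairbanks contrast described above.
warm_stations = [34.0, 35.5, 33.8, 36.1, 34.9]  # five stations in one warm region
cold_stations = [-35.0]                          # a single station in a cold region

all_readings = warm_stations + cold_stations
naive_mean = sum(all_readings) / len(all_readings)
print(f"Naive mean of all readings: {naive_mean:.1f} C")  # ~23.2 C

# If the two regions cover comparable areas, averaging each region first
# gives a very different, more representative number.
regional_means = [sum(r) / len(r) for r in (warm_stations, cold_stations)]
balanced_mean = sum(regional_means) / len(regional_means)
print(f"Mean of regional means:     {balanced_mean:.1f} C")  # ~-0.1 C
```

Because the warm stations outnumber the cold one six to one, the naive mean is pulled far toward the warm region, even though both regions are equally real parts of the globe.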

Defining a single global average temperature requires scientists to make several decisions about how to process all of that data into a meaningful set of numbers. In 1986, climatologists Phil Jones, Tom Wigley, and Peter Wright published one of the first attempts to assess changes in global mean surface air temperature from 1861 to 1984 (Jones, Wigley, & Wright, 1986). The majority of their paper – three out of five pages – describes the processing techniques they used to correct for the problems and inconsistencies in the historical data that would not be related to climate. For example, the authors note that "early SSTs [sea surface temperatures] were measured using water collected in uninsulated, canvas buckets, while more recent data come either from insulated bucket or cooling water intake measurements, with the latter considered to be 0.3-0.7° C warmer than uninsulated bucket measurements." Correcting for this bias may seem simple – just adding ~0.5° C to early canvas bucket measurements – but it becomes more complicated than that because, as the authors continue, the majority of SST data does not include a description of what kind of bucket or system was used.
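
As a rough illustration of this kind of bias correction, the sketch below applies a flat adjustment, the middle of the 0.3-0.7° C range quoted above, to canvas bucket readings. The method labels, the +0.5° C value, and the handling of unlabeled records are illustrative assumptions, not the actual Jones et al. procedure:

```python
# The method labels, the flat +0.5 C adjustment (the middle of the quoted
# 0.3-0.7 C range), and the handling of unlabeled records below are all
# illustrative assumptions, not the actual Jones et al. procedure.
CANVAS_BUCKET_BIAS = 0.5  # degrees C

def correct_sst(temperature, method):
    """Adjust a sea surface temperature reading for its collection method."""
    if method == "canvas_bucket":
        # Uninsulated canvas buckets cool by evaporation before the reading
        # is taken, so the recorded value is too low.
        return temperature + CANVAS_BUCKET_BIAS
    if method in ("insulated_bucket", "engine_intake"):
        return temperature  # treated here as the reference standard
    # Much of the historical record never says which method was used,
    # which is the hard part the authors describe; flag it rather than guess.
    raise ValueError(f"unknown collection method: {method!r}")

print(correct_sst(18.2, "canvas_bucket"))  # 18.7
print(correct_sst(18.2, "engine_intake"))  # 18.2
```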

Similar problems were encountered with marine air temperature data. Historical air temperature measurements over the ocean were taken aboard ships, but the type and size of a ship could affect the measurement because size "determines the height at which observations were taken." Air temperature can change rapidly with height above the ocean. The authors therefore applied a correction for ship size in their data. Once Jones, Wigley, and Wright had made several of these kinds of corrections, they analyzed their data using a spatial averaging technique that placed measurements within grid cells on the Earth's surface in order to account for the fact that there were many more measurements taken on land than over the oceans. Developing this grid required many decisions based on their experience and judgment, such as how large each grid cell needed to be and how to distribute the cells over the Earth. They then calculated the mean temperature within each grid cell, and combined all of these means to calculate a global average air temperature for each year. Statistical techniques such as averaging are commonly used in the research process and can help identify trends and relationships within and between datasets (see our Data: Statistics module).
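
A minimal sketch of spatial averaging on a grid might look like the following. The 5-degree cell size and the unweighted averaging of cells are assumptions made for illustration; as noted above, the actual grid design involved many more judgment calls:

```python
from collections import defaultdict

CELL_SIZE = 5.0  # degrees of latitude/longitude per grid cell (assumed)

def grid_average(measurements):
    """measurements: iterable of (latitude, longitude, temperature) tuples."""
    cells = defaultdict(list)
    for lat, lon, temp in measurements:
        # Assign each reading to a grid cell so that a dense cluster of
        # readings cannot dominate the global mean.
        cell = (int(lat // CELL_SIZE), int(lon // CELL_SIZE))
        cells[cell].append(temp)
    # Average within each cell first, then across cells.
    cell_means = [sum(temps) / len(temps) for temps in cells.values()]
    return sum(cell_means) / len(cell_means)

readings = [
    (51.5, -0.1, 11.0),   # three readings crowded around London...
    (51.6, -0.2, 11.4),
    (51.4, -0.3, 10.8),
    (-33.9, 18.4, 22.5),  # ...and a single reading near Cape Town
]
# The London cluster collapses into one cell, so the result is ~16.8 C,
# whereas a naive mean of all four readings would give ~13.9 C.
print(f"{grid_average(readings):.1f} C")
```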

Once these spatially averaged global mean temperatures were calculated, the authors compared the means over time, from 1861 to 1984. A common method for analyzing data that occurs in a series, such as temperature measurements over time, is to look at anomalies, or differences from a pre-defined reference value. In this case, the authors compared their temperature values to the mean of the years 1970-1979 (see Figure 3). This reference mean is subtracted from each annual mean to produce the jagged lines in Figure 3, which display positive or negative anomalies (values greater or less than zero). Though this may seem an indirect or overly complex way to display these data, it is useful because the goal is to show change in mean temperatures rather than absolute values.
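
In code, computing anomalies is a simple subtraction. The sketch below uses invented annual means together with the 1970-1979 reference period described above:

```python
# Invented annual global mean temperatures (degrees C), keyed by year.
annual_means = {
    1968: 14.02, 1969: 13.98, 1970: 14.05, 1971: 13.95, 1972: 14.00,
    1973: 14.10, 1974: 13.92, 1975: 13.98, 1976: 13.90, 1977: 14.08,
    1978: 14.04, 1979: 14.06, 1980: 14.20, 1981: 14.22,
}

# Reference value: the mean over the 1970-1979 period, as in the paper.
reference = sum(annual_means[year] for year in range(1970, 1980)) / 10

# Anomaly = annual mean minus reference.  Positive values are warmer than
# the reference decade, negative values cooler.
anomalies = {year: temp - reference for year, temp in annual_means.items()}

print(f"{1969}: {anomalies[1969]:+.2f} C")  # -0.03 C
print(f"{1980}: {anomalies[1980]:+.2f} C")  # +0.19 C
```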

Putting data into a visual format can facilitate additional analysis (see our Data: Using Graphs and Visual Data module). Figure 3 shows a lot of variability in the data: there are a number of spikes and dips in global temperature throughout the period examined. It can be challenging to see trends in data with so much variability; our eyes are drawn to the extreme values in the jagged lines, like the large spike in temperature around 1876 or the significant dip around 1918. However, these extremes do not necessarily reflect long-term trends in the data. In order to see long-term patterns and trends more clearly, Jones and his co-authors applied another processing technique: a filter, calculated as a 10-year running average, that smooths the data. The smooth lines in the graph represent the filtered data; they follow the data closely but do not reach the extreme values.
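
A running average of this kind can be computed by sliding a fixed window along the series and averaging within it. The sketch below uses a 10-year window and an invented anomaly series:

```python
def running_average(values, window=10):
    """Return the running mean of `values` over the given window size."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# An invented anomaly series (degrees C) with plenty of year-to-year noise.
anomalies = [0.1, -0.3, 0.4, -0.1, 0.0, 0.2, -0.4, 0.1, 0.3, -0.2, 0.5, -0.1]

smoothed = running_average(anomalies)
print([round(x, 2) for x in smoothed])  # [0.01, 0.05, 0.07]
# The extremes (0.5 and -0.4) never appear in the smoothed series, which
# is why the smooth line in Figure 3 misses the spikes and dips.
```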

Data processing and analysis are sometimes misinterpreted as manipulating data to achieve the desired results, but in reality, the goal of these methods is to make the data clearer, not to change it fundamentally. As described above, scientists report the data processing and analysis methods they use in addition to the data itself when they publish their work (see our Scientific Writing I: Understanding Scientific Journals and Articles module), allowing their peers the opportunity to assess both the raw data and the techniques used to analyze them.

The analyzed data can then be interpreted and explained. In general, when scientists interpret data, they attempt to explain the patterns and trends uncovered through analysis, bringing all of their background knowledge, experience, and skills to bear on the question and relating their data to existing scientific ideas. Given the personal nature of the knowledge they draw upon, this step can be subjective, but that subjectivity is scrutinized through the peer review process (see our Scientific Writing II: Peer Review module). Based on the smoothed curves, Jones, Wigley, and Wright interpreted their data to show a long-term warming trend. They note that the three warmest years in the entire dataset are 1980, 1981, and 1983. They do not go further in their interpretation to suggest possible causes for the temperature increase, however; they merely state that the results are "extremely interesting when viewed in the light of recent ideas of the causes of climate change."

The data presented in this study were widely accepted throughout the scientific community, in large part due to the authors' careful description of the data and their process of analysis. Through the 1980s, however, a few scientists remained skeptical about the interpretation of a warming trend. In 1990, Richard Lindzen, a meteorologist at the Massachusetts Institute of Technology, published a paper expressing his concerns with the warming interpretation (Lindzen, 1990). Lindzen highlighted several issues that he believed weakened the arguments for global temperature increases. First, he argued that the data collection was inadequate, suggesting that the existing network of data collection stations was not sufficient to correct for the uncertainty inherent in data with so much natural variability (consider how different the weather is in Antarctica and the Sahara Desert on any given day). Second, he argued that the data analysis was faulty, and that the substantial gaps in coverage, particularly over the ocean, raised questions regarding the ability of such a dataset to adequately represent the global system. Finally, Lindzen suggested that the interpretation of the global mean temperature data was inappropriate, and that there was no trend in the data. He noted a decrease in the mean temperature from 1940 to 1970, at a time when atmospheric CO2 levels, a proposed cause for the temperature increases, were increasing rapidly. In other words, Lindzen brought a different background and set of experiences and ideas to bear on the same dataset, and came to very different conclusions.

This type of disagreement is common in science, and generally leads to more data collection and research. In fact, the differences in interpretation over the presence or absence of a trend motivated climate scientists to extend the temperature record in both directions – going back further into the past and continuing forward with the establishment of dedicated weather stations around the world. In 1998, Michael Mann, Raymond Bradley, and Malcolm Hughes published a paper that greatly expanded the record originally cited by Jones, Wigley, and Wright (Mann, Bradley, & Hughes, 1998). Of course, they were not able to use air temperature readings from thermometers to extend the record back to 1000 CE; instead, the authors used data from other sources that could provide information about air temperature to reconstruct past climate, like tree ring width, ice core data, and coral growth records (Fig. 4, blue line).

Figure 4: Differences between annual mean temperature and mean temperature during the reference period 1961-1990. The blue line represents data from tree ring, ice core, and coral growth records; the orange line represents data measured with modern instruments. Graph adapted from Mann et al., published in the IPCC Third Assessment Report.

image ©IPCC

Mann, Bradley, and Hughes used many of the same analysis techniques as Jones and co-authors, such as applying a ten-year running average, and in addition, they included measurement uncertainty on their graph: the gray region shown in Figure 4. Reporting error and uncertainty for data does not imply that the measurements are wrong or faulty – in fact, just the opposite is true. The magnitude of the error describes how confident the scientists are in the accuracy of the data, so bigger reported errors indicate less confidence (see our Data: Uncertainty, Error, and Confidence module). They note that the magnitude of the uncertainty increases going further back in time, but becomes more tightly constrained around 1900. In their interpretation, the authors describe several trends they see in the data: several warmer and colder periods throughout the record (for example, compare the data around year 1360 to 1460 in Figure 4), and a pronounced warming trend in the twentieth century. In fact, they note that "almost all years before the twentieth century [are] well below the twentieth-century mean," and these show a linear trend of decreasing temperature (Fig. 4, pink dashed line). Interestingly, where Jones et al. reported that the three warmest years were all within the last decade of their record, the same is true for the much more extensive dataset: Mann et al. report that the warmest years in their dataset, which runs through 1998, were 1990, 1995, and 1997.
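
As a loose illustration of why sparse early data produce wider uncertainty bands, the sketch below reports a mean with a simple standard error for a small and a large set of estimates. Real reconstruction uncertainties are derived quite differently, and all values here are invented:

```python
import statistics

def mean_with_error(estimates):
    """Return the mean of a set of estimates and a simple standard error."""
    mean = statistics.fmean(estimates)
    std_error = statistics.stdev(estimates) / len(estimates) ** 0.5
    return mean, std_error

# Invented anomaly estimates (degrees C): a few for an early year, many
# for a recent one.
sparse_early = [-0.4, 0.1, -0.6]
dense_recent = [-0.10, 0.00, -0.05, 0.05, -0.10,
                0.00, -0.05, 0.05, 0.00, -0.02]

for label, estimates in [("early", sparse_early), ("recent", dense_recent)]:
    mean, err = mean_with_error(estimates)
    print(f"{label}: {mean:+.2f} C +/- {err:.2f}")
# The early value carries a much wider error bar, matching the widening
# gray region toward the left of Figure 4.
```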

The debate over the interpretation of data related to climate change, as well as the interest in the consequences of these changes, has led to an enormous increase in the number of scientific research studies addressing climate change, and multiple lines of scientific evidence now support the conclusions initially made by Jones, Wigley, and Wright in the mid-1980s. All of these results are summarized in the Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC), released to the public in 2007 (IPCC, 2007). Based on the agreement between these multiple datasets, the team of contributing scientists wrote that "Warming of the climate system is unequivocal, as is now evident from observations of increases in global average air and ocean temperatures, widespread melting of snow and ice, and rising global average sea level." The short phrase "now evident" reflects the accumulation of data over time, including the most recent data up to 2007.

A higher level of data interpretation involves determining the reason for the temperature increases. The AR4 goes on to say that "Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations." This statement relies on many data sources in addition to the temperature data, including data as diverse as the timing of the first appearance of tree buds in spring, greenhouse gas concentrations in the atmosphere, and measurements of isotopes of oxygen and hydrogen from ice cores. Analyzing and interpreting such a diverse array of datasets requires the combined expertise of the many scientists who contributed to the IPCC report. This type of broad synthesis of data and interpretation is critical to the process of science, highlighting how individual scientists build on the work of others and potentially inspiring collaboration for further research between scientists in different disciplines.


