The BEST is the Worst: Global Temperature Measures Redux; Not!
Anyone who knows anything about global climate knows the Earth has generally warmed since the 1680s. Politics has made that period, which covers the Industrial Revolution, of climatic interest as people wanted to prove that human production of CO2 was causing warming. The problem, ignored by proponents, is that it has warmed naturally since the nadir of the Little Ice Age in the 1680s. The issue isn’t the warming, but the cause.
The Berkeley Earth Surface Temperature (BEST) says,
“Our aim is to resolve current criticism of the former temperature analyses, and to prepare an open record that will allow rapid response to further criticism or suggestions.”
Their actions and results completely belie this claim, and all appear to point to a political motive. The entire handling of their work has been a disaster. It is not possible to say it was planned, but it has completely distorted the stated purpose and results of their work. The actions seem almost too naive to have been accidental, especially considering the people involved in the process. Releasing reports to mainstream media before all studies and reports are complete is unconscionable from a scientific perspective. It replicates the deceptive practice of the Intergovernmental Panel on Climate Change (IPCC) of releasing the Summary for Policymakers (SPM) before the Scientific Basis Reports from which it differs considerably.
Like the IPCC, the BEST panel appears deliberately selected to achieve a result, or at least to ensure a bias. It begins with the leader, Richard Muller, who has historically supported the anthropogenic global warming (AGW) hypothesis. There is only one climatologist, Judith Curry, who only recently shifted from a very vigorous pro-AGW position to a more central and conciliatory position, indicating awareness of the political implications. Her involvement in the BEST debacle is troubling, especially her admission that the early release of some results before the peer-reviewed articles and supporting documentation (in other words, the IPCC approach) was at her suggestion. Ms. Curry’s comments indicate a very peripheral involvement in the entire process, which suggests her participation was for public relations. This is supported by her admission that she was not involved in the data portion of the work.
“I have not had “hands-on” the data.”
Failure to include a skeptical climatologist appears to confirm the political objective.
Climate is the average of the weather, so it is inherently statistical. I am not a statistician, but whenever statistical analysis was required I sought professional advice. I watched the discipline change from simple analysis of the average condition at a location or in a region to a growing interest in the change over time. My experience with climate statistics taught me that the greater and more detailed the statistical analysis applied, the more it underscores the inadequacies of the original data. It increasingly becomes an exercise in squeezing something out of nothing. The instrumental record is so replete with limitations, errors, and manipulations that it is not even a crude estimate of the pattern of weather and its changes over time. This is confirmed by the failure of all predictions, forecasts, or scenarios from computer models built on that database.
Problems start with the assumption that the instrumental measures of global temperature can produce any meaningful results. They cannot! Coverage is totally inadequate in space and time to produce even a reasonable sample. The map (Figure 1) shows the pattern of Global Historical Climate Network (GHCN) stations from the BEST Report. It distorts the real situation. Each dot represents a single station but, at the map’s scale, probably covers over 500 sq. km. The dots also don’t show the paucity of stations in Antarctica, most of the Arctic Basin, the deserts, the rainforests, the boreal forest, and the mountains. Of course, none of these equal the paucity over the oceans that cover 70 percent of the world. The problem is bigger still in the Southern Hemisphere, which is 80 percent water.
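The coverage arithmetic can be sketched with round figures. The station count below is an assumed order-of-magnitude value for illustration, not a figure from the BEST report, and the land fraction is the familiar 30 percent:

```python
# Back-of-envelope coverage arithmetic (station count is an assumed
# round figure, not taken from the BEST report):
earth_surface_km2 = 510e6   # approximate total surface of the Earth
land_fraction = 0.3         # ~70 percent of the surface is ocean
stations = 7000             # assumed order-of-magnitude GHCN count

land_km2 = earth_surface_km2 * land_fraction
area_per_station = land_km2 / stations

# Even ignoring the oceans entirely, each station must stand in
# for tens of thousands of square kilometres of land.
print(f"land area per station: {area_per_station:,.0f} sq. km")
```

Even on these generous assumptions, a single station represents an area far larger than the 500 sq. km a map dot covers, and the oceans have no comparable coverage at all.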
BEST also shows the reduction in the number of stations after 1960. A major reason for the reduction was the assumption that satellites would provide a better record, but BEST didn’t consider that record. It appears the main objective is to offset Ross McKitrick’s evidence that much of the warming in the 1990s was due to a reduction in the number of stations (Figure 2).
It is presented as a ‘Surface’ temperature record, but it isn’t. It is the temperature in a Stevenson Screen (Figure 3), which the World Meteorological Organization (WMO) specifies be set between 1.25 m (4 ft 1 in) and 2 m (6 ft 7 in) above the ground. The difference is significant because temperatures in the lower few meters vary considerably, as research has shown. The 0.75 m range means that you are not comparing the same temperatures.
Temperatures, sometimes to four decimal places, are thrown around as if they are real, measured numbers. Historically, all temperatures were recorded only to the nearest half degree because, until thermocouple thermometers appeared, any greater precision was impossible.
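A minimal sketch of why that matters, using simulated readings (the numbers are hypothetical, not from any station record):

```python
import random

# Hypothetical illustration: readings recorded to the nearest 0.5 C
# cannot support four-decimal-place precision in a derived average.
random.seed(0)
true_temps = [15 + random.uniform(-3, 3) for _ in range(1000)]
recorded = [round(t * 2) / 2 for t in true_temps]  # nearest 0.5 C

true_mean = sum(true_temps) / len(true_temps)
rec_mean = sum(recorded) / len(recorded)

# Each reading carries up to +/-0.25 C rounding error, so digits
# beyond roughly the first decimal of the mean are not meaningful.
print(f"true mean:     {true_mean:.4f}")
print(f"recorded mean: {rec_mean:.4f}")
print(f"difference:    {abs(true_mean - rec_mean):.4f}")
```

Averaging many readings narrows the gap between the two means, but it cannot recover information that was never recorded: the trailing decimals of such an average describe the rounding scheme, not the weather.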
Most of the land data is concentrated in western Europe and eastern North America, so these latitudes are dramatically overrepresented in the record. This is important because climate change is reflected most in these latitudes as the Circumpolar Vortex shifts between zonal and meridional flow and the amplitude of the Rossby Waves varies.
BEST used a subset of global temperatures, albeit a larger subset than anyone else. However, because the full data set is inadequate, a bigger subset does not improve the analysis potential. Also, those who used smaller subsets did so to create a result to support a hypothesis. The BEST study apparently was designed to confirm the results and negate the criticisms.
Regardless of the BEST findings, the other three agencies did achieve different results using the stations they chose, and the differences are significant. For example, one year there was a difference of 0.4°C between their global annual averages. That doesn’t sound like much, but consider it against the claim of a 0.7°C increase in temperature over the last approximately 130 years. What people generally ignore is that in the IPCC estimate of global temperature increase of 0.6°C, produced by Phil Jones, the error factor was ±0.2°C. An illustration of how meaningless the record and the results are is that in many years the difference in global annual average temperature between the agencies is at least half the 0.7°C figure. In summation, all four groups selected subsets, but even if they had used the entire data set they could not have achieved meaningful or significant results.
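The arithmetic above can be checked directly. The figures are those quoted in this section; the code is only an illustration of their relative sizes:

```python
# Figures quoted in the text above (deg C):
trend = 0.6          # Jones/IPCC estimated rise over ~130 years
uncertainty = 0.2    # stated error on that estimate, +/-
agency_spread = 0.4  # example one-year spread between agencies

# The stated error band alone is one third of the claimed signal...
print(f"error as fraction of trend:  {uncertainty / trend:.0%}")
# ...and the inter-agency spread in a single year is two thirds of it.
print(f"spread as fraction of trend: {agency_spread / trend:.0%}")
```

When the disagreement between the measuring groups in a single year is comparable to the century-scale change being claimed, the trend cannot be distinguished from the noise of the measurement itself.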
The use of the phrase “raw temperature data” is misleading. What all groups mean by the phrase is the data provided to a central agency by individual nations. Under the auspices of the World Meteorological Organization (WMO), each nation is responsible for establishing and maintaining weather stations of different categories. The data these stations record is the true raw data. However, it is then adjusted by the individual national agencies before it is submitted to the central record. They didn’t use “all” stations or “all” data from each station. It also appears there were some limitations of the data that they didn’t consider, as the following quote indicates. Here is a comment from the preface to the Canadian climate normals, 1951 to 1980, published by Environment Canada.
“No hourly data exists in the digital archive before 1953, the averages appearing in this volume have been derived from all available ‘hourly’ observations, at the selected hours, for the period 1953 to 1980, inclusive. The reader should note that many stations have fewer than the 28 years of record in the complete averaging.”
BEST adjusted the data, but adjustments are only as valid as the original data. For example, the ‘official’ raw data for New Zealand is produced by NIWA, and NIWA had already ‘adjusted’ the “raw” data. The difference is shown in Figure 4. Which set did BEST use? Most nations have made similar adjustments.
They failed to explain how much temperature changes naturally, or whether their results are within that range. The original purpose of thirty-year ‘normals’ was to put a statistically significant sample in context. It appears they began with a mindset that created these problems, and it has seriously tainted their work. For example, they say,
“Berkeley Earth Surface Temperature aims to contribute to a clearer understanding of global warming based on a more extensive and rigorous analysis of available historical data.”
This terminology indicates prejudgement. Why global warming? It doesn’t even accommodate the shift to “climate change” forced on proponents of anthropogenic global warming (AGW) as the facts didn’t fit the theory. Why not just refer to temperature trends?
The project indicates a lack of knowledge or understanding of the inadequacies of the data set in space and time, and of the subsequent changes and adjustments. Lamb spoke to the problem when he established the Climatic Research Unit (CRU). On page 203 of his autobiography he said,
“When the Climatic Research Unit was founded, it was clear that the first and greatest need was to establish the facts of the past record of the natural climate in times before any side effects of human activities could well be important.”
BEST confirms Lamb’s concerns. The failure to understand the complete inadequacy of the existing temperature record is troubling. It appears to confirm that there is an incompetence or a political motive, or both.