BRIGHT FUTURE: Abundance and Progress in the 21st Century

David McMullen

 

 

First published in 2007

Copyright © David McMullen 2007

http://brightfuture21c.wordpress.com/

 

ISBN 978-0-646-46832-7.

303.49

 


CONTENTS

INTRODUCTION AND SUMMARY

CREATING FOOD ABUNDANCE

Better Plants and Livestock

Better Plants

Improved Livestock and Poultry Production

Defending GM Food

Food Safety

Environmental Scares

Environmental Benefits

Crop Lands, a Declining Resource?

Extent of the Resource

Soil Degradation

Summing up on Land

Water

Harnessing More of the Resource

Depletion and Degradation

More Efficient Use

Competition from Non-Agricultural Uses

Non-Conventional Water Resources

Genetic Base

Fisheries

Non-Renewable Resources

Nitrogen Fertilizer

Phosphate

Potassium

Fuel for Farm Machinery

"Alternative" Agriculture is No Such Thing

PLENTY OF RESOURCES

Aiming for Global Affluence

Our Energy Needs

Fossil Fuels

Oil

Coal

Natural Gas

Fossil Fuel as a Whole

CO2 Emissions and Global Warming

Uncertainties

Alarmism

What about Eco-Catastrophes?

Business as Usual for Now

Adapting to any Climate Change

CO2 Capture and Storage

Solar Energy in its Various Forms

Direct Solar

Wind

Waves

Hydroelectric Power

Biomass

Other Possible Resources

Summing up on Solar

Nuclear Power without the Phobia

Resources

The Safety of Nuclear Power

Concluding Comments on Nuclear

Geothermal Energy

Energy Overall

Minerals and Other Raw Materials

Tidying the Nest - Our Effect on the Environment

Air Pollution

Water Pollution

Pollution and Development

Pollution Scares

Loss of Forests

Mass Extinctions

CAPITALISM, THE TEMPORARY TOOL OF PROGRESS

Introduction

Soviet Hangover

Africa: More Capitalism Please

Capitalism Outgrows Itself

What about the "Communist" Countries?

Economic Calculation without Capitalism

Collective Ownership will be More Efficient

Leaping into the Unknown

ABBREVIATIONS

REFERENCES

NOTES

 


1

INTRODUCTION AND SUMMARY

Gloom reigns supreme. Any thought of progress is scoffed at. According to the received wisdom, the earth's "carrying capacity" will not permit global prosperity and "human nature" guarantees that any attempt to advance beyond capitalism will end in tears. Challenging these grim prognoses requires a "technofix" approach, and that is what the reader will find in the following pages.

The planet's capacity to comfortably accommodate us is limited only by the application of human ingenuity, something we are never going to run out of. Food production can be increased by making better use of land and water resources, modernizing backward agriculture, and developing higher yielding and more resilient varieties of crops and livestock. Our increasing energy needs can be met from an array of old and new resources. The fossil fuels - coal, oil and gas - on which we presently rely so very heavily, are ample enough, with the application of better methods of extraction and processing, to continue playing a major role for quite some time, and they can do so while keeping CO2 emissions within reasonable limits. In the longer run other energy resources will take on a greater importance, as their technologies develop and their costs decline. The options in view include sun, wind and wave, as well as uranium and thorium for nuclear power, and the geothermal energy beneath our feet. Then there are others we can only dimly foresee, if at all. At the same time, we will find all the raw materials we need to produce ever increasing quantities of goods and services. Most of these materials are in great abundance and are bound to become cheaper with new methods and new opportunities to substitute less costly for more costly ones.

We can get what we want without threatening the biosphere's "life support systems". While our impact on the natural environment is extensive, it is nothing compared with the battering that the earth withstands on a regular basis from super volcanoes, meteors and ice ages. Furthermore, progress leads to cleaner technologies and better knowledge of how to conserve and manage ecosystems.

We will definitely be making increasing use of our large and expanding carrying capacity as the economies of developing countries continue to grow, albeit patchily. By mid century the number of countries and proportion of the world's population in the affluent category will have increased significantly. Others will follow later in the century with some stragglers such as Sub-Saharan Africa taking until early in the 22nd century.

As the world's population increases from its present 6.5 billion to 9 or 10 billion in the second half of the century (at which point it is expected to plateau, at least temporarily), a 2.5 to 3 fold increase in grain production will provide everyone with all the food they need, including produce from grain fed livestock. This can be achieved by mid century with an average annual production growth rate of 2 per cent. A slower rate would only mean a delay of several decades.

As the century progresses an increasing proportion of the developing world will reach the per capita energy consumption levels presently achieved in the rich countries.[1] Total energy production for a world with 9 billion people would need to increase 4.5 fold for everyone to reach the current rich country per capita average. For a world with 10 billion, a 5 fold increase would be required. These increases could easily be achieved this century if we maintain the growth rates seen in recent times and those expected in the next few decades. We can expect raw material needs to grow at a similar pace given that they are used to build the industries, infrastructure, motor vehicles and homes that use the energy.
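The arithmetic behind these multiples is straightforward: the required increase is the future population multiplied by the rich country per capita consumption, divided by current world consumption. The following is a minimal sketch of that calculation; the ratio of 3.2 between rich country and world average per capita consumption is an illustrative assumption inferred from the multiples quoted above, not a figure taken from the text.

```python
# Rough check of the energy arithmetic (illustrative figures only).
# Assumed: rich-country per capita energy use is about 3.2 times the current
# world average - a ratio inferred from the 4.5-fold figure, not a sourced number.
current_population = 6.5e9
rich_to_world_ratio = 3.2

for future_population in (9e9, 10e9):
    multiple = (future_population / current_population) * rich_to_world_ratio
    print(f"{future_population / 1e9:.0f} billion people -> "
          f"about {multiple:.1f}-fold increase in total energy production")
# Gives roughly 4.4-fold for 9 billion and 4.9-fold for 10 billion,
# in line with the 4.5 and 5-fold figures above.
```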

We can expect the demand on resources by the countries that are already rich to decline in importance. Their food consumption will stabilize given that their population is not expected to grow much beyond its current level of around one billion and satiation levels have generally been achieved. Being at the technology frontier, their economies will grow more slowly than those of catch-up countries. Also, their stage of development and static population mean less expansion of energy intensive production such as heating, cooling, transport and infrastructure.

Being permanently stuck with capitalism is certainly a gloomy thought. Affluence on average conceals gross inequality, and whatever affluence is achieved is for most people accompanied by alienating employment and limited personal development. If human nature has made capitalism necessary, it was because we needed profit seeking capitalists to make us work. However, in the developed economies this is becoming less and less the case as technological progress transforms work generally into something which we want to do primarily for its own sake. On average it is becoming more interesting, complex and challenging as evidenced by the fact that over half the present workforce requires post-secondary training. Most of the really dreadful, dangerous and exhausting jobs have already disappeared and with increasing automation most of the dreary and menial ones will decline over the course of coming decades. Furthermore, under these new conditions, collective ownership by willing producers provides a more efficient economic motive force than ownership by a master class. It can more effectively tap into the creative powers of the vast majority and is not hidebound by sectional interest.

Any case for collective ownership, of course, has to pay heed to the prevailing view that the economically inefficient police states in the "communist" countries have shown that socialism is inherently flawed. As argued in the final chapter, socialism's lack of success in those countries was mainly due to the fact that they were only beginning to emerge from feudalism.[2] Just getting capitalism to develop in such backward conditions is a mighty achievement, let alone socialism. A socialist revolution in North America or western Europe, while having its own challenges, would be on far firmer ground. In particular, there is the transformation of work just referred to plus the fact that it is carried out by a working class that is in the majority, is educated, is worldly wise, understands what the revolution is about and is not easily browbeaten.

A somewhat more obscure argument against socialism, which economists raise, is also addressed. They argue that we cannot do without capitalism because we require markets for intermediate goods. These are the inputs that firms obtain from other firms for use in their production processes, and they include raw materials, components, factory buildings and machinery. According to this view, if you do not have market relations you are stuck with top-down direction of what is produced and by whom, a method which becomes increasingly ineffective as the economy becomes more complex. As argued in the final chapter, they are right to identify markets for intermediate goods with capitalism but mistaken in their belief that decentralized price setting and resource allocation require market exchange.

Of course, radical change does not occur just because conditions for it are favorable. We have to understand what has happened and then act. With anything new and daunting, it takes a while to catch on and then leap into the unknown. And when we finally make a move we are bound to confront a steep learning curve and considerable resistance from remaining supporters of the existing order. So while the future is a bright one, the road ahead may still be long and bumpy.


2

CREATING FOOD ABUNDANCE

Food production will have to increase considerably over the next half century to ensure that everybody has all the food they want. The 9 billion or so individuals expected by 2050 will have to be much better fed than most of the present 6.5 billion.

At the moment, almost a billion people are under-nourished, receiving far from adequate levels of calories and other nutrients. The region with the largest number in this category is South Asia, where just under a quarter of the population are in this wretched condition. The region with the highest proportion in this category is Sub-Saharan Africa with over a third.[3] Worldwide, some 170 million children under five years of age are underweight due to malnutrition.[4] This makes them vulnerable to a range of diseases and it is estimated that around 3.7 million died in 2000 as a result.[5] Two billion people or more have iron, iodine and zinc deficiencies[6] and one fifth of the global disease burden is due to undernourishment.[7]

Then there is the majority of people who receive a more or less adequate diet but who, with rising incomes, aspire to more 'luxury' foods, such as fruit, vegetables, meat and dairy products, which for a given level of calorie consumption require a lot more resources to produce. The calories from a hectare of most varieties of fruit or vegetable are far less than the calories from a staple grain, such as corn, rice or wheat, grown on the same area. Likewise, grain-fed livestock and poultry consume far more calories than humans obtain from the final product. About 50 per cent of current world grain production goes to feeding animals rather than humans.[8] Obviously if the grain were consumed directly, it would feed a lot more people. It would be more 'calorie efficient'. Then we have increasing demand for products such as tea, coffee, alcoholic beverages, chocolate, herbs and spices that are not consumed for nourishment but which draw on resources that would otherwise be available for the production of staples.

This increasing pressure on resources as most people move up the "food ladder" will be alleviated to some extent by a number of factors that increase calorie efficiency. These include: an increased preference for chicken rather than red meat; the development of a greater range of palatable meat substitutes; and the development of improved livestock and feed.

A major upsurge in vegetarianism might help; however, there are no signs of this happening. Besides, vegetarianism of the affluent requires a wide range of fruit, vegetables, herbs and spices and possibly various exotic grains that are low in yield and resource efficiency. Even in India, where vegetarianism is imposed by the tyranny of religion, a growing share of grain is going to support the burgeoning dairy industry. Furthermore, total vegetarianism would actually be unhelpful, given that some resources are best used for meat production, e.g., grain by-products and pasture land that is unsuited for crops.

So, what are the prospects for improving average consumption levels and eventually reaching a stage where all countries have reached the satiation level achieved in developed countries? They are good as long as we can increase grain production at rates that exceed population growth. How long it will take depends on the difference between the two growth rates.

Over time the task will be made easier as the rate of population growth declines. It has been falling since the late 1960s when it peaked at around 2.1 per cent.[9] It is now around 1.13 per cent and, according to the UN's medium growth scenario, it is expected to fall further to 1.05 per cent in the period 2010 to 2015, to 0.7 per cent during 2025 to 2030 and to 0.33 per cent during 2045 to 2050.[10] So by mid century even a very modest increase in output would lead to an increase in the per capita average.

Doubling per capita consumption in developing countries can probably be achieved with a 2.4 to 2.6 fold increase in output.[11] This is on the assumption that their population increases by 50 to 65 per cent (i.e., a world total between 9 and 10 billion) and that all the increase in output goes to developing countries. The latter assumption is realistic given that people in developed countries already have plenty to eat and their population is not expected to increase.
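The 2.4 to 2.6 fold range can be reproduced with some simple bookkeeping. The sketch below assumes, purely for illustration, that developing countries currently account for about two thirds of world grain consumption and that consumption in developed countries stays flat; neither figure comes from the text.

```python
# Illustrative reconstruction of the 2.4 to 2.6 fold estimate.
# Assumed (not from the text): developing countries take about two thirds of
# current world grain consumption, and developed country consumption is static.
developing_share = 2 / 3
developed_share = 1 - developing_share
per_capita_multiple = 2.0                # doubling per capita consumption

for population_growth in (1.50, 1.65):   # 50 to 65 per cent population increase
    required = developed_share + developing_share * per_capita_multiple * population_growth
    print(f"population up {population_growth:.2f}x -> "
          f"world output must rise about {required:.1f}-fold")
# Prints roughly 2.3-fold and 2.5-fold; the exact figures depend on the
# consumption share assumed, which is why a range is given.
```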

Grain production has been increasing in a more or less linear fashion over the last 45 years or more. While varying considerably from year to year, the annual increase has generally gravitated around 30 million tonnes.[12] If we continue with a similar annual increase we will double production by around 2060 and provide more than a 50 per cent increase in per capita consumption in developing countries. A 2.5 fold increase would take until the final decade of the century.

If we can push up the pace and match the 2 to 4 per cent annual growth rates achieved in the 1960s and 1970s, we would reach the desired levels far more quickly. For example, a 2 per cent annual growth rate would provide a 2.5 fold increase by 2050.
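The link between the growth rate and the time needed for a 2.5 fold increase follows directly from compound growth: the number of years is the logarithm of 2.5 divided by the logarithm of one plus the growth rate. A small sketch of that calculation:

```python
import math

# Years of compound growth needed for a 2.5-fold increase in grain production.
target_multiple = 2.5
for rate in (0.02, 0.03, 0.04):              # 2, 3 and 4 per cent a year
    years = math.log(target_multiple) / math.log(1 + rate)
    print(f"{rate:.0%} a year -> about {years:.0f} years")
# About 46 years at 2 per cent (roughly mid-century if we start from the early
# 2000s), 31 years at 3 per cent and 23 years at 4 per cent.
```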

So, can we match or even exceed past achievements? Can science keep coming up with higher yielding crops and livestock? Can we ensure that there will be sufficient resources such as good quality land and water? Can we maintain or even increase the fish harvest? These and related questions will be addressed in the rest of the chapter.

Better Plants and Livestock

The main reason for our success in increasing food production during the last 50 years has been the ability of science to increase the yield potential of our plants and livestock, and to improve their ability to cope with a range of hostile conditions. Can we maintain this performance or are we running out of steam?

The prospects look promising when you consider that we are at the beginning of a biotechnology revolution that is bound to bring major advances in plant and animal breeding. Biotechnology provides a tool kit that includes genetic engineering, genomics, marker assisted selection, cell and tissue culture, and increasing knowledge of how physiological characteristics can act as indicators of performance. This tool kit is already starting to bring results.

Both genetic engineering and marker assisted selection rely on the growing knowledge of genes provided by genomics, which aims to describe and decipher the location and function of all the genes of an organism, and the interactions between them.[13]

Genetic engineering is opening up a totally new area of plant and livestock improvement. It allows us to directly manipulate the genes responsible for various attributes. This includes controlling the level of activity of genes by turning them on or off, or up or down, and also transferring genes from other plants or animals. Scientists will produce an increasing flow of results as they learn more about what genes confer what characteristics and improve their ability to manipulate genes, particularly multi-gene manipulation which is necessary in many cases.

Marker assisted selection (MAS) uses genetic markers to assist in selecting plant varieties with particular traits for inclusion in breeding programs. These markers are easily identified DNA sequences located near a specific gene associated with the trait. This technology is revolutionizing breeding methods because a large number of varieties can be screened for a desirable trait without having to grow them to maturity to determine its presence. Samples can be taken from seedlings and tested for the presence of the molecular marker. With traditional methods, a similar procedure would be far more time consuming and expensive, and often impractical. Checking for the continued presence of the markers can also determine whether the desired trait has been successfully transferred through the various stages of breeding and cross-breeding. This technology has already achieved considerable results, but will achieve a lot more as more is learnt about the role of genes in bestowing traits and as the corresponding genetic markers are identified.

Tissue culture refers to a process where new plants are grown from individual cells or clusters of cells, often bypassing traditional cross-fertilization and seed production.[14] This technique enables breeders to attempt wide crosses between varieties that could not be hybridized before and enables faster stabilization of breeding lines. Such methods are also used to produce pathogen-free plants for distribution to farmers and for germ plasm storage.[15]

Better knowledge of what identifiable physiological features are associated with tolerance to certain conditions will assist the selection of varieties for inclusion in breeding programs. Features that are relevant to performance include: root structure that allows higher nutrient uptake, early ground cover to reduce evaporation of soil moisture and large seeds to assist early crop establishment.[16]

There is also much that can be done without cutting edge science or major breakthroughs. In large parts of the developing world high yield plant varieties still need to be adapted to local conditions. And there are neglected crops such as cassava and banana that would benefit from the kind of attention that rice and wheat have received in the past. Likewise, livestock suited to the tropics can benefit from more research. Breeding the water buffalo for meat and milk is an example often cited.

There are also many farmers in the developing world who have yet to take advantage of what has already been achieved, because doing so requires a more advanced form of agriculture and economic infrastructure. This includes distribution of high-yielding hybrid seeds that cannot be home grown, support from extension services and access to the ancillary inputs, such as fertilizer and reliable water, needed to achieve the promised higher yields.

What is achieved in coming years will depend very much on the level of research funding. Research has been hampered by the cutback in funding to the major research institutions over the last decade, including those associated with the Consultative Group on International Agricultural Research (CGIAR), the main umbrella organization for research in developing countries. Funding began to decline once research efforts had dealt with the urgent problems of the 60s and 70s by providing a new generation of high yielding rice and wheat. Notwithstanding this problem, there is still a whole range of important improvements at various stages along the pipeline. We'll look first at the higher yielding plants currently being developed and then at the advances in livestock and feed.

The examples provided are meant to give a general indication. They are not exhaustive and some of those in the pipeline may never see the light of day. Not included is speculation about what may be on offer two or three generations down the track when our knowledge is far more advanced. Perhaps one day we will produce food directly without having plants or animals as intermediaries. If a plant can produce grain surely we can, and with a multitude of customized features. Likewise for producing muscle tissue (i.e., meat) without the rest of the animal. The output from any given input of land or water with such technology would increase dramatically.

Better Plants

Plants can be improved in a number of ways. Firstly we can increase their yield potential. This is the yield that can be achieved under the best conditions in terms of weather, water and soil. Secondly we can increase their capacity to narrow the gap between actual and potential yields under less than favorable conditions, i.e., conditions of stress. Thirdly we can increase the ability of the crop to survive in an edible state after it is harvested.

Increasing Yield Potential

Yield potential of plants can be increased by various means. Hybridization is one approach. Thanks to an imperfectly understood effect called heterosis, the hybrid from a cross of two different plant varieties grows more vigorously and produces more grain than either parent. In some crops including maize this process is simple. However, with other major crops it is a difficult and ongoing process.

Hybrid rice was first developed in China in the early 1970s and is now planted on about half that country's rice growing area. First generation hybrids have increased yields by 15 to 20 per cent and second generation hybrids by a further 5 to 10 per cent over their predecessors.[17] A third generation is now being developed which will increase yields even further. Other Asian countries are now beginning to follow suit, developing their own varieties in line with their own palates and environmental conditions. Chinese scientists have also recently managed to develop and test a hybrid wheat breed that could at least double their country's present per-hectare yields.[18] They have also recently created the first hybrid soybean, which is expected to increase yields by 20 per cent.[19]

Researchers can also breed plants that put more energy into grain production and less into the rest of the plant. They are working on a new rice and wheat 'architecture' that will significantly increase the harvest index, in other words more grain and less plant. This will be achieved by developing varieties with larger grain heads on thicker but fewer stems.[20]

Other fascinating approaches include: crop plants with an algal gene that boosts yields by almost a third because the new strain converts nitrogen fertilizer far more efficiently;[21] and rice with an antisense gene that inhibits the formation of certain proteins and thus prolongs the grain-filling period of the plant. This rice, in its first field test, increased productivity by 40 per cent.[22]

A long-term hope among some scientists is to create plants that are far more efficient at photosynthesis, the process that converts sunlight into energy. If this could be improved, plants could reach maturity more quickly, allowing more crops each year. Nitrogen fertilizer consumption could also be reduced because photosynthesis is the main consumer of this input. Apparently photosynthesis is very inefficient and worked a lot better back in the early days of plants when the atmosphere had little or no oxygen. A number of projects are doing very preliminary research on this problem.

Helping Plants Cope Better with Harsh Conditions

Crops are generally grown under less than ideal conditions which subject them to stresses that can reduce yields quite significantly. These stresses are usually classified into biotic and abiotic. Biotic stresses include the ravages of diseases and insects, and competition from weeds, while abiotic (or non-biological) stress includes the effect of too much or too little water, excessive heat or cold and soil problems such as salinity, acidity and erosion.

Because the amount of damage is quite high, there is much to be gained by improving plant tolerance. And the prospects for progress in this area look very good, with the level of success depending mainly on the extent of funding and the supply of trained workers.

Where crops have a particular strain or wild relative that copes well with a particular stress, this feature can be incorporated into existing commercial varieties through cross breeding. This can be assisted by the use of molecular marker technology to screen a large number of varieties where it is known which gene expresses a particular feature. Where reproduction is through cuttings, tissue culture can provide farmers with disease free plants.

Using genetic engineering, changes can be made directly at the gene level to bestow tolerance. This may involve transferring a gene from a totally different life form that is known to cope well with the particular stress, or tweaking the plant's existing genes to achieve a similar effect. The more we know about genes and how to manipulate them, the more that can be achieved.

The examples provided below should give a fair indication of the kind of work being done. While some are already showing up in farmers' fields, most are still at the research and development stage. It is not an exhaustive list and there are no doubt very important omissions, and possibly some inclusions that will not live up to expectations. We will start with biotic stresses.

Biotic Stresses

Rice has been genetically engineered to resist devastating diseases such as sheath blight, bacterial blight and tungro virus.[23] Sheath blight resistance has been achieved through the transfer of genes from an insect and from the soil bacterium Bacillus thuringiensis, more commonly called Bt.[24] Scientists have also successfully transferred a bacterial resistance gene from wild rice to cultivated rice.[25] In the case of barley, resistance to a particular disease has been conferred by a wine grape gene.[26]

Scientists are working on a bread wheat that is resistant to the devastating leaf blotch. They have discovered a gene that provides resistance to this disease and are using marker technology to find wheat varieties with this gene. As soon as a seedling sprouts, a small piece of the young leaf can be ground and then a DNA test can be run. This shows whether the markers for the gene are present.[27]

A transgenic potato is being developed that is resistant to the late blight that caused the Irish famine of the 1840s and still causes major havoc. Fungicide has limited effect, is expensive and when used in large amounts can be an environmental problem. The protective gene comes from a wild potato that scientists believe co-evolved in Mexico alongside the blight pathogen.[28] Potatoes could also become a major food source in tropical countries with the development of a variety incorporating a gene from chicken that resists bacterial rot.[29]

Reducing disease in the banana would make a major difference in poor countries where it is a staple food. Because the domesticated banana is usually a seedless clone which grows from cuttings, tissue culture is being introduced to propagate offspring that do not carry over disease from the parent plant. Disease-free cells are selected, removed under sterile conditions and placed in a growth medium. The resulting plants are then distributed to farmers.[30]

Researchers are working to map the entire genetic code of a wild banana from East Asia in the hope it will reveal the genes that provide resistance to the two worst enemies of the banana crop - black Sigatoka and Panama fungal diseases. Once identified, researchers hope to insert these genes into edible varieties.[31]

In the battle against viruses, techniques have been developed to insert harmless parts of the virus into the plant to set off an immune reaction much like an inoculation. And having become part of the plant's genetic make-up, it is passed on to the next generation. The most dramatic example of such a process was papaya in Hawaii, which had been devastated by papaya ring spot virus. A genetically engineered plant virtually brought the industry back from extinction. A biotech papaya is now being brought to farmers in Southeast Asia, the Caribbean and several other developing areas where papaya is a staple food. In Australia, scientists have developed a similar 'vaccination' technique that has already been used to create potatoes resistant to Potato Leaf Roll Virus and which they hope to apply to a range of plants that are vulnerable to viruses which up till now have proven virtually unbeatable.[32]

Providing plants with their own defenses against insects and pests can often be far more effective than other measures such as pesticides, or, at the very least, an important adjunct. In recent years the most dramatic advance in this regard has been the development of so-called Bt crops. This has also been one of the major genetic engineering success stories to date. The plant expresses insecticidal proteins derived from genes cloned from the soil bacterium Bacillus thuringiensis (Bt). These proteins bind to receptor proteins in the insect gut, destroying cells and killing the insect in several days.

Quite large areas are now being sown with Bt corn, canola and cotton. A Bt potato has also been available for use; however, major buyers such as fast food chains have put the stopper on it for fear of being picketed by bio-fearmongers. The more widespread use of these Bt varieties in coming years and the introduction of the gene into other crops including rice and wheat will bring continuing benefits. Yield gains from using Bt corn are estimated to average 5 per cent in temperate regions and 10 per cent in tropical regions.[33]

This is just the first of many toxins that will be provided to plants through genetic manipulation. Included among them will be the transfer of genes from other plants that have shown themselves to be more resistant to given insects. Australian researchers have added a gene from green beans to field peas, creating a crop with a built-in insecticide that is almost 100 per cent effective against pea-weevils, the most damaging pest in that country's pea crop.[34] Also resistance to the white-backed plant hopper is being transferred from barley to rice.[35]

Plant 'architecture' can also provide protection. This includes: developing wheat with more solid stems that will be less susceptible to attack by Hessian fly and sawfly;[36] maize with thicker epidermal cell walls that prevent armyworm larvae establishing in the whorl of the plant;[37] and plants with an enhanced natural ability to produce leaf wax, which makes them more difficult for insects to consume.[38]

One of the most destructive pests is the nematode, a microscopic worm that feeds on plant roots and comes in about 15,000 varieties. Scientists are using the genes for defense proteins that occur naturally in rice and sunflowers to fortify potatoes and bananas against this pest.[39] And the breeding of a nematode resistant soybean has been made possible with the help of molecular marker technology.[40]

Another stress that reduces crop yields is competition from weeds. The biggest news in this area is the introduction of herbicide resistant genetically modified corn and soybeans. The crop contains a gene that makes it tolerant of the herbicide Roundup, so that when a farmer sprays the field, weeds are killed but not the crops. This proves far more effective than when spraying can only be carried out prior to planting. A weed called striga can devastate grain and legume harvests in Sub-Saharan Africa. Researchers are countering this problem by developing maize with herbicide resistance derived from a naturally occurring gene in maize.[41]

There is a concern that pests can evolve resistance to the pesticide being incorporated in plants by genetic engineering, just as they evolve resistance to pesticide sprays. So far resistance has not become a problem with Bt crops despite their having been in use since 1996.[42] Scientists are looking at a number of strategies to delay or eliminate this danger. One approach is to use a Bt gene that is more widely expressed in the plant, giving a better knockout punch that leaves less room for the build-up of immunity that can happen when the insect experiences lower levels of exposure.[43] Another is the use of "pyramiding", where two Bt genes which are lethal in totally different ways are inserted into the plant. It is highly improbable that an insect would develop resistance to both.[44]

Abiotic Stresses

As well as dealing with a host of weeds and pests, crops also have to contend with the elements. The weather can be too dry, wet, hot or cold, and the soil can be poor.

Droughts and Flooding Rains

Lack of water is a major constraint on crop yields in many areas[45] and can destroy a crop in severe cases.

Crossing with drought resistant wild relatives is one approach. For example, CIMMYT (Centro Internacional de Mejoramiento de Maíz y Trigo or International Maize and Wheat Improvement Center) is currently developing drought-resistant wheat varieties descended from crosses that included goat grass, one of wheat's wild relatives.[46]

Progress is being made by CIMMYT in mapping genes for drought tolerance in wheat[47] and researchers at the University of Queensland are endeavoring to do the same for rice.[48]

Transferring drought tolerance genes from other plants to crops is another promising approach. It is believed that, once the genes responsible for superior drought tolerance in sorghum are identified, they could be activated in maize because the two plants are likely to share the same basic drought tolerance pathways.[49] Scientists at the University of Bonn have identified a gene in the resurrection plant from South Africa which helps it to survive droughts. The plant can lose up to 95 per cent of its moisture without being harmed, by slowing down its metabolism to almost zero during a dry period. It then springs back to life in a few hours once it receives water.[50] In Texas, US Department of Agriculture (USDA) researchers have identified the genes that help a type of grass from South Africa and a type of moss native to the High Plains of the United States to survive extended dryness.[51]

Plant varieties can be bred with physiological traits conferring drought tolerance. Molecular biologists in Oklahoma are developing a drought resistant wheat by adding genes to synthesize a naturally occurring sugar alcohol called mannitol which accumulates in leaf tissues.[52] Other helpful physiological features include larger seed size that improves crop establishment, early ground cover and pre-anthesis biomass that reduce evaporation of soil moisture, and roots that are able to extract water deep in the soil.[53]

Another strategy which might be more aptly called drought avoidance rather than drought tolerance involves breeding plants that match their development cycle with the availability of water. This has been used in the past and still offers much promise. In some cases this could mean that the periods of maximum water requirement match the periods of maximum availability.[54] In other cases it could mean ensuring maturity prior to the arrival of a dry period at the end of the growing season. This could be achieved through faster maturity[55] or through allowing earlier planting by developing varieties that can cope with shorter daylight hours.[56]

Where plants are not being deprived of water there is a good chance they are being drowned in it. A widespread problem in irrigated and high rainfall wheat-growing regions is waterlogging due to poor drainage. The prospects for developing waterlogging tolerant wheat are considered good because of genetic variability for this trait.[57] Breeders have found that "synthetic" wheat, bred from grass species that are the wild relatives of wheat, is an exceptionally good source of tolerance.[58]

Heavy rain on maturing wheat crops can cause the grain to start germinating before it is harvested. This degrades end-use quality due to the undesirable proteins produced during germination. CIMMYT has identified high rainfall wheat lines with high levels of sprouting tolerance which could be employed in breeding programs to rectify this problem.[59]

Hot and Cold

Just as it can be too wet or too dry, it can also be too hot or too cold. Agricultural research bodies in developing countries consider heat stress as one of their top research priorities.[60] CIMMYT has already had some success in identifying wheat varieties in their seed banks that have various traits generally associated with tolerance to heat stress. This includes leaf traits such as evapotranspiration, rolling, greater thickness or uprightness.[61] They expect genetic markers to facilitate the process. Similar efforts are also being made in the case of tropical maize.[62] And on the genomic front, researchers have identified a protein that acts as a master regulator of the tomato heat stress response.[63]

Cold tolerance can bestow a range of benefits. It can ensure that the crop won't be destroyed by a cold snap at the beginning of the growing season. It can allow crops to be grown in climates currently too cold to support them, or permit an extra crop through earlier planting and/or later harvesting.

The Chinese claim to have inserted cold tolerance genes from fish into beets, while British researchers are achieving similar results by incorporating a gene from carrots into various crops.[64] Researchers in Canada have isolated a powerful gene from larvae of the yellow mealworm beetle that keeps the worms from freezing to death during the winter. They believe it is far more powerful than the 'antifreeze genes' found in flounder, a fish which is no slouch when it comes to protecting itself in cold waters.[65]

Where the cold cannot be dealt with head on, it can be avoided by making plants grow faster. Researchers at Cambridge University's Institute of Biotechnology in England put a gene from a flowering weed into tobacco plants, making the tobacco grow much more quickly. The gene produces a protein that causes the plant cells to divide much faster at the tips of roots and shoots.[66]

Unfriendly Soil

Plants can find the soil far from friendly for a range of reasons - in particular, salinity, poor structure and acidity.

Some crops have genetic variation for salt tolerance which can be exploited in breeding programs, particularly with the help of molecular markers.[67] Chinese researchers claim to have developed salt-tolerant varieties of rice[68] and Australian researchers announced that they have successfully bred salt-tolerant durum wheat by crossing an ancient salt-tolerant durum wheat variety with modern commercial ones.[69]

Genetic engineering can take traits from plants and organisms that thrive in a high salt environment. Scientists have genetically modified a tomato plant that thrives in salty irrigation water. The tolerance comes from a protein known as a 'sodium/proton antiporter', which uses energy available in the plant cells to move salts into compartments within the cells.[70] Once the salt is stashed inside these compartments - called vacuoles - it is isolated from the rest of the cell and unable to interfere with the plant's normal biochemical activity. Not only does the tomato tolerate salt, it also removes salt from the soil. Work is being done to extend this technology to other crops. Chinese scientists have cultivated salt-resistant tomatoes, soybeans, rice and a fast-growing poplar using a gene cloned from a salt-resistant plant called Suaeda salsa.[71] Another possible approach is to take genes from a bacterium that lives in places like the Dead Sea and splice them into crops.[72] This bacterium can thrive in salt levels ten times higher than those of ocean water.

Plants often have difficulty accessing the micro-nutrients in the soil because of its structure or composition. Improving the uptake ability of the root system may help, possibly through better root system geometry.[73] So would increasing the nutrient reserves in seeds, which would sustain life until the root system is well developed.[74] Another approach might be to reduce the plant's need for certain nutrients. Improving their distribution within the plant would be one way of achieving this.[75] Little effort has gone into breeding crops adapted to these kinds of soil conditions despite the genetic potential.[76] Molecular markers will greatly facilitate the selection of micronutrient efficient genotypes.[77] Genetic engineering also offers promise. A gene for copper efficiency has been transferred from rye to wheat. The transferred gene confers on plants a much greater ability to mobilize and absorb copper ions tightly bound to the soil.[78] Major crops such as corn, wheat and rice have a lot of trouble absorbing iron from alkaline soils, which make up a significant proportion of arable land. However, some crops including barley have no trouble. So researchers at the University of Tokyo took two genes from barley and introduced them into rice plants. The result was a four-fold yield increase in the same soil.[79]

Soil acidity is a major constraint on crop production. This is mainly because it releases aluminum ions which are highly toxic to plant roots. Vast areas either have their yields seriously reduced by it or are made unsuitable for cultivation. The problem is particularly serious in tropical regions. While improving acid soils is part of the answer, its role is limited by the expense and by the fact that disturbing the soil can lead to erosion. Developing aluminum tolerant plants is often a feasible solution either on its own or as a complement to soil improvement.

There are good prospects for wheat given that there is considerable genetic diversity in aluminum tolerance. In Brazil local low yield aluminum tolerant wheat varieties have been interbred with high yielding varieties to provide the benefits of both.[80] A Portuguese landrace also has a high aluminum tolerance and has yet to be exploited in breeding programs.[81] Another strategy is to transfer rye's greater aluminum tolerance to wheat. Triticale (a cross between rye and wheat) could serve as a bridging parent to achieve this transfer.[82]

In the case of maize, researchers are confident that molecular markers and genomics will lead to the development of suitable aluminum tolerant maize cultivars.[83] Genetic engineering is also making some progress in the area. There has been some preliminary work on transferring a gene from an aluminum tolerant plant to maize.[84] Another strategy being pursued is to insert into plants a bacterial gene that codes for citric acid secretion. This allows them to emulate aluminum tolerant plants, the roots of which secrete the acid into the surrounding soil in order to 'capture' the toxic aluminum ions that would otherwise attack the plant roots. This approach has been trialed on tobacco, papaya, and rice plants.[85]

Stacked Traits

There are many cases where yields would be increased significantly by the plant having more than one form of improved stress tolerance. For example, a crop may confront dry weather, acid soil and regular insect plagues. Genetic engineering ought to be able to contribute a great deal in this area by "stacking on" genetic changes appropriate for each of the stresses. A genetically modified maize that combines both herbicide tolerance and insect resistance has already been released, and there are plans to extend this combination to other crops, notably sugar beet, rice, potatoes and wheat.[86]

Post-Harvest Waste

As well as increasing the size of the crop, plant improvements can also increase the proportion that actually reaches the consumer. Ultimately this is what matters - yield net of post-harvest waste. A lot of food is lost through post-harvest spoilage, so anything that can make the harvested crop more robust will increase the effective food supply. This is particularly so in developing countries with poorer harvesting methods and a lack of refrigeration, storage and transport.

The delayed ripening of fruit and vegetables would improve shelf life and reduce spoilage. Genetic engineering is being used to control the amount and timing of the production of the hormone ethylene that regulates ripening in fruits and vegetables. Research has reached an advanced stage with tomatoes, raspberries, melons, strawberries, cauliflower and broccoli.[87] And in the Philippines, scientists have developed a papaya that instead of rotting in one week, can stay fresh for three months.[88] Researchers in England have found a "freshness gene" in petunias which shows promise.[89]

In the case of grain, greater resistance to storage pests would make a big difference. CIMMYT has discovered a source of such resistance and is incorporating this trait into maize breeding stocks.[90]

Improved Livestock and Poultry Production

Just as crops have to deliver up more from every hectare of land and kiloliter of water, so do livestock and poultry. These resources are used directly in the case of grazing and indirectly where the animals consume feed crops. About 3.3 billion hectares are under permanent pasture - more than twice the area under arable and permanent crops.[91] And as already mentioned domestic animals consume about half of world grain production. This ranges from around 60 per cent in Europe[92] and the US down to quite low levels in India and Sub-Saharan Africa.[93]

Improving livestock and poultry productivity becomes even more important as people in the developing countries increase their per capita consumption of meat and dairy products in step with rising income levels. For most of these countries increases in meat consumption have so far been fairly slight, if not stagnant as in the case of Sub-Saharan Africa. However, if middle income countries such as China and Brazil are anything to go by, meat consumption can rise dramatically over a number of decades. In the case of China, meat consumption has quadrupled over the past 20 years.[94]

Increased productivity comes down mainly to improvements in feed and forage, disease control and livestock breeding. As with grain production for human consumption, this is in part a matter of developing countries catching up with the practices of developed ones and in part a matter of pushing out the technology frontier. The former is exemplified by the fact that in 1997-98 beef yield per animal in developing countries was less than 60 per cent, and milk yield per cow less than 20 per cent, of the levels achieved in the developed countries.[95]

Improved Feed and Fodder

There are many ways in which we can improve what livestock have to eat. Feed and fodder can be made more nutritious, the diet can be enhanced with additives and supplements, and nutrients in food can be made more accessible by improving digestibility.

In developing countries simply catching up with the world's best feed practices can bring large gains. Feed can be harvested at the right time to maximize nutrient recovery, processed to retain more nutrition and improve digestibility, and stored properly to avoid nutrition loss. Animals can also benefit from being fed well balanced mixtures and provided with food supplements.

Smil (2000) points out that the overwhelming majority of China's pigs (which account for 90 per cent of the country's meat output) are still not fed well-balanced mixtures but just about any available edible matter, and hence their diet is commonly lacking in protein.[96] As a result, feeding rates are well above the norms prevailing in Western countries and pigs take at least twice as long to reach slaughter weight as a typical North American animal - and their carcasses are still lighter and fattier. He tells a similar story for chickens. The hundreds of millions of chickens roaming China's farmyards take three times as long to reach a lower slaughter weight as North American broilers.[97] China is far from being the most backward when it comes to animal raising.

Conway mentions a number of possible ways that genetic engineering could transfer greater nutrition to feed and forage.[98] Cereals are low in lysine compared with legumes such as peas and lupins. A gene transfer from legumes to cereals would benefit pigs and chicken. On the other hand legumes are deficient in a number of sulphur amino acids required by cattle and sheep. These would benefit from transferring genes from sunflower seeds and chicken egg protein to forage legumes such as lucerne and clover.

The wider use of growth hormones could considerably improve productivity. Those in cattle (BST) increase milk production efficiency by up to 40 per cent per cow and increase feed to beef conversion by about 9 per cent.[99] According to Avery the potential of pig growth hormone (PST) is even greater than for cattle.[100] He claims that PST will produce hogs with up to 60 percent less fat and 15 percent more lean, using one-third less feed grain.

Reducing methane production in livestock could save up to 10 percent of feed because of the energy loss avoided.[101] Smil refers to an additive, produced from a fungus, which reduces methane production by altering the metabolism of ruminant bacteria,[102] and scientists are developing a vaccine that will discourage the production of these methanogenic micro-organisms.[103]

Much can be done to improve the digestibility and nutrient absorption of feed. Better processing is one approach. For example, straw can be made more digestible by a range of chemical treatments, and lignin and cellulose in crop residue can be broken down using a range of methods including fermentation.[104] Plants can be bred to remove or neutralize substances that interfere with digestibility and nutrient absorption. Soybeans and wheat are being genetically engineered to express the phytase enzyme. This neutralizes phytate, a substance that "is widely distributed in cereals and legumes and reduces the absorption of iron, zinc, phosphorus and other minerals in humans and other animals."[105] Researchers have genetically modified lupin so that sheep absorb more sulphur amino acids required for wool and muscle growth. Presently, a large proportion of the acids are broken down in the rumen before reaching the small intestine where they would otherwise be absorbed. The lupin has been modified to contain a sunflower gene that produces a protein that is both rich in sulfur amino acids and stable in the sheep's rumen.[106] And Conway refers to research aimed at inserting genes from crops like sorghum, maize, millet into forage legumes to reduce lignin content and increase their digestibility by 10-30 per cent.[107]

Better Disease Control and Healthcare

Disease significantly affects livestock productivity. Alexandratos refers to estimates showing that at least 5 percent of cattle, 10 percent of sheep and goats and 15 percent of pigs die annually due to diseases.[108] And in the case of animals that survive, productivity is less than that for healthy animals.

Farmers in developing countries will benefit from access to better veterinary services and disease control measures as they modernize, and farmers worldwide will benefit from the forward march of medical and veterinary science, which will improve our ability to prevent, diagnose and control disease.

As with human health, biotechnology will play an important part in disease control. Some progress has already been made in the development of genetically engineered vaccines. For example, researchers at the School of Veterinary Medicine at the University of California, Davis have been developing such a vaccine for rinderpest, a devastating viral disease that is responsible for millions of deaths among cattle herds each year throughout Africa and Asia.[109] The vaccine is produced by transferring two genes from the rinderpest virus into the virus used to make the smallpox vaccine. It is particularly suited to the backward areas affected because it requires no refrigeration and is simply scratched onto an animal's neck or abdomen. Furthermore, a cattle herder can produce thousands of doses by scratching the skin of a calf, applying the seed vaccine, and a week later harvesting the scab in saline solution.

In central Africa sleeping sickness (trypanosomiasis) poses an enormous obstacle to human health and cattle production. A range of measures offer the hope of recovering infested areas for agriculture. These include trypanocidal drugs, aerial spraying, adhesive insecticides, impregnated screens and traps and the use of sterile insects.[110]

Breeding Better Farm Animals

Breeding programs can improve livestock and poultry in a range of ways that ensure that they make the most of what they are fed. This can mean more meat as a proportion of body weight, taking less time to reach slaughter age, greater disease resistance, better ability to process food, lower nutrient needs, more milk or eggs for a given level of feed, improved reproductive efficiency and the ability to consume a wider range of foods.

As with plants, biotechnology will play a major role in future breeding programs. With rapid advances in understanding the genetic make-up of animals, genes that are important for economic performance, such as those for disease resistance or for adaptation to adverse environmental conditions, can be identified and transferred into animals, either through marker-assisted selection or through genetic engineering.[111]

Conway reports on the development of genetically engineered livestock that produce greater quantities of bovine growth hormone.[112] This would enable them to reach optimal slaughter age more quickly, meaning that less of the feed and water consumed would go into just standing around breathing rather than growing.

Another strategy is to reduce the nutrient needs of the animal. Conway suggests the possibility of introducing genes for sulphur amino acid biosynthesis, present in the bacterium E. coli, directly into sheep, bypassing the need for improved fodder.[113]

Given the high losses from disease, the transfer of genes encoding for resistance from other animal species and even from plants could bring significant benefits.[114] For example, genetically modified cows are being developed with a mouse gene that makes them resistant to mastitis of the udder.[115] Currently, antibiotics are used to treat the disease, and the milk cannot be used while the cows are on the drugs.

Smil sees much to be gained in devoting more resources to breeding animals suited to the tropics, a region that has not received anywhere near the attention of the temperate zones.[116] He gives the example of how the water buffalo might be transformed from a working animal into a valuable meat and milk species. It is particularly suited to tropical and sub-tropical climates. Because of the higher count of cellulose-breaking bacteria and protozoa in their guts, buffalo use low-grade roughages more efficiently than normal cattle and have overall lower feed/gain ratios. In addition, buffalo milk is richer in protein and fat than cow's milk. However, a breeding program would be needed to raise their average milk and meat yields, which are far behind those of temperate-climate cattle.[117]

Defending GM Food

While genetic engineering promises to contribute much to the challenge of increasing food production, its opponents have managed to whip up considerable opposition. We are told that it is 'unnatural', tinkering with nature, playing God; that it poses fearful food safety risks and threatens the environment with 'super weeds' and 'genetic pollution'. We are like the sorcerer's apprentice, tinkering with forces we do not really understand and which can get out of our control.

A host of inquiries have given genetically modified food the nod of approval and firmly rebutted the claims of opponents. They consider that the risks are mostly identical to the risks associated with conventional foods and that those that are different are well covered by the regulatory regimes in place. The only real concern is to ensure that, as changes made by genetic engineering become more varied and complex, the science and technology needed to assess them keeps pace.

A range of regulatory bodies are involved in assessing and regulating transgenic crops. In the US there are the Food and Drug Administration (FDA), the Department of Agriculture (USDA) and the Environmental Protection Agency (EPA). At the international level there are the World Health Organization (WHO) and the Food and Agriculture Organization (FAO). Genetically engineered crops have been grown and tested for 20 years and eaten by millions of people on a daily basis since 1996 without any disastrous consequences.

Food Safety

There is no credible evidence that GM foods are less safe to consume than other food. The risks are not of a different nature from those already familiar to toxicologists, and can equally be created by conventional breeding.[118] In fact some transgenic foods currently on the market are identical to the conventional product because the genetic change is not present in the final product. Where there is a change to the final product, it will be easier to evaluate for safety than one developed through traditional breeding because the new method is more precise. Instead of randomly combining all the traits of the two parent organisms, as happens with conventional breeding, genetic engineering permits identification and transfer of only the desirable traits. Scientists know what has been changed and therefore what to look for when evaluating possible risks.[119]

In its short history, transgenic food has faced three main food safety claims: a study which claimed that laboratory rats were being poisoned by GM potatoes; concerns about the use of antibiotic resistance genes in the gene transfer process; and the introduction of allergens. We examine these in turn.

Rats Don't Like Raw Potato

A preliminary study performed at the Rowett Research Institute in Scotland by Dr Arpad Pusztai reported that rats developed intestinal problems after being fed raw transgenic potatoes containing a lectin with known insecticidal properties.[120] The study was heavily criticized by independent experts and by the Rowett Institute itself, which discredited the study entirely after performing an audit of the research.

The British medical journal Lancet made the unwise decision to publish the study. In an editorial disowning it, the editors conceded that they were caving in to pressure from GM opponents who were running the line that failure to publish would amount to suppression. This flies in the face of the normal practice of academic journals of only publishing papers that have successfully run the gauntlet of peer review. With laboratory studies the emphasis is on ensuring that the usual rules of evidence have been applied. This study failed that test totally.

Antibiotic Resistance Marker Genes

There has been some objection to the incorporation of antibiotic resistance marker genes in transgenic crops alongside the gene conferring the desired trait. The antibiotic kills seedlings to which the genes have not been properly transferred, allowing the successfully modified ones to be identified.[121] The worry has been that if these marker genes were present in transgenic food or feed, they could confer resistance on disease-causing micro-organisms in the stomachs of any human or farm animal eating it. There is even a concern that antibiotic resistance could be passed on to people who consume livestock products.

There are a number of answers to these fears: (1) the resistance gene protects against an antibiotic that is not used on humans and animals; (2) the antibiotic resistance may not even be transferred to the final plant variety distributed to farmers, because that variety is the product of a cross between the original transgenic plant and a commercial one; (3) if it is transferred, the chance of it being incorporated into the genetic make-up of micro-organisms is effectively zero, even before allowing for the effect of digestion, which tends to destroy genes and DNA; and (4) recent attempts to get microbes to pick up the trait confirmed the impossibility.[122] Even if the worry had some grounds to it, it is becoming a thing of the past as researchers develop other methods of determining whether a gene has "taken."

Introducing Allergens

Another potential risk posed by GM foods is the introduction of genes from organisms that cause allergic reactions in some people. This would pose a problem if people with allergies were unaware of the danger, which is most likely where the GM food is a widely used ingredient. Outside a relatively small number of genes associated with a limited number of foods, allergic reactions are fairly rare. Most food allergies occur in response to specific proteins in only eight foods: peanuts, tree nuts, milk, eggs, soybeans, shellfish, fish and wheat. Furthermore, the small risk is well under control. Any additional components added to a GM crop are clearly defined, easy to detect and can be tested for any allergic reaction or other toxic effect. These would be picked up in mandatory tests much the same as those for pesticides and food additives. This is what happened in the case of an experimental soybean with an added Brazil-nut protein: it was abandoned once the problem was recognized.

Safety Endorsements

A long list of relevant bodies have concluded that genetically modified food is as safe as any other food. These include: the WHO, the FAO, the United Nations Food Program, the International Society of Toxicology, the French Academy of Science and Medicine, the American College of Nutrition, the American Medical Association, the General Accounting Office (the investigative arm of the US Congress), the National Academy of Sciences (NAS), the Royal Society and the British Medical Association.

Safer and Healthier

GM foods are not only safe, they have the potential to make food even safer and healthier.

Removing allergens Eliminating or reducing the allergenic properties of food would be a major service to the significant proportion of people who suffer from allergies. Using gene silencing techniques, which reduce or shut off production of the offending protein, researchers have already grown low-allergy rice, wheat and soybean, while progress is being made with peanuts and prawns.

Healthier oils In an effort to create healthier fats, researchers have modified the fatty acid composition of soy and canola in several ways. This includes oils with reduced or zero levels of saturates and trans-fatty acids and with high levels of oleic acid.[123] Plans are also afoot to introduce fish type omega-3 into oil-seed crops. This could be achieved by introducing genes from algae and marine micro-organisms.[124]

Better frying potatoes A transgenic potato has been developed which contains a gene for an enzyme which greatly increases starch synthesis. The increased starch content makes the potatoes take up less fat during frying, resulting in a lower-fat product.[125]

More protein In India, a gene was added to ordinary potatoes giving them a third more protein than normal, including substantial amounts of the essential amino acids lysine and methionine. The new gene comes from the amaranth plant which grows in South America.[126] Protein enriched maize and soybeans have also been produced[127] and researchers are seeking to improve the protein content of vegetable staples such as cassava and plantain.[128]

Antioxidants Scientists have produced tomatoes with two and a half times the normal level of lycopene. Lycopene is thought to reduce the risk of several types of cancer and some forms of heart disease. However, it is normally difficult to increase the amount in one's diet, and taking it as a supplement does not work.[129]

Another antioxidant that genetic engineering can help with is vitamin E. Studies show that vitamin E lowers the risk of cardiovascular disease, cataracts and some cancers, and it may slow the progression of degenerative diseases such as Alzheimer's.[130] However, to achieve these results it needs to be taken at levels that are impractical to obtain from our diet, e.g., four pounds of spinach per day or 3,000 calories of soybean oil.[131] Researchers are hoping to increase our intake by tinkering with a gene that converts the less potent gamma form of vitamin E into the more potent alpha form in soybean, corn and canola oils.

Vitamin A Every year some 500,000 children in the developing world go blind because of vitamin A deficiency. Researchers are hoping to reduce this appalling statistic with the help of a gene from daffodils. This produces elevated levels of beta-carotene, which is then converted to vitamin A in the human body. Rice with this gene (called golden rice because of its color) has been crossed with local varieties which are undergoing field trials and will hopefully be available to subsistence farmers in the near future.[132] Work is also progressing on a similarly enhanced mustard, which is grown widely in developing countries for its oil.[133]

Access to iron Most people get too little iron, with almost one third of the world's population believed to be anemic, and possibly around one fifth of all malnutrition deaths caused by a lack of iron.[134] Scientists are working on a variety of rice that has a higher level of iron in the grain and also makes it more accessible.[135] The amount of iron is doubled with the aid of a gene from the French bean, while accessibility is aided by two mechanisms. The first involves a gene from a fungus which counteracts phytate, a molecule that locks up about 95 per cent of the iron in the plant. The second involves a gene from basmati rice which makes a protein that aids iron absorption in the human digestive system.

Food tolerance The vast majority of East Asians and blacks, and many whites, are intolerant of cow's milk. That is because their bodies do not produce enough of the enzyme lactase, which is needed to digest the milk sugar lactose. In France researchers are working to eliminate the problem by giving cows a gene that will cause them to manufacture their own lactase, which will be present in their milk.[136]

Many people cannot eat wheat, oat, rye or barley products because the gluten makes them ill. Consequently, British researchers are working on a process to remove from gluten the part which causes illness while leaving the part that is important for baking.[137]

Medical Uses

As well as the nutritional benefits of genetically modified organisms, there are medicinal ones.

Vaccines in food offer considerable promise for developing countries. Because they would be administered orally, they would avoid the horrendous number of HIV and hepatitis B infections presently caused by unsafe injections. They would also be inexpensive and require no refrigeration. Potatoes, tomatoes, carrots, bananas and rice are being developed containing vaccines for a range of diseases including food-borne E. coli, cholera and hepatitis B.[138]

A plant could possibly provide an edible form of immunotherapy for asthma. Tests with mice are promising. Consumption of engineered lupin plants that contained an asthma allergen from sunflower seeds protected the mice from a large otherwise asthma-inducing dose of the allergen in the air.[139]

Genetically modified plants, bacteria and animals are being turned into little factories churning out cheap ingredients such as proteins, enzymes and hormones for the pharmaceutical industry. The diseases being treated so far include hemophilia, cystic fibrosis and multiple sclerosis.[140]

Menaces can be neutralized. For example, a ryegrass is being developed with fewer hay fever allergens in its pollen[141] and work has been done that may one day lead to a malaria-resistant mosquito.[142] Also, new friends can be made, such as a gene-altered microbe that, when applied to the mouth, elbows out the bacteria that cause tooth decay,[143] animals with tissues and organs suitable for human use[144] and plants that change color in the presence of landmines.[145]

Environmental Scares

GM crops are accused of posing a number of environmental risks. There is said to be a danger from gene flows which can create "super weeds" in the wild and cause "genetic contamination" of other crops. US Bt crops have also been accused of endangering the monarch butterfly.

'Super Weeds'

Critics raise the specter of genetically enhanced crops breeding with wild relatives to create a 'super weed' that could overwhelm the natural environment and curtail genetic diversity both among plants in general and among the existing wild varieties that provide the 'gene pool' for breeding better commercial varieties.

The first question to ask is how likely such inter-breeding is. To begin with, there need to be wild relatives in the region. This rules out wheat, corn, soybean, cotton and potato in most places where they are grown. The main possible concerns would be rice in Asia and Africa, corn and potatoes in Mexico and Central America, wheat in the Middle East and soybean in Korea and China.

The proximity of related species does not necessarily mean that they will inter-breed. They need to flower at the same time, share the same insect pollinator (if insect-pollinated) and be close enough for the transfer of viable pollen. The latter can easily be thwarted by creating a buffer zone planted with traditional crop varieties to minimize any possible effects of pollen flow to a neighboring farmer's field or to a wild plant relative.[146]

Would a trait provide a selective advantage? Some traits are obviously not a risk. For example, tolerance to a particular herbicide is not likely to confer an advantage to a plant in the wild because the herbicide is not encountered there. If the weed becomes a problem on farms or areas of human settlement, it can be controlled with some other herbicide. Other traits - such as resistance to pests or disease, or tolerance of hostile growing conditions such as drought or poor soil - could theoretically give a weedy relative an advantage. However, the likelihood is diminished when we keep in mind that wild plants by their nature are already stress tolerant. If they were not they would simply die out. Domesticated plants on the other hand have lost much of the hardiness of their ancestors. Farmers select for edible yield while making up for any drop in stress tolerance by a range of farm practices such as irrigation, soil improvement and pest control. Any reintroduced stress tolerance would have to be quite strong to compete with the wild varieties. One example is sunflower which has been given a gene from wheat to resist white mould. If this genetically modified variety were introduced, gene flow would be inevitable because the crop is grown in the same regions as the wild varieties. However, it would have very little effect because the latter already have resistance to white mould.[147]

It should also be kept in mind that the risk faced is identical to the risk from domestic plants bred conventionally for stress tolerance. Ironically, genetic engineering opens up a number of ways of ruling out gene flow. One approach is to incorporate what has been called a 'terminator' gene, which renders the plant sterile; another is to pass on the attribute on the maternal side so that it is not transmitted through the pollen.

Genetic Contamination

Similar to the 'super weed' ruckus was the one made over the claimed appearance of genes from genetically modified maize in Mexican landraces. This was dubbed 'genetic contamination'. Landraces are the varieties developed by small-scale farmers over the centuries and have evolved through selection to thrive under particular environmental conditions and to meet local food preferences.

It was finally determined that the claim was unfounded.[148] However, even if there had been a gene transfer it would have been no different in nature from those involving conventional modern varieties. These have been occurring for many decades without causing any problems. If plants are superior from the farmers' point of view, their seeds will be retained. If they are inferior they will not be.

Monarch Butterfly

Opponents of GM crops have made a big deal out of a supposed threat to monarch butterflies from Bt crops.[149] The butterfly is something of an enviro icon, and no anti-GM rally is complete without the presence of a number of eco-bubble-brains dressed up as butterflies. The kerfuffle started when the journal Nature in 1999 published a paper by Dr. John Losey of Cornell University showing the toxic effects when monarch butterfly larvae in a laboratory study were fed their favorite food, milkweed, covered with pollen from Bt corn.

A number of objections have been raised to the study. To begin with, only one type of Bt corn pollen was tested from among the many types of Bt corn in use. Recent studies indicate that a few types of Bt corn pollen may kill or slow the growth of monarch caterpillars, while other types of Bt pollen have no harmful effect.

More importantly, the laboratory results were in no way indicative of the real world risks to the butterfly. In the field, the risks to the larvae are minimal for a range of reasons: corn pollen is produced for only a short time during the growing season; farmers control milkweed in and around their fields, just as they control other weeds; corn pollen is heavy and is not blown far from corn fields by the wind; and even if milkweed were within a few meters of cornfields, pollen density on the leaves would not be high enough to pose a danger.

The EPA, a body that is more often than not the greeny's friend, has given Bt crops a clean bill of health. After evaluating the evidence, the EPA concluded that Bt corn has no impact on monarch butterfly populations and that a hazard in the laboratory does not translate into a risk in nature.[150] Finally, if there had been a problem with Bt corn it would be resolved by varieties currently being developed that only express Bt in the stalk. Only insects that actually attack the plant would have any possibility of being affected.[151]

Environmental Benefits

Often ignored are the environmental benefits of GM crops. These include reducing impacts on the environment and providing remedies for past damage.

Less Use of Pesticide

The insect resistance of Bt crops has led to a greatly reduced use of pesticide. US corn growers, for example, have reduced pesticide treatment for the European corn borer by about a third,[152] and according to one projection the use of pesticide on the corn crop will drop by 70 per cent once resistance to corn rootworm is also incorporated into seeds.[153] It has been reported that Chinese farmers of Bt cotton have slashed their use of pesticides by about 80 per cent.[154] In Australia, pesticide use on Bt cotton crops was about half that on conventional crops, and with a newly released version, trials suggest a 75 per cent reduction.[155] In India the use of Bt cotton has cut pesticide spraying by two thirds.[156] The recently announced blight-resistant potato promises large reductions in the use of fungicide and insecticide, given the high levels currently used to control the disease.

The adoption of herbicide resistant crops is also leading to a more environmentally friendly herbicide regime. Because the crop is resistant to it, a post-emergent herbicide can be sprayed over the crop, killing any weeds that have sprouted. The herbicide used - glyphosate, with the trade name Roundup - is required in lower quantities because it kills such a wide range of weeds, replacing the need for a multitude of herbicides. It is also environmentally quite benign. It has extremely low toxicity to people and animals, and it binds well to the soil until it completely breaks down, so very little can run off into water supplies.[157]

Less Tilling of the Land

Because herbicide resistance allows crops to be sprayed after they have been planted, herbicides can control weeds more effectively, reducing the need to till the soil for that purpose.

Reduced tillage has a range of benefits in terms of conserving agricultural resources and the environment generally. It dramatically reduces soil erosion, which affects fertility and clogs up rivers and streams, carrying pesticides and fertilizer with it. The crop mulch shades the ground and slows evaporation, and the improved soil structure resulting from less plowing actually increases the movement of water into the soil following rain or irrigation and holds it there, which means less irrigation is necessary.[158] Low tillage also means fewer tractor passes and less fuel consumption. According to one study, no-till saves on average about 3.9 gallons of fuel per acre.[159] Studies by groups such as the Conservation Technology Information Center and the American Soybean Association all attest to the fact that herbicide resistant crops have significantly encouraged the use of low-till methods.[160]

Higher Yields Mean Less Pressure on Resources

A primary objective of GM crops is to increase yields, and to the extent that they succeed they lessen the demand for resources such as land, water and energy and leave more land for wildlife. In the case of the crops that have been used to date, the gains have been through significantly reducing the losses caused by weeds and pests.

Environmental Remediation

Genetic engineering is about to bring a revolution in bioremediation, the use of organisms to remove contaminants from water and soil. University of Georgia researchers have modified a poplar tree so that it can suck mercury from the soil, with the help of a bacterial gene which bestows a tenfold increase in mercury tolerance.[161] Another group of researchers have added a gene from the E. coli bacterium and another from soybeans to turn a distant relative of cabbage into a connoisseur of arsenic. The plant pumps arsenic from the soil and stores it in its leaves, where it can be easily harvested and disposed of.[162] Biologists at the University of California at San Diego modified a relative of the mustard plant so that it sucks up various heavy metals into its stems and leaves, including lead, arsenic, mercury and cadmium.[163] At the University of Washington researchers have inserted a mammalian liver enzyme into a tobacco plant, enabling it to absorb and degrade a variety of chemicals, including the most widespread groundwater contaminants, the chlorinated solvents.[164] And researchers at Ohio State University have engineered a form of algae to extract copper, zinc, lead, nickel, cadmium, mercury and other metals from contaminated water.[165]

Crop Lands, a Declining Resource?

While the prospects are good for more productive plants and livestock, will achievements in these areas simply be compensating for a decline in the land resource base rather than actually increasing output? This will mainly depend on the following factors which we will discuss in turn:

·        the amount of extra land that can be brought into crop growing;

·        the encroachment of the built environment onto cropland; and

·        the extent of soil degradation which either makes land unusable or seriously reduces yields.

Extent of the Resource

How much extra land could be opened up to crop production? It has been estimated that the 1.5 billion hectares currently used for cropland represent about 36 per cent of the land that is to some degree suitable for that purpose.[166] In other words, there is an extra 2.7 billion hectares. This gives a total of 4.2 billion hectares, which is about a third of the non-ice-covered land area. The remaining 9 billion hectares or so are excluded mainly because of unsuitable soil and/or climate.
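
For readers who like to see the arithmetic, here is a minimal sketch (a few lines of Python). The 1.5 billion hectare and 36 per cent figures are the estimates just cited; the rest simply follows from them:

    # Back-of-the-envelope check on the land figures cited above.
    cropland = 1.5e9                      # hectares currently under crops
    share_of_suitable = 0.36              # cropland as a share of land suitable to some degree
    suitable_total = cropland / share_of_suitable    # about 4.2 billion hectares
    extra_suitable = suitable_total - cropland       # about 2.7 billion hectares
    print(round(suitable_total / 1e9, 1), round(extra_suitable / 1e9, 1))   # 4.2 2.7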

Of course, most of this extra 2.7 billion hectares would never become available. Some of it is covered in human settlement or is too inaccessible, while a large part is taken up with forests and other natural areas that are (or should be) mainly committed to uses in their existing state such as conservation, water catchment and timber.

However, it only requires a relatively small proportion of this area to be available and of reasonable quality for it to represent a significant addition to the crop area. According to Buringh and Dudal (1987),[167] out of a total forest land of 4.1 billion hectares, 100 million have high crop potential and 300 million medium potential, while out of 3 billion hectares of grasslands, 200 million have high crop potential and 300 million medium potential. With current cropland at around 1.5 billion hectares, a few hundred million would be a significant addition. This includes some of the old cropland in places such as North America, Europe and Argentina which could be returned to use if costs were lower or prices higher. Of course, in the case of land currently used for grazing, one would have to take into account the loss of livestock production.

Then we have large areas which are presently not used because of degradation or natural infertility but which could be brought into use with improved soil management methods and new crops that can tolerate the poor soil. These include the large areas with acid soils, particularly in South America and central Africa[168] and also some of the land that is very saline either naturally or through human mismanagement.

Next we have good land which up till now has been unusable because of the lack of fresh water. This could be brought into use by the desalination of sea water and brackish groundwater. This process, which is discussed in more detail in the section on water resources, is getting cheaper by the day with new innovations and industry maturity.

So overall, there seem to be good reasons to conclude that, while there are not the vast virgin lands of yesteryear, the extra land still available will nevertheless provide a sizeable cushion against the impact of increased human settlement and soil degradation. In fact these would have to be quite large in order to actually reduce the land resource base.

Encroachment on crop land by the built environment is primarily an issue for developing countries and the US. Developing countries will house 90 per cent of the population increase expected over the next half century and at the same time will undergo a great deal of space consuming economic development. The US is the only major developed region expected to undergo a population increase in the foreseeable future, due mainly to high immigration and high fertility rates among immigrants.

Europe is expected to shrink from its present 726 million to 632 million in 2050, opening up the prospect that the area under built environment may actually decline and the availability of cropland increase.[169]

The FAO estimates that on average people in developing countries use about 0.04 hectares of built environment per head.[170] With the population expanding by two billion or so by 2025, an extra 80 million hectares would be required by then. If this were all cropland it would represent about 5 per cent of the total. By the time the population reaches 9-10 billion mid-century, the increase between now and then would be 120-160 million hectares, or 8-10 per cent of total cropland.

Of course not all of this will in fact be suitable for crops, or be premium grade if it is. Nevertheless, it is probably correct to assume that a significant proportion would be, given that urban centers are often sited on fertile agricultural land in coastal plains or river valleys. Alexandratos surmises that about 60 per cent of any increase would be on potentially arable land, including both actual cropland and land that could be made usable.[171] So this would mean about 3 per cent being taken out by 2025 and around 5 or 6 per cent by 2050.
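
The encroachment arithmetic in the last two paragraphs can be laid out as a simple sketch. The per-head figure, the hectare ranges and the 60 per cent arable share are those cited above; everything else is just multiplication:

    # Rough arithmetic behind the encroachment estimates for developing countries.
    ha_per_head = 0.04                    # FAO figure: built environment per person
    cropland = 1.5e9                      # total world cropland, hectares
    extra_2025 = 2.0e9 * ha_per_head      # about 80 million hectares by 2025
    extra_2050_low, extra_2050_high = 120e6, 160e6    # range cited for mid-century
    print(extra_2025 / cropland)          # ~0.053, i.e. around 5 per cent
    print(extra_2050_low / cropland, extra_2050_high / cropland)   # ~0.08-0.11
    # If about 60 per cent of the increase falls on potentially arable land (Alexandratos),
    # the cropland actually lost is roughly 3 per cent by 2025 and 5-6 per cent by 2050.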

With the rate of urbanization increasing during this period, a growing proportion of the expansion in the built environment will take the form of urban expansion. According to one study, the annual loss of arable land to urban uses across all developing countries is estimated at 476,000 hectares.[172] That is about 12 million hectares over 25 years and 24 million over 50 years.

When looking at urban expansion in developing countries, it needs to be kept in mind that a significant proportion of urban land is still used for agriculture by households, for example, 28 per cent of Beijing and 60 per cent of Bangkok.[173] These urban activities can take many forms:

Horticulture takes place in home sites, parks, rights-of-way, rooftops, containers, wetlands, and greenhouses. Livestock are produced in zero-grazing systems, rights-of-way, hillsides, coops, peri-urban areas, and open spaces. Agroforestry is practiced using street trees, home sites, steep slopes, within vineyards, green belts, wetlands, orchards, forest parks, and hedgerows. Aquaculture is practiced in ponds, streams, cages, estuaries, sewerage tanks, lagoons, and wetlands. Food crops are grown in home sites, vacant building lots, rights-of-way for electric lines, schoolyards, churchyards, and the unbuilt land around factories, ports, airports, and hospitals.[174]

Another thing to keep in mind is that in many places there is going to be an absolute drop in the rural population, and the corresponding decline in rural settlement will free up some land for crops.

In the US, the total developed area, including non-urban infrastructure, was estimated at 5.2 per cent of the total land area in 1997.[175] That is 48.4 million hectares, or 0.17 hectares per head. With the US population expected (on the most likely assumptions) to increase by another 100 million over the next half century, that would mean another 17.2 million hectares, assuming the average area per head remains the same.

US cropland covers 455 million acres (182 million hectares).[176] If all of the increase in developed area were on cropland, it would represent a 9 per cent reduction in the latter. However, that is not likely to be the case, given that many fast-growing areas such as Florida and Arizona do not have high concentrations of prime cropland.[177]
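
A similar sketch for the US figures, using the developed area, per-head figure, population increase and cropland area given above:

    # Rough arithmetic behind the US encroachment estimate.
    developed_1997 = 48.4e6      # hectares of developed land in 1997 (5.2 per cent of the total)
    per_head = 0.17              # hectares of developed land per person
    extra_people = 100e6         # expected US population increase over the next half century
    extra_developed = extra_people * per_head        # about 17 million hectares
    us_cropland = 182e6          # hectares (455 million acres)
    print(extra_developed / us_cropland)             # ~0.09, i.e. roughly 9 per cent at most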

Curiously, the rural population in the US takes up a lot of residential land. In 1997 this was estimated to be 73 million acres (30 million hectares), typically 8 hectares or more per household.[178] Presumably a significant proportion of this could be placed under crops if costs were lower or prices higher.

So, to sum up, while the encroachments by human settlement are bound to be significant, they will not be on a scale that will make them a threat to food security. This is particularly so when we keep in mind that the process is gradual and that much of the increase will be a generation or two away, at a time when agriculture should be much more productive than it is now.

Soil Degradation

The next question is whether continuing soil degradation is going to seriously undermine agriculture's resource base. Farming practices can harm the soil in a range of ways. Water and wind erosion cover the biggest areas and are the main problem for rain-fed cropland. For irrigated land, the main concern is increasing soil salinity.[179] Other significant forms of degradation include loss of organic matter and nutrient depletion.

Erosion occurs where soil is dislodged and removed by water or wind. The impact of any level of erosion on productivity will depend on the depth of the topsoil. Salinity is caused by excessive irrigation and poor drainage leading to the build up in the soil of salt left there by evaporating water. Nutrient depletion is due to insufficient application of fertilizer or to applications in the wrong proportion. Low levels of organic matter lead to a degradation of physical properties of the soil so that it loses the ability to hold water, and to retain and release nutrients.

The extent of degradation is not well understood. There is a serious lack of detailed studies and conflicting interpretations of what is known. For example, in the case of India, estimates by different public authorities vary from 53 million up to 239 million hectares.[180]

Notwithstanding this uncertainty, there is general agreement that while soil degradation is a major problem most land is not seriously affected. Studies reviewed by Scherr suggest that soil quality on three-quarters of the world's agricultural land has been relatively stable since the middle of the twentieth century.[181] Also, at least to this stage soil degradation has not had a serious overall impact on crop productivity.[182]

Of particular importance is the fact that degradation is not a serious constraint on food production in the temperate regions of the world. These include most of North America and Europe. Their soils are the result of glaciation in the last Ice Age, are deep and fertile and are fairly resistant to degradation.[183] And they are better managed by modern agriculture.

Furthermore, a lot of soil degradation is on lands that while extensive in area are not major contributors to total food production because of inherently poor growing conditions. The climate or terrain is unsuitable and the soil is inherently of low fertility.[184]

Soil degradation has also had its share of alarmism. During the 1970s and 1980s, so-called desertification received a great deal of attention. It was believed that deserts such as the Sahara were spreading irreversibly. However, since then remote sensing has established that desert margins ebb and flow with changes in the climate, and studies have revealed the resilience of crop and livestock systems and the adaptability of farmers and herders.[185] In the case of wind erosion in North America, past concerns showed insufficient recognition of the fact that erosion usually involves soil being blown from one field or farm to another and hence no loss to agriculture. According to Crosson and Anderson, US studies have found very small long-term yield effects due to erosion. They indicate that if erosion were to continue at the same rate as in 1982 for 100 years, national average yields in the US would only be reduced by 3-10 per cent.[186]

Arguably the main soil problems are (1) salinity and the excessive use of nitrogen relative to other nutrients in a lot of irrigated farming in Asia[187] and (2) the grossly inadequate use of fertilizer of any kind in Sub-Saharan Africa.

There are a range of countermeasures that can be taken against degradation. In some cases problems can be remedied and in others preventive measures adopted to avert or retard further damage.

To a considerable extent the ability to take effective preventive and remedial action depends on technical capacity which in turn is a function of the level of modernization and the stage reached by science and technology. Where agriculture is backward, many soil and land management measures are not possible because there is not the access to the knowledge, infrastructure, inputs and equipment made possible by modern science and industrial development. Backwardness limits knowledge of the soil and its vulnerabilities and the ability to keep track of and analyze any changes in its condition. There are not the resources to carry out measures such as earth movement to prevent erosion and better irrigation systems to reduce salinity.

A change of institutional or political conditions will in many cases also make a major difference. Land management and agriculture generally will benefit if there is a government that is willing and able to progressively increase infrastructure, extension services and research and does not simply see agriculture as something to be taxed for the benefit of the ruling elite and its urban support base. A change in the incentives facing farmers in developing countries would also improve how they respond to the problem. Greater land ownership among farmers would mean a greater willingness to invest in measures to conserve land because their future rights to use it are more secure. And ending the common policy of subsidizing water and nitrogen would also assist in the battle with salinity and nutrient problems. Funding of research is critical in soil management as with other aspects of agriculture.

Below we look at the main forms of degradation and the measures that can be adopted to deal with them.

Erosion

Wind and water erosion can be prevented by a range of measures. The movement of wind and water can be impeded or diverted by planting trees, hedgerows and grass strips and the construction of terraces and storm water drains. And, as we mentioned above, the soil can be protected by conservation tillage which minimizes disruption of the soil surface and maintains a cover of plants or plant litter.

Salinity

Estimates of the rate at which land is being seriously impaired by salinity vary considerably. One claims that 0.5 million hectares per year are being affected while another claims 2 million hectares.[188]

Measures to prevent the problem include additional drainage, better canal lining or use of pipes, and more judicious water applications. Remedial action can also be taken where the problem has emerged. Planting salt tolerant trees and grasses, which "suck up" the salt, is one approach,[189] and plants are being bred that are particularly suited to this job. Another approach which can be applied in some cases is to lower the water-table below the root zone and flush the salts away to newly constructed subsurface drainage systems. According to Conway writing in the mid 1990s, the cost of doing this in India was of the order of $325-$500/ha.[190]

Loss of Organic Matter

For loss of organic matter, the answer often lies in leaving more of the crop residue in the field and making greater use of livestock dung. However, in many parts of the developing world these are used for fuel, so improvement may have to await ready access to modern energy sources such as electricity and fossil fuel.

Nutrient Mining

In some particularly backward regions, especially Sub-Saharan Africa, only further economic development and higher incomes will end what is often referred to as nutrient mining, where nutrients taken from the soil either by plants or by leaching are not replaced by adequate applications of fertilizer. In this region fertilizer use per hectare is only about 10 per cent of the global average[191] and will have to increase about fourfold to meet nutrient needs at the current level of production. Generally more nitrogen is required than potassium, and more potassium than phosphorus.[192] Prices for fertilizer are high because of inefficient local production, high shipping costs for imports and poor transport. Where transport is particularly poor, fertilizer is simply not available. And most farmers could not afford it even if it were delivered to their door at world prices.

In other areas such as China and India nutrient mining occurs even though relatively high amounts of fertilizer are used. This is because the mix is not in the right proportions for the plants' needs. Given that plants use nutrients in a certain proportion to each other, an increase in the external supply of one nutrient enables plants to extract more of the natural supply of the other nutrients in the soil. The main problem is the overuse of nitrogen relative to the other macronutrients, phosphorus and potassium, and to micronutrients such as sulfur and zinc.

Farmers need a change in incentives so that they are less drawn to the short-term gains from nitrogen use and more heedful of the longer-term effects of nutrient depletion. This requires reduced poverty so that they are not living hand to mouth, changes in property rights so that they have more of a stake in the future productivity of the land, and an end to the common practice of subsidizing nitrogen. Better knowledge would also assist. This requires a greater general appreciation of the problem by farmers and the means to carry out the necessary soil testing and plant analysis.

Summing up on Land

So to sum up on the state of the land resource, the evidence indicates firstly, that there is still a significant amount of extra new land available; and secondly, that recent degradation has not been enough to significantly slow down average crop yield increases and large areas are not seriously affected. While this does not rule out the presence of a real and increasing problem, it does suggest that the resource as a whole is not in imminent, grave danger. Whether the situation improves or deteriorates in the future will depend on the extent that remedial and preventive measures are applied, and this in turn depends mainly on the pace of economic and social progress.

Water

The other major resource required by agriculture is water. As with land, there are concerns about whether the resource will be sufficient to meet our food needs. This will depend on the following factors:

·        how far we can increase our use of rain, rivers, lakes and groundwater;

·        how well we can stem or reverse the depletion or degradation of presently exploited resources;

·        the extent that we can become more efficient in our use of water, both in food production and in other activities that compete with agriculture for water; and

·        the prospects for tapping into the non-conventional resources, namely salty water and polar ice.

Harnessing More of the Resource

Some regions get all the water they need from the rain that falls on the field (green water). For others rain water is insufficient or at the wrong time. They have to rely on water brought in from elsewhere (usually by rivers) or local rainwater which has been stored in aquifers, dams and lakes (blue water). This is drawn off and distributed by irrigation systems.

Presently around 280 million hectares are under irrigation.[193] Over 70 per cent of this area is in developing countries, which are often in regions that are either arid or have monsoons that bring the rain all at once. China and India have about 20 per cent each.[194]

There is scope to expand this area significantly, although by how much is open to some dispute. Presently about 10 per cent of blue water is diverted or pumped for human use.[195] Much of the remainder is unavailable for a range of reasons. For example, rivers run through regions unsuited to farming or the local farmland has all the water it needs, and some water is required for navigation and environmental flows. The FAO has published what some consider an upbeat estimate of 200 million hectares of extra irrigated land in developing countries.[196] What we cannot possibly expect to achieve is the kind of expansion that occurred over the last 50 years when withdrawals were doubled[197] and the area of irrigation increased two and a half fold.[198]

Depletion and Degradation

Part of any expansion will have to make up for some deterioration in existing systems. Each year infrastructure becomes more dilapidated, more silt builds up in reservoirs, and aquifers become more depleted and in some cases mined out.

Turning these problems around will be one of the objectives of political and economic development over coming decades. The level of infrastructure investment in both rejuvenation and expansion will need to increase considerably from what it is at present. Ensuring that schemes are properly maintained will also require a revolution in management which is generally incompetent and corrupt.

Moves are afoot in many countries to reform their systems. This includes greater accountability for performance and participation by farmers in various aspects of management,[199] separation of service delivery from regulatory functions, and contracting out of operations and maintenance tasks to the private sector or non-government organizations.

The depletion of aquifers is a serious problem in some areas, including many parts of India, China and the United States.[200] How important are these aquifers? It has been claimed that 10 per cent of the world's food production is dependent on aquifers that are being depleted.[201] Over-drawing of groundwater was estimated at 200 cubic kilometers in 1995, 8 per cent of withdrawals by agriculture.[202] On the assumption of equal water productivity this over-drawing would provide about 4 per cent of food, given that irrigated land as a whole provides 40 per cent. However, because groundwater irrigation is more reliable than surface irrigation, its actual contribution will be higher than that.
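
The share of food output resting on this over-drawing can be sketched in a couple of lines (the 8 per cent and 40 per cent figures are those cited above, and the equal-water-productivity assumption is the one stated in the text):

    # Rough share of world food production dependent on aquifer over-drawing.
    overdraw_share_of_ag_withdrawals = 0.08   # over-drawing as a share of agricultural withdrawals
    irrigated_food_share = 0.40               # share of world food grown on irrigated land
    food_share = overdraw_share_of_ag_withdrawals * irrigated_food_share
    print(food_share)   # 0.032 - of the order of the "about 4 per cent" cited above,
                        # before allowing for groundwater's greater reliability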

The main problem with most aquifers is that there is no regulation of their use. They are a common pool resource and any individual farmer can drill a hole and install a cheap pump. This is compounded by the fact that in many countries farmers have managed to obtain large subsidies for electricity and diesel fuel, the biggest recurrent pumping costs. Governments will have to bite the bullet and take on the politically difficult task of removing these subsidies. Access to the resource also has to be regulated. One approach is to provide farmers with a right to a certain quota which would be assigned once a study had determined what was a sustainable level of total use or acceptable level of depletion. Farmers who pump more than their quota would then be either charged very high prices or forced to buy pumping rights on an open market from others not using their full entitlement.

On the supply side, depletion can be addressed by taking measures to increase the rate of aquifer recharge through various 'water harvesting' techniques which capture some of the rainwater that presently evaporates. These include containing the water behind dams or in ponds so that much of it can be absorbed into the ground, and digging recharge wells or cisterns that drain water from surrounding higher ground. The water captured would include floodwaters that do not flow into streams and rainwater that falls on areas other than cropland, such as pasture and wasteland. One particular proposal is to encourage, through subsidies if necessary, flooded paddy rice cultivation in the wet season on lands above the most threatened aquifers.[203] At this stage it is not known how far artificial recharge measures could go in countering large-scale depletion.

More Efficient Use

An alternative to increased water supply is increased efficiency of use. Doubling the output from a given amount of water is just as good as doubling the amount of water. There are many ways that farmers can get more crop from each drop of water applied to the field. These can be divided into measures that increase the efficiency of water application in the field and measures that improve the plant's response to water.

The two traditional methods of water distribution that still dominate irrigation in most countries are flood irrigation, which covers the whole field with a layer of water, and furrow irrigation, which channels water from ditches to crops along slightly inclined parallel rows.[204] With these methods significant amounts of water are lost to evaporation, leaching or runoff. Better methods from this point of view are sprinklers and drip irrigation, where the water is delivered by pipes running along the surface or underground near the roots. To date these newer methods have not been widely adopted, although where they have, the results have been dramatic. Cyprus and Israel are leading examples and show that they can be put to widespread use.[205]

Field management measures can ensure that both irrigation and rain water are better used. Increased crop residues or ground cover, made possible with low-till techniques, help retain water or melting snow that would otherwise run off or evaporate. An increased level of organic matter in the soil improves its ability to absorb and retain moisture.[206] Land leveling, with the help of cheap laser technology, can benefit both irrigated and rain-fed agriculture by reducing run-off and ensuring that water is distributed evenly. It has been reported that field leveling in a region of Arizona led to a decline in water use of between 20 and 32 per cent and yield increases of 12 to 22 per cent.[207]

Water efficiency can also be improved by increasing our knowledge of the plant's water requirements at various stages in its growth. This knowledge can be combined with equipment monitoring the field for information about soil moisture and the condition of the crop. This can even be used to trigger water applications.[208] Measuring soil moisture can be performed by fairly simple and inexpensive devices such as gypsum blocks containing two electrodes which are buried at several locations and depths in root zones. A pocket-size impedance meter can then measure changes in moisture content.[209]

Of course, harnessing the water and applying it to the land as efficiently as possible is only half the story. The ultimate measure of efficiency is the final harvest achieved. This will also depend on the choice of plant varieties and measures taken by the farmer to maintain soil quality and to protect the crop from various stresses.

The development of plant varieties that put more of their effort into producing grain rather than stalks or leaves, or that speed up or bring forward the grain growing phase will mean more final output for a given amount of water. Likewise, having plants that cope better under stress means less water is wasted on plants that end up dying or underperforming.

Water efficiency can also be improved by using plants that require less water. We can switch to less thirsty crops. For example, growing sorghum instead of corn as stock feed would lower water needs by 10-15 percent and sunflowers instead of soybeans as an oil crop would reduce water by 20-25 percent.[210] Or we can breed plant varieties that require less water. This includes plants that can grow in drier areas where nothing of interest could grow before. Another approach is to develop plant varieties that are more tolerant of saline water hence creating a water resource out of what was otherwise unusable.

These methods of increased efficiency in water use should take us a long way to ensuring that the water supply is sufficient for our needs. An important impetus to greater efficiency would be an end to the heavy subsidizing of water through under-pricing. At the moment prices are generally nominal and collection rates low.

Competition from Non-Agricultural Uses

Agriculture can expect to face increasing competition for water from non-agricultural uses. At the moment they make up about 30 per cent of withdrawals - about 20 per cent for industry and 10 per cent for municipal use.[211] This demand will increase in developing countries as their populations and economies grow. However, there is much that can be done to keep non-agricultural uses of water in check.

Having a water system that does not leak is a good start. In many cities in developing countries a large proportion of water is lost to leaks in the system because of poor maintenance.[212]

Another part of the solution is to achieve most outcomes using less or even no water. For cooling in electricity generation, water-free technologies can be used such as dry cooling towers. In production, innovation can bring forth new water saving technologies. For example, at the Oberti olive plant in Madera, California, where water is used in the curing process, they almost halved water use by reducing curing time from seven days to three.[213] Consumers can have the same need met with a less water intensive product. For example, reading the news on the Net requires no water whereas producing the paper used in the traditional tabloid or broadsheet requires a considerable amount.

It is not hard to imagine a whole range of innovations that could reduce water consumption in the home. Water-saving shower heads could be more widely used. Toilets that use little or no water could significantly reduce domestic consumption. A waterless, electrically powered toilet has been developed which has no odors or insect problems, and safely and effectively biodegrades human wastes into water, carbon dioxide and a soil-like residue.[214] Fumento cites the case of a new train toilet which sanitizes the waste and returns water for flushing and hand washing:

Something called a macerator chews up waste and feeds it into an aerated tank containing membranes coated with muck munching bacteria. The solids are broken down primarily into carbon dioxide and water, while the gas is pumped off and bacteria free water passes across the membrane. Some of the water is sterilized with ultraviolet light and returned to the flushing tank. The rest goes through a reverse osmosis device that filters out the remaining chemicals, such as proteins and urea, so that the water is entirely microbe free and can be used for hand washing. This way the system needs servicing only once a month to remove built up sludge.[215]

No doubt the computerized and automated kitchens of the future will be able to make more efficient use of water both in cooking and dishwashing. Future washing machines may work with less water or have their own recycling systems. There is even talk of waterless washing using nano-machinery that imitates the behavior of enzymes. It is also possible to imagine the development of fabrics that repel dirt and grease.

Reuse is another way of saving water. In some cases no treatment is required, e.g., washing or cooking water diverted to the garden or to the toilet cistern. In other cases various levels of treatment would be needed. For example, sewage and industrial effluent can be cleaned up using technologies that are now getting cheaper and more effective. This can be fed back into the municipal water supply or made available for specific uses.

San Diego is looking at a proposal to mix recycled sewage water with the city's drinking water.[216] The sewage water would undergo conventional tertiary treatment and then advanced treatment steps, including micro-filtration pretreatment, reverse osmosis, disinfection and nitrate removal. The re-purified water would then be blended with other local supplies. Upon withdrawal, the water would undergo final treatment, including conventional coagulation, mixing, clarification, filtration and disinfection, before introduction to the city's pipelines.

Waste water can also be made available to agriculture. In water starved Israel, for example, well over half of waste water is used for irrigation after treatment and it makes up about 20 per cent of the irrigation total.[217]

In many industrial activities, there is a great deal of scope for internal recycling. New water would only be needed as a top up where there are losses from evaporation or leakages, or the scale of operation is increased. A major user of water is the power industry for cooling. This is an area where far more recycling can be applied. Even in the sensitive area of food processing recycling seems to be an increasing option. The Californian olive processing plant previously mentioned reuses 80 per cent of its processing water with the aid of a membrane filtration system.[218]

Households, municipalities and industry can also do more to harvest their own rain water. Rain water runoff that goes down drains can be better used. Rain can be collected from the roofs of houses, factories and other large buildings, and stored in tanks. The Frankfurt Airport terminal, for example, collects water from its vast roof for such low-grade water needs as cleaning, gardening, and flushing toilets.[219]

So in a nutshell, households and industry can reduce their competition with agriculture by finding less water intensive ways of meeting their needs, by making more discarded water available to agriculture and by harvesting their own rain. All these approaches can be encouraged by having water charges that reflect the true cost of water and encourage more frugal use and the development of more water efficient technologies.

Non-Conventional Water Resources

There are two non-conventional sources of fresh water that we need to consider: (1) desalinated sea water and briny groundwater; and (2) polar ice. They are non-conventional in the sense that they have only been tapped on a small scale and would require a considerable amount of technical development before they could play a bigger role. They would also require a lot more capital and energy than the conventional resources.

Desalination

Seawater covers 70 per cent of the planet and comprises 95.5 per cent of all water. Brackish groundwater is found in vast underground aquifers throughout the world and often far inland in otherwise dry climates. It includes the vast supplies that accompany fossil fuel extraction.

So far desalination has only been put to limited use, because of the cost. Desalting capacity is about 32.4 million cubic meters (or 8.6 billion gallons) per day.[220] This is a tiny fraction of our present fresh water consumption. About half of this capacity is installed in Persian Gulf countries where water from other sources is very limited and cheap energy to run the process is available.[221] Desalination plants can also be found at specific locations, including island resorts, where there are no alternatives and demand at the high price is sufficient to justify a facility economically. They are also sometimes used to bring slightly brackish water up to a standard where it can top up conventional supplies. Investment in new capacity appears to be quite healthy. In the US, for example, new large-scale facilities are being built or planned in Southern Florida, Southern California and El Paso, Texas.[222] The facility in Tampa Bay, Florida, is the largest desalination facility so far in that country,[223] while San Diego hopes that planned facilities will provide 15 per cent of its water from the ocean by 2020.[224]

Virtually all desalination capacity is provided either by thermal or membrane units. Each provides roughly half of capacity, although membrane technology is edging ahead.[225] In the distillation process salt water is heated to boiling point to produce water vapor which is then condensed to form fresh water. In the membrane process the salt and water are physically separated. Electrodialysis (ED) uses voltage to separate the salts, whereas reverse osmosis (RO) uses water pressure.[226] Most membrane facilities use RO while significantly fewer use ED. Generally, distillation and RO are used for seawater desalting, while low pressure RO and electrodialysis are used to desalt brackish water. Treating brackish water is far cheaper than treating seawater.

Costs have fallen significantly over the last decade and this is expected to continue. Like other industries, desalination has benefited from a range of advances such as better materials to choose from and improved computerized management of operations. Membranes are achieving faster flow rates, longer lifespan, less fouling[227] and greater energy efficiency.[228] At the same time their cost of manufacture is declining with increased automation.[229] It is believed that a better understanding at the molecular level of the RO process will lead to faster flow rates and better salt rejection.[230]

Completely new technologies may offer possibilities for much greater cost reductions. A number are already in view and expectations are that, with a greater research effort, others could be around the corner.

One at the early commercial stage is called the Rapid Spray Evaporation process, which ejects water through a nozzle into a stream of heated air. Because the water is a fine mist it creates a vast surface area which allows the water to evaporate quickly, leaving behind salt in a dry form or as a supersaturated solution easily converted to sea salt.[231] The company developing the technology, AquaSonic International, has at the time of writing started producing small portable units and is in the throes of developing the technology for large-scale plants.

A modified reverse osmosis process is being developed at New Mexico Tech.[232] It uses cheap clay membranes that do not require the usual water pretreatment, operate under lower hydraulic pressure, produce a solid salt waste and yield 100 per cent water recovery.

Reassessing some of the many past failures in the light of subsequent advances in scientific and technical knowledge could prove fruitful. For example, knotty design problems may be sorted out with new computational modeling techniques and the technology made feasible with new materials and production processes.[233]

Then there are totally new concepts. One that looks promising is a nanotube-based membrane. This is being developed by researchers at the Lawrence Livermore National Laboratory.[234] A field of nanotubes functions as an array of pores which allows water molecules through, while keeping salt and other unwanted molecules at bay. And, despite their diminutive dimensions, these pores allow water to flow at faster rates for a given pressure compared with reverse osmosis membranes. This will mean that less energy will be required.

While desalination can expand the water resource, it does so by placing greater demands on other resources, particularly energy which it would use in large quantities even with considerable improvements. Our ability to meet our increasing energy and raw material needs is discussed in the next chapter.

Polar Ice

Three quarters of the world's fresh water is polar ice. It starts out either as snow or as seawater which loses its salt when it freezes. The quantities are enormous. There are tens of millions of cubic kilometers of ice. In comparison, the 2,500 cubic kilometers of water we withdraw for irrigation each year is minuscule.[235]

At the moment ice exploitation is just a 'boutique' industry catering to the bottled water and spirits market. Its appeal is that it is extremely pure and does not require the normal extensive range of treatments. A specially equipped ship comes alongside an Arctic iceberg located in a quiet cove and cuts off chunks which are then thawed. However, to ship quantities that make a significant contribution to our irrigation needs would require a massive fleet of supertankers. Just supplying 5 per cent of current levels would require over 700 deliveries per day by 500,000 tonne supertankers.[236] Tankers moving the equivalent in water of our current oil consumption (i.e., 4.5 km3) would move less than 0.2 per cent of our present irrigation withdrawals.
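The tanker figures above follow from simple back-of-the-envelope arithmetic. The sketch below, which is ours rather than the author's, assumes the 2,500 km3 annual irrigation withdrawal and the 500,000 tonne tanker capacity cited in the text:

```python
# Back-of-the-envelope check of the iceberg-by-tanker figures. Assumptions taken from
# the surrounding text: about 2,500 km3 of water is withdrawn for irrigation each year,
# and each supertanker carries 500,000 tonnes of water.

IRRIGATION_KM3_PER_YEAR = 2500
TANKER_CAPACITY_TONNES = 500_000
TONNES_PER_KM3 = 1e9          # 1 km3 of fresh water weighs about a billion tonnes

def deliveries_per_day(share_of_irrigation):
    """Tanker trips per day needed to supply a given share of irrigation water."""
    tonnes_per_year = share_of_irrigation * IRRIGATION_KM3_PER_YEAR * TONNES_PER_KM3
    return tonnes_per_year / TANKER_CAPACITY_TONNES / 365

print(round(deliveries_per_day(0.05)))                 # ~685 trips a day for a 5% share, roughly the 700 cited
print(round(4.5 / IRRIGATION_KM3_PER_YEAR * 100, 2))   # 4.5 km3 a year is ~0.18% of irrigation withdrawals
```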

Towing or nudging icebergs with the help of ocean currents is another option which has been discussed. These would have to come from the Antarctic because those from the Arctic are insufficient to make a big difference. Of course if you simply tried to tow an iceberg to Saudi Arabia or California, it would melt away before it arrived. A number of solutions have been proposed to deal with this. One is to cut a bow into the front end and cover it with Kevlar. This would reduce melting to acceptable levels. The iceberg would then be cut up and melted down, and the water piped to irrigation systems and reservoirs. Another approach which has been trialed is to seal the iceberg in reinforced plastic so that melting is no longer an issue.[237]

Moving icebergs into unfamiliar territory could raise a range of environmental issues that would have to be taken into account. There may be an increased risk of oil spills in polar regions; however, these should be countered by better ship design and more effective clean-up methods. There may be some destinations or routes where icebergs would be unwelcome because of an excessive intrusion on the environment. Moving through shallow seas, an iceberg could cool down the surrounding waters or scrape the bottom, damaging marine life. Ice cold fresh water runoff would also reduce the salinity of the surrounding sea water and could precipitate a sudden change in temperature. The plastic bag solution is less likely to cause these problems because the bagged icebergs would be smaller, the ice might already have melted before reaching problem spots, and no fresh water is released. However, given that such icebergs are typically no bigger than a tanker, one would be looking at a similar number of iceberg deliveries as tanker trips.

What about transporting thawed ice by pipeline? Pipelines would need to stretch for thousands of kilometers to the more arid regions and, in the case of the Antarctic, much of their length would have to be underwater. There would also need to be many of them. The Baku-Tbilisi-Ceyhan pipeline from oil fields in the Caspian Sea to the Mediterranean Sea can carry one million barrels of oil per day. That is a lot of oil. However, that much water is nothing. It is 0.06 km3 per year, a minute fraction of total irrigation withdrawals. Even under pressure, a pipeline is only the equivalent of a small stream. It can never compare to a river.
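The pipeline comparison can be verified the same way; the sketch below assumes a standard barrel of about 0.159 cubic meters:

```python
# Water throughput of a Baku-Tbilisi-Ceyhan-sized pipeline, expressed as a share of
# irrigation withdrawals. A standard barrel is assumed to be about 0.159 m3.

BARRELS_PER_DAY = 1_000_000
M3_PER_BARREL = 0.159
IRRIGATION_KM3_PER_YEAR = 2500

km3_per_year = BARRELS_PER_DAY * M3_PER_BARREL * 365 / 1e9
print(round(km3_per_year, 2))                                   # ~0.06 km3 per year
print(round(km3_per_year / IRRIGATION_KM3_PER_YEAR * 100, 4))   # ~0.002% of irrigation withdrawals
```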

So, in sum, at this stage it is difficult to foresee polar ice being an important contributor to our water supply.

Genetic Base

There is a concern that modern agriculture is narrowing the genetic diversity of crops by replacing a large number of local varieties (landraces) with a small number of widely used modern high yield varieties (HYVs). It is claimed that this "genetic erosion" is eliminating much of the gene pool required for breeding various favorable traits and is making us more vulnerable in the face of new stresses. However, the evidence does not back up these claims.

Despite the popular belief to the contrary, HYVs retain a considerable amount of diversity. There are many varieties in use at any one time adapted to a range of conditions. Furthermore, the level of diversity has been increasing continuously over the past few decades as adaptations to specific stresses have been fine tuned. In the case of wheat there is actually a more diverse range of varieties in the field than at the beginning of the 20th century.[238]

A considerable number of the traditional landraces are still in use, particularly in the areas from which the crops originated. In some regions they are still dominant, for example, rice in Sub-Saharan Africa and maize in West Asia/North Africa, Asia (excluding China), Sub-Saharan Africa, and Latin America.[239] In the case of wheat, landraces are still grown extensively in parts of West Asia, North Africa, and Sub-Saharan Africa (Ethiopia and Sudan).[240] West Asia, where much of agriculture originated, is still home to a vast array of traditional varieties of the lesser crops such as lentils, oats, barley, rye, almonds, apricots, cherries, figs, grapes, olives and plums.[241]

Arguably more important than landraces, particularly with improved breeding techniques, are the original wild varieties of domesticated plants. Because they survive without human care and protection they are generally more resistant to biotic and abiotic stresses. These will not be found in farmers' fields but out in the wild.

If you include in diversity not only what is in the fields of farmers but also what is in the fields and greenhouses of research stations and in gene banks, there has been an improvement over time for both rice and wheat. This adds a number of other dimensions to diversity all of which have been increasing.

They include temporal diversity (average age and rate of replacement of cultivars); polygenic diversity (the pyramiding of multiple genes for resistance to provide longer-lasting protection from pathogens); and pedigree complexity (the number of landraces, pureline selections, and mutants that are ancestors of a released variety).[242]

Something else to consider is effective diversity. Traditionally the diversity available to a farmer was confined to whatever was in the local region and what they could do with that was limited in the absence of modern plant breeding. Now we have breeding institutions that can pull germ plasm from anywhere in the world when breeding a new plant. They have speedier and more effective ways of screening for desirable traits and using them to create new commercial varieties. The greater effective diversity is evidenced by two facts. Firstly, traditional methods took millennia to greatly increase yields, while modern breeding methods have tripled them within a number of decades. Secondly, yields are far more stable from year to year than they used to be because modern varieties are more stress tolerant than their landrace predecessors. Finally with genetic engineering we have a further extension to diversity because it allows scientists to draw on the characteristics of totally unrelated life forms.

Fisheries

While not a major supplier of food energy, fish provide about one-sixth of all animal protein,[243] and in developing countries the harvest nearly equals the combined local production of cattle, sheep, pigs and poultry.[244] About 70 per cent of fish are caught while the rest are cultivated.[245] About 30 per cent of caught fish are used for non-food purposes, mainly animal feed.[246]

The catch has increased fivefold over the last 50 years.[247] However, there does not seem to be much if any room for the fish catch to continue growing. The FAO anticipates a small increase if fisheries are better managed and a decline if they are not.[248] They believe that between 70 and 80 per cent of fish stocks are fully exploited, overexploited, depleted or recovering from depletion.[249]

Remedial measures include reducing the capacity of fishing fleets, setting up marine reserves, removing government subsidies and assigning property rights to individuals or groups of fishermen to provide an incentive for good stock-management practices. Other threats to coastal fisheries that have to be dealt with are pollution and degradation of coral reef and mangrove habitats.[250]

Fish cultivation or aquaculture has prospects for significant expansion. At the moment output is concentrated on crustaceans and mollusks, freshwater carp in China and salmon. Around 80 per cent of mollusks are cultured, around 20 per cent of shrimps/prawns and around 33 per cent of salmon.[251]

Growing fish in captivity instead of catching them is comparable to the move on the land from hunter-gathering to farming. It allows the development of better breeds and the adoption of management practices such as protection from other predators, provision of better feed and optimal timing of slaughter. While the industry has a long way to go to catch up with the changes that we have made on land, it has made some progress.

Tilapia, a freshwater, plant-eating fish popular in America, has been bred to be hardier and grow 60 per cent faster than the wild variety.[252] Genetically modified salmon are being developed which possess a gene that protects them from freezing when raised in icy waters and a gene that expresses a growth hormone so they reach maturity more quickly, while requiring less food.[253] Other areas of improvement being investigated by breeders include disease resistance and increased fertility. Feed suppliers have also had some success in improving feed efficiency. For example, the amount of feed used for growing salmon is 44 per cent of what it was 30 years ago.[254]

As with any other human endeavor, aquaculture can have impacts on the environment that need to be checked. Chemicals, uneaten feed, dead fish and fish feces from inland and shoreline aquaculture have contaminated drinking and irrigation water, seeped into aquifers, and affected coastal fisheries. Where there is limited water exchange, the decomposition of organic waste can contribute to local eutrophication, with all the environmental problems that this can cause.

In intensive shrimp production about a third of the water has to be changed daily, and about half of it is fresh water needed to obtain the optimum salinity level. This call on freshwater can lead to a drop in groundwater levels; and large volume pumping of freshwater and seawater also affects the biodiversity of affected areas.[255] Shrimp production has also caused extensive damage to wetlands and mangroves and the creation of infertile land through salinity.

Much of the remedy lies in improvements to the poor regulatory and institutional arrangements to be found in the developing countries involved. This is similar to logging where the government fails to properly protect land supposedly under its control.

Improved technologies and practices can make a difference. One area of success has been in the development of more digestible feed formulations that leach less waste into the environment. For example, nitrogen waste for a given quantity of salmon is one sixth of what it was thirty years ago.[256] A shrimp farm in the US uses other fish to mop up shrimp waste.[257] The use of antibiotics in Norwegian aquaculture is less than 0.5 per cent of what it was ten years ago. Vaccines have brought about great reductions in the use of antibiotics and other chemicals.

We can expect to see the greatest growth in fisheries out at sea where there is not the same competition for resources found on land or near the sea shore. This will take time to develop if only because of the lack of knowledge and new investment involved. The technology to pen, cage or otherwise control the fish still has to be designed and built; and to domesticate a new species, knowledge is required of such matters as stocking densities, water quality, breeding conditions, animal behavior and precise nutritional requirements. For aquaculture to make a major impact on the food supply, there would have to be large scale investment in facilities such as pens or other means of controlling the fish.

Future cultured fisheries out at sea will face environmental problems similar to some of those above plus a range of new ones. One concern relates to 'genetic pollution' from domestic varieties breeding with wild ones. Like any environmental concern it would need to be assessed on the evidence on a case by case basis. However, given the option of breeding fish that cannot breed in the wild this will never be an overriding problem. Then we have the effect on wild species and ecosystems generally of building large pens and cages and concentrating large numbers of domesticated fish in a relatively small area. These are similar to the issues that we have faced and continue to face in land based food production.

Non-Renewable Resources

One of the reasons modern agriculture is often slammed for being unsustainable is its use of non-renewable resources, particularly inorganic fertilizer and fossil fuels. Inorganic fertilizer refers to the three macronutrients when obtained from outside agriculture, in other words, not from the recycling of organic matter. Nitrogen is the most important, followed by phosphorus and potassium. Fossil fuels are important mainly in the production of nitrogen and as fuel for farm machinery.

Nitrogen Fertilizer

Inorganic nitrogen fertilizer is produced primarily from synthetic ammonia which is obtained by combining nitrogen and hydrogen. The ammonia is then used to produce various synthetic nitrogen fertilizers including the most common one, urea. Nitrogen makes up almost 80 per cent of the atmosphere (and much of the nitrogen not in the atmosphere eventually returns to it) while hydrogen is the most abundant element in the universe.

Natural gas is presently the most commonly used fossil resource input, both as the source of hydrogen and for the energy in the production process. According to estimates from the late 1990s, if natural gas had provided all the feedstock for hydrogen and all the fuel, its total consumption would have been just under 7 per cent of the world's natural gas extraction.[258]

Nitrogen fertilizer production is certainly very energy intensive, however, as discussed in the next chapter, a diverse range of options will allow us to meet our energy needs. The next chapter also examines the prospects for using water as a hydrogen feedstock instead of hydrocarbons. (Hydrogen is the H in H2O.)

Phosphate

Phosphate fertilizer is made from phosphate rock treated with sulfuric acid. Commercially viable reserves of phosphate rock are estimated to be 18 billion tonnes.[259] This would last 60 years if we consumed at twice our present annual level of 148 million tonnes. The reserve base is estimated to be 50 billion tonnes.[260] This also includes explored resources which are presently non-economic or would require at least some use of unproven technology. Assuming the same rate of consumption these would last 170 years.
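The reserve lifetimes quoted here follow from simple static arithmetic: divide the stock by an assumed constant rate of consumption. A minimal sketch, using the phosphate figures just given:

```python
# Static lifetime of a reserve: stock divided by a constant annual rate of use.
# Figures from the text: 18 and 50 billion tonnes of phosphate rock, consumed at
# double the present rate of 148 million tonnes a year.

def static_lifetime_years(reserve_tonnes, annual_use_tonnes):
    return reserve_tonnes / annual_use_tonnes

use = 2 * 148e6
print(round(static_lifetime_years(18e9, use)))    # ~61 years for commercial reserves
print(round(static_lifetime_years(50e9, use)))    # ~169 years for the reserve base
```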

With further exploration we should expect discoveries of extensive new deposits in the future.[261] Furthermore, large phosphate resources have been identified on the continental shelves and on seamounts in the Atlantic Ocean and the Pacific Ocean.[262] These cannot be recovered economically at the moment but this could change with new technologies.

The sulfur in evaporite and volcanic deposits, and that associated with natural gas, petroleum, tar sands, and metal sulfides amount to about 5 billion tons.[263] At double current usage rates of 59 million tons a year, these would last 65 years. The sulfur in gypsum and anhydrite is almost limitless, and some 600 billion tons is contained in coal, oil shale, and shale rich in organic matter.[264]

Potassium

Commercially viable reserves of potash or potassium oxide are estimated to be 8.3 billion tonnes and the total reserve base 17 billion tonnes.[265] At double current usage rates of 31 million tons a year, these would last 170 and 350 years respectively. The estimate for the total known resource is 250 billion tonnes.[266]

Fuel for Farm Machinery

Farm machinery takes a very small share of fuel consumption. US agricultural field machinery consumes annually no more than 1 percent of the country's liquid fuels.[267] In terms of resource use it is a vast improvement on draft animals. Using grass as a fuel is extremely land intensive. In the US, the shift from draft animals to internal combustion engines released 30 million acres of prime arable land for crops.[268] To match the 1995 mechanical power of American tractors with horses would require at least 250 million of these animals and 300 million hectares, or twice the total of US arable land, to feed them.[269]

"Alternative" Agriculture is No Such Thing

While we can be optimistic about everybody being fed as a result of advances in the agricultural sciences and the modernization of Third World agriculture, we cannot be so optimistic about the 'alternative' agriculture espoused by the greens. With their alternative we would not be able to feed ourselves and we would trash all remaining natural habitats in the futile attempt. This alternative would have us do without 'unnatural' things such as inorganic fertilizer, chemical pesticides and genetic engineering.

Instead of getting nitrogen from the air as we mainly do now, we would have to confine ourselves to getting it in 'natural' ways such as from animal manure, human sewage and 'green manure' legumes. At the moment inorganic nitrogen provides the bulk of our needs so there would be a big shortfall to fill.

Organic enthusiasts reassure us that there is lots of potential organic fertilizer that we could be using. According to them there is lots of animal manure, crop residue, urban sewage and compostable landfill going to waste. However, experts at the USDA have calculated that the available animal manure and sustainable biomass resources in the US would provide only about one-third of the plant nutrients needed to support current food production.[270] What about using urban sewage sludge more broadly on crops? In the US, all of the urban sewage equals only 2 percent of the nitrogen currently being applied in commercial fertilizers and a significant proportion is already being used for agricultural fertilizer.[271] What about compostable materials from current urban landfill waste? Any urban waste would only be a small addition to the manure and other farm waste already being used by farmers in the US and elsewhere.[272]

The only 'unlimited' source of nitrogen for organic farmers is 'green manure' legumes grown in a crop rotation to provide nitrogen for subsequent crops. However, land put aside for this purpose is not available for crop production. So the land taken up both directly and indirectly for a given quantity of crop output is increased. In any year a significant proportion of the land is not taken up with growing final crops but rather with growing manure! So even if the yields in the fields growing the final crop were the same as for modern sensible agriculture, the average for cropland as a whole is going to be far less.

There is a similar story with chemical pesticides. If you let insects eat part of your crop rather than use pesticide, you need more land for a given crop. Pesticide is not always the only remedy for pests, and in some cases, if wrongly used or overused, it can make the pest problem worse. However, this does not negate the fact that in most cases there is no substitute for chemical pest control. Furthermore, other measures tend to be adjuncts to pesticide use rather than substitutes. A study by Texas A&M University indicates that U.S. field crop yields would decline drastically if farmers in that country substituted the currently available organic pest controls for synthetic pesticides. Soybean yields would drop by 37 percent, wheat by 38 percent, cotton by 62 percent, rice by 63 percent, peanuts by 78 percent, and field corn by 53 percent.[273]

Other productivity-reducing and resource-wasting practices of organic farmers include forgoing the use of antibiotics and growth hormones for livestock, and forgoing genetic engineering.

The exponents of 'alternative' agriculture also tend to have a strong low-tech streak. Machines are seen as unnatural and dehumanizing, and their use as destroying the planet. In a similar vein, the small farmer is the hero and agribusiness the demon. Once again this is at odds with efforts to economize on the use of land and water. Two examples should make the point, namely, the present move to precision farming and the prospect some time in the future of factory farming.

The first of these technologies will allow farmers to micro-manage each separate patch of ground. Its particular stresses can be detected and specific solutions applied. Photography from satellites or aircraft can tell us a considerable amount about how the crop is performing in each field, particularly in the infrared and near-infrared range. Farm vehicles can assess soil conditions with corers and electromagnetic induction (EMI) equipment while recording their position with the use of GPS. This information can then be fed into a computer with geographical information system (GIS) software which can present the data as maps, tables, graphs, charts or reports. A tractor can be directed by a computer to dispense variable amounts of pesticide, fertilizer and water on the basis of location information provided by the GPS and field condition data provided by the GIS. The process can also be put in reverse. Different inputs, plant varieties and cultivation methods can be tried in different fields and their performance easily compared. So far this technology has only been adopted on a small scale. However, it will no doubt become more widespread once the technology matures, costs come down and farmers get used to the idea.
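To make the workflow concrete, the sketch below shows how per-cell field data tagged with GPS coordinates might be turned into a variable-rate fertilizer prescription. It is purely illustrative: the field names, thresholds and rates are invented for the example and are not drawn from any particular GIS package.

```python
# Illustrative variable-rate prescription: each grid cell of a field carries its own
# sensor readings (here just soil nitrogen and moisture), and the tractor is told how
# much fertilizer to apply at that cell. All numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Cell:
    lat: float            # GPS position of the cell
    lon: float
    soil_n_ppm: float     # measured soil nitrogen
    moisture: float       # volumetric soil moisture (0-1)

def fertilizer_rate_kg_ha(cell: Cell, target_n_ppm: float = 25.0) -> float:
    """Apply more fertilizer where nitrogen is short; none where the soil is waterlogged."""
    if cell.moisture > 0.45:          # too wet to benefit from applying now
        return 0.0
    shortfall = max(0.0, target_n_ppm - cell.soil_n_ppm)
    return round(shortfall * 4.0, 1)  # hypothetical conversion from shortfall to kg/ha

field = [Cell(-36.801, 144.302, 12.0, 0.22),
         Cell(-36.801, 144.303, 24.0, 0.30),
         Cell(-36.802, 144.302, 18.0, 0.50)]

for cell in field:
    print((cell.lat, cell.lon), fertilizer_rate_kg_ha(cell))
```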

In the longer term crop growing may actually become factory production carried out in multi-level buildings. This would allow for massive increases in output per hectare. A 20 story farm factory on one hectare of land would not just grow the equivalent of 20 hectares. Output would be even higher than that because crops would be grown under optimal conditions in terms of growing medium, lighting, climate and water supply.

A population of 10 billion people with grain output of 550 kilograms per capita per year (double the present average) and achieving a yield potential of 10 tonnes per hectare requires an area of 550 million hectares. Assuming 20 story facilities that is a land area of 27.5 million hectares or 275,000 km2. That is slightly larger than New Zealand or Colorado, and slightly smaller than Italy.

Per person the land area is 27.5 m2, the size of a living room. The building floor space per person of 550 m2 (23.5 m x 23.5 m) is half the area of a quarter acre suburban block. If each floor only needs to be about a meter or two high, you are looking at a volume comparable to that of a typical bungalow. The construction investment required to accommodate our food production would then be no greater than that required to accommodate ourselves. So, it is unlikely to be a daunting task for the economy of the 22nd century.
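A minimal check of these figures, using only the population, per capita grain output, yield and building height assumed in the preceding two paragraphs:

```python
# Land and floor space needed for factory-grown grain, using the text's assumptions:
# 10 billion people, 550 kg of grain per person per year, 10 tonne/ha yields, 20-story buildings.

population = 10e9
grain_kg_per_person = 550
yield_t_per_ha = 10
stories = 20

grain_tonnes = population * grain_kg_per_person / 1000   # 5.5 billion tonnes of grain a year
floor_area_ha = grain_tonnes / yield_t_per_ha             # 550 million hectares of growing floors
footprint_km2 = floor_area_ha / stories / 100              # land footprint in km2

print(round(footprint_km2))                                # 275,000 km2 of land
print(round(footprint_km2 * 1e6 / population, 1))          # ~27.5 m2 of land per person
print(round(floor_area_ha * 1e4 / population))             # ~550 m2 of floor space per person
```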

There would also be greater water efficiency. The water would be delivered precisely as required, and none of it wasted on underperforming plants. The energy consumption of such food production methods would probably be greater than present methods. Pumping water to each floor, lighting, heating, cooling and building construction would require a lot of energy. However, at the same time, activities such as plowing, planting and harvesting would either no longer be necessary or be done with greater energy efficiency.


3

PLENTY OF RESOURCES

Aiming for Global Affluence

Being able to eat all that you want is, of course, only part of achieving the basic level of prosperity which is enjoyed by most of the one billion people living in developed countries. They also generally have well built and comfortable accommodation, and ready access to infrastructure such as sewerage, electricity, communications, transport and hospitals, to domestic labor saving appliances and to an abundance of cheap food and clothing. They can also afford a regular night out and an occasional holiday. These relatively fortunate people can be found in western Europe, the US, Canada, Japan and various outposts such as Australia, New Zealand, Hong Kong, Singapore and Taiwan. At the moment, Portugal could be considered the cut-off point with an annual GDP per capita of $18,000.[274] In the following discussion this will be taken as the minimum target that other countries need to achieve.

Once you move outside the top group, the level of economic development and living standards drops away quite quickly. The average income[275] for the middle group of countries between Portugal and China is only half the rich group minimum. It is home to 1.16 billion people of whom less than 220 million live in countries with average incomes over $10,000. Also, we find here a lot of very poor people because of wide income disparities. Brazil, Peru, Mexico and Colombia provide good examples of this. The same can be said for China where the bottom 20 per cent of its 1.3 billion people are much poorer than its average income of $5,600 suggests.[276]

Between China and India in terms of average income is a group comprising mainly the Philippines, Egypt and Indonesia, with a population of 550 million. Their average income is just over one fifth that of the rich country minimum. Then we have the bottom group, topped by India. It includes Sub-Saharan Africa, Pakistan, Bangladesh and Burma. Here we find 2.45 billion people or 38 per cent of the world's total. India, with a population of 1.1 billion, has an average income of $3,072, just over one sixth of the rich group minimum, while 1 billion live in countries with an average income of less than $2,000. Table 3.1 shows a list of selected countries outside the rich list in descending order of GDP per capita. It shows how many times this figure has to increase to reach our minimum target of $18,000.

Table 3.1: Factor required to increase GDP per capita to $18,000, selected countries (2004 data, purchasing power parity)

Country            Required factor (a)      Country            Required factor
Czech Republic     1.07                     Philippines         3.67
Hungary            1.21                     Egypt               4.41
Argentina          1.47                     Indonesia           5.26
Poland             1.50                     India               5.86
Saudi Arabia       1.53                     Vietnam             6.62
Mexico             1.83                     Pakistan            8.42
Russia             1.90                     Bangladesh          9.42
Thailand           2.20                     Sudan               9.49
Brazil             2.25                     Burma              11.39
Iran               2.37                     Uganda             12.46
Turkey             2.46                     Nepal              12.60
Colombia           2.75                     Kenya              17.56
Algeria            2.76                     Nigeria            18.44
Ukraine            2.83                     Ethiopia           23.96
Venezuela          3.15                     Congo              25.59
Peru               3.24                     Tanzania           27.91
China              3.24                     Somalia            33.64

Source: CIA Fact Book on line, accessed January 2006

a. For example, the Czech Republic's GDP per capita needs to be 1.07 times its present level.

 

If, as this chapter will argue, resources place no limit on affluence, the poorer countries can be expected to make up a lot of ground in the course of this century. There is no crystal ball. However, given recent performance and the kinds of growth rates that are required by different countries to reach $18,000 per head, it is possible to make some broad brush predictions and be fairly sanguine about the possibility of a large proportion of the developing world achieving a level of affluence this century and the worst off ones early in the 22nd century. A lot will be achieved even if growth is less than stunning and there are periods of war, revolution or economic stagnation.

We will get off to a fairly good start if World Bank predictions for the next 10 years prove correct. The bank expects GDP per capita growth for developing countries to average 3.5 per cent per annum over that period.[277] This would be similar to the performance of the last five years [278] and provide a 41 per cent increase by 2015.

Countries that need only a doubling in per capita income to reach our minimum target would get there by mid-century with an average annual growth rate of 1.5 per cent, while those needing a tripling in the same time would require 2.3 per cent. These growth rates are not hard to imagine and, if achieved, would bring into the affluent camp Latin America, Eastern Europe, the former Soviet Union and about half the population of the Middle East and North Africa. The expected annual growth rates in per capita income for these regions over the next decade should put them on track if achieved: for the former Soviet Union and eastern Europe it is 3.5 per cent, for the Middle East and North Africa 2.6 per cent and for Latin America 2.3 per cent.[279]

China needs a three and a quarter fold increase. If as expected it continues at the 6 per cent per annum which it has been averaging over the last 25 years, it will be halfway to $18,000 by 2015.[280] A further 10 years at the same rate would take it to target (2025). Alternatively, a further 20 years at 3 per cent would achieve the same result (2035). If things do not go so well, it may be mid century.

India's GDP per capita has been growing at around 4 per cent over the last decade.[281] If that rate continues for another ten years, GDP per capita would increase by 50 per cent and the country would then be a quarter of the way to the target. If it were to continue at that rate of growth, it would reach it by 2050. Otherwise, any average growth rate greater than 1.65 per cent will achieve the desired result before 2100.

Countries such as Bangladesh and Pakistan that have a GDP per capita of around $2,000 need to achieve good growth performances to reach $18,000 by 2100. However, they do not need to come near the record pace of Japan, South Korea and Taiwan that grew about 18 fold during the last century (or 3 per cent per year). Almost matching Spain, Finland or Italy would be sufficient. They grew 10 to 12 fold.
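The growth arithmetic behind these timelines is just compound interest: the number of years needed is the logarithm of the required factor divided by the logarithm of one plus the growth rate. A minimal sketch, using the required factors from Table 3.1 and the growth rates discussed above:

```python
# Years needed for GDP per capita to grow by a given factor at a constant annual rate.
import math

def years_to_target(required_factor, annual_growth):
    return math.log(required_factor) / math.log(1 + annual_growth)

print(round(years_to_target(2.0, 0.015)))   # doubling at 1.5% a year: ~47 years (mid-century)
print(round(years_to_target(3.0, 0.023)))   # tripling at 2.3% a year: ~48 years
print(round(years_to_target(3.24, 0.06)))   # China's 3.24-fold increase at 6% a year: ~20 years
print(round(years_to_target(5.86, 0.04)))   # India's 5.86-fold increase at 4% a year: ~45 years
```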

However, most of Sub-Saharan Africa would require a record performance to meet the same deadline. As discussed in the final chapter, present political conditions are totally unconducive to economic development, so expecting such a level of success does seem, at least from the present vantage point, excessively sanguine. However, the region would have to be fairly unlucky not to have made some significant inroads into the political obstacles by mid-century. Furthermore, increasing per capita income levels will be helped by the slowdown in the growth, and then the stabilization, of population during the course of the century.

What about the poor countries eventually catching up with the rich ones? This will require outstripping their growth rates for an extended period. There are a number of reasons why this is likely to occur:

·        at an early stage of industrialization where current production methods are relatively backward, moderately small investments in improvements can make a proportionately large difference;

·        at this stage, a lot of people are just learning to do their job and will make considerable improvements in their efficiency over the short to medium term;

·        being followers rather than leaders, poor countries can adopt technologies that have already proven successful. They do not have to worry about the technologies that did not make the grade or go through the initial teething problems. The adoption of US technology by Japan and western Europe after World War II is a prime example of this; and

·        there is the opportunity for technology leapfrogging where the newcomer goes straight to a cheaper technology. For example, mobile phones in India and Sub-Saharan Africa can provide people with telecommunications with far less investment than land lines.

 

********************

The following examination of the viability of widespread affluence in the 21st century and beyond looks at the extent of energy and raw material resources, and at our ability to limit the impact of industrialization on "life support" resources such as air, water, weather and natural habitat. Energy receives the most attention because of the range of technologies and resources involved while raw materials receive the least because it is mainly a matter of detailing their vast abundance and the considerable scope for substitution between them.

Our Energy Needs

In 2004 we produced 11,223 million tonnes oil equivalent (mtoe) or 470 exajoules (EJ) of commercial primary energy.[282] Just on 45 per cent was consumed by the 15 per cent (925 million) living in the rich countries, giving them an average per capita level five times that of the remaining 5.4 billion on the planet.[283]

We will need to increase this output considerably over the course of the century as poor countries develop and narrow the gap with the rich ones. According to the mid-range projection by the US Energy Information Administration (EIA), world energy consumption will grow at an average rate of 2 per cent over the period 2003 to 2030.[284] This is slightly lower than the average annual increase of 2.2 per cent from 1970 to 2002.[285] If this rate were to be maintained throughout the century, annual energy consumption by 2100 would increase more than 6.5 fold to 3146 EJ (77,115 mtoe) per annum. Depending on whether the population is closer to 9 or 10 billion, this would provide a global per capita energy consumption level a bit below or a bit above the present US level.[286] Around 140,000 EJ would be consumed during the course of the century. Mid-century annual consumption would be around 1,170 EJ while 38,000 EJ would be consumed between now and then.
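These projections follow from compounding the 2004 figure at 2 per cent a year and summing the intermediate years. A sketch, assuming 2004 as the base year:

```python
# Project primary energy demand from the 2004 base of 470 EJ at 2% annual growth,
# and sum consumption over the century. Base year assumed to be 2004.

BASE_EJ, BASE_YEAR, GROWTH = 470, 2004, 0.02

def demand_in(year):
    return BASE_EJ * (1 + GROWTH) ** (year - BASE_YEAR)

print(round(demand_in(2100)))                                    # ~3,150 EJ a year by 2100
print(round(demand_in(2050)))                                    # ~1,170 EJ a year by 2050
print(round(sum(demand_in(y) for y in range(BASE_YEAR, 2101))))  # ~137,000 EJ over the century, near the text's 140,000
print(round(sum(demand_in(y) for y in range(BASE_YEAR, 2051))))  # ~36,000 EJ to mid-century, near the text's 38,000
```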

Even lower growth rates would achieve significant results by the end of the century. A rate of 1.7 per cent would increase energy output 5 fold and give a world of 10 billion people the current rich country per capita average of 5.5 toe. A rate of 1.5 per cent would give a 4 fold increase and a per capita average of 4.5 toe.

While the rich countries will continue to increase their energy consumption it will be at a significantly slower rate than the poor ones. This is because of a static population for the group as a whole, slower economic growth rates at the technology frontier and being at a higher and less energy intensive level of development. In line with the recent past, the EIA projects a 1 per cent annual growth rate compared with 3 per cent for the poor countries over the next quarter century.[287]

If rich countries were to continue increasing energy consumption by 1 per cent per year and their population remained static, while overall energy grew annually by 2 per cent and the population of the poor countries increased by 60 per cent, by the end of the century, rich country per capita consumption would increase from 5.5 to 14 toe and poor country per capita consumption from 1.1 to 6.8 toe. This would bring the poor countries as a whole almost up to present US per capita consumption levels and shrink the disparity between rich and poor countries from five to one to two to one.
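The rich and poor country figures in this paragraph work out as follows; the sketch simply applies the shares, growth rates and population assumptions just stated.

```python
# Per capita energy use in 2100 for rich and poor countries, using the text's assumptions:
# rich country demand grows 1% a year from 45% of the 2004 total of 11,223 mtoe, total
# demand grows 2% a year, the rich population stays at 0.925 billion and the poor
# population grows 60% from 5.4 billion.

YEARS = 96                                  # 2004 to 2100
TOTAL_2004_MTOE = 11_223

rich_2100 = 0.45 * TOTAL_2004_MTOE * 1.01 ** YEARS
total_2100 = TOTAL_2004_MTOE * 1.02 ** YEARS
poor_2100 = total_2100 - rich_2100

print(round(rich_2100 * 1e6 / 0.925e9, 1))         # ~14 toe per head in rich countries
print(round(poor_2100 * 1e6 / (5.4e9 * 1.6), 1))   # ~7 toe per head in poor countries
```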

The task now is to assess our ability to meet these energy consumption levels. We need to know how long we can continue to rely heavily on fossil fuels and to what extent global warming places a serious limit on their use. Then we need to know whether other resources are extensive enough to eventually fill the breach and whether we will have the technology to exploit them. In the case of nuclear power some time also needs to be spent allaying concerns about radiation hazards which are proving to be an obstacle to a rational consideration of this technology.

Fossil Fuels

Around 80 per cent of the energy that we use at the moment comes from fossil fuels - oil, coal and gas.[288] Below we examine each of these fuels in turn and look at how long they can be expected to last given different assumptions about their rate of use. We then conclude with an overall assessment of the fossil resource.

Oil

Oil meets around 35 per cent of our primary energy needs.[289] It is critical to the transport sector where it provides around 95 per cent of what is required. The most recent attempt to quantify the resource base for conventional oil was the World Petroleum Assessment 2000 undertaken by the United States Geological Survey (USGS).[290] They provided a figure of 959 billion barrels for proven reserves. This is the amount of oil that could be produced profitably at current prices if there were no further discoveries or advances in extraction technology.

To this figure they add a range of values for additional resources which are classified into expected reserve growth and undiscovered resources. Reserve growth (also called field growth) is the expected growth in reserves over the next quarter century through better definition over time of what is in known fields and the development of better recovery methods, while the figure for undiscovered resources is an estimate, based on geological knowledge, of further oil that will be found by 2030.

Estimates for these additional resources range from 776 billion to almost 2.8 trillion barrels. The USGS estimates that there is a 95 per cent chance that the real quantity is at least the low value, a 5 per cent chance that it is at least the high value and a 50 per cent chance it is at least 1.6 trillion barrels. These are based on subjective probabilities assigned by people with expert knowledge of the different oil deposits.

So in sum, they are saying that we can be very sure of a total resource, including reserves, of 1.74 trillion barrels (0.959 plus 0.776) but that there is a reasonable chance of it being considerably more. The USGS usually cites a mean value for the additional resources of 1.7 trillion. This gives a total resource of almost 2.7 trillion barrels (0.959 plus 1.7) or 16,522 EJ. That would last 90 years with static consumption levels[291] and until 2056 if consumption were to grow annually by 2 per cent.
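The "lasts until" dates in this and the following fossil fuel sections come from accumulating consumption that grows at a fixed rate until the resource is exhausted. A sketch of that arithmetic, assuming roughly 2.7 trillion barrels of conventional oil and current consumption of around 30 billion barrels a year (the same function reproduces the coal and gas dates given later):

```python
# Year in which a resource is exhausted if consumption grows at a constant annual rate.
# Assumptions: ~2.7 trillion barrels of conventional oil, ~30 billion barrels consumed
# in the 2004 base year.

def exhaustion_year(resource, annual_use, growth, start_year=2004):
    year, remaining = start_year, resource
    while remaining > 0:
        remaining -= annual_use
        annual_use *= 1 + growth
        year += 1
    return year

print(exhaustion_year(2.7e12, 30e9, 0.00))   # ~2094: static consumption, about 90 years
print(exhaustion_year(2.7e12, 30e9, 0.02))   # ~2057: 2% annual growth, close to the text's 2056
```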

A more pessimistic school of thought claims that the ultimate remaining resource is only around one trillion barrels.[292] They argue that the reserve figures of OPEC countries have been exaggerated for political reasons, and they downplay the scope for new technology to squeeze a bit of extra oil from an increasingly depleted resource. Consequently, they see depletion and increasing costs of extraction occurring in the next decade or so. At the same time they see problems with alternative resources, including unconventional oil, filling the breach because of high costs and environmental damage.

Then we have the optimists who consider the USGS estimates too conservative.[293] They argue that technological advances will increase maximum recovery rates from their current levels of around 50 per cent, and increase the ability to exploit resources in difficult geological formations, to drill further into the earth and in deeper oceans, and to detect new deposits. For them, the introduction of the more challenging non-conventional resources (see below) and alternatives to oil can be a more leisurely affair allowing plenty of time for ironing out problems and reducing their costs.

The existence of the resource is not the only consideration. There is also the matter of ensuring that it is made available by investing in sufficient extraction and processing capacity. The generally accepted view at the moment is that the price levels we have been experiencing in recent years, and those expected in the future, should be sufficient to induce a considerable increase in investment, including by OPEC countries. A 2 per cent annual increase would mean that in 2025 we would need to be producing 50 per cent more than we were in 2005 and in 2050, 2.4 times as much.

So far we have been discussing what is generally referred to as 'conventional' oil. This is more or less oil that flows from oil wells.[294] 'Non-conventional' oil on the other hand involves more costly extraction techniques and comprises bitumen from oil sands, kerogen from oil shale and extra heavy oil. Despite the higher costs, some of these resources are commercial at current and expected oil prices and more will become so as the required technologies mature.

Oil Sands Oil sands, or tar sands, are grains of sand or porous rock mixed with bitumen. This is a thick, sticky form of oil which at room temperature is much like cold molasses. As a result it does not flow from the ground like conventional oil. Other means are required to extract it and then it has to be further processed to create a synthetic crude oil. This process includes the addition of hydrogen, something in which bitumen is deficient.

The vast bulk of the resource is located in Alberta, Canada where production has grown significantly over recent years as a result of higher oil prices and declining costs. Output is now around one million barrels a day and is expected to increase significantly in the next decade.

Most of present exploitation is confined to oil sand close to the surface which is extracted using open cut methods. Giant shovels load up equally huge trucks which cart the oil sands ore to a crusher. Here the sands are pulverized, and the bitumen separated out by various processes employing water, steam and solvents.

However, most of the reserves and resource as a whole are too deep in the ground for surface mining, so there will have to be an increasing reliance on in situ methods. These reduce the viscosity of the bitumen while it is still in the ground so that it can flow sufficiently to be pumped to the surface. Currently about a third of extraction is done this way[295] and it is bound to have an increasing role if the resource is to be extensively exploited.

A range of in situ methods are being developed.[296] The most commonly used method at the moment relies on steam injection to soften up the bitumen. The injection of solvent is also used and a hybrid combination of solvent and steam is being trialed. Two other methods under development are fireflooding and electrovolatization. Fireflooding heats the bitumen by burning some of it. Injected air feeds a fire front that softens the bitumen up ahead. Electrovolatization heats the oil with an electric current. Also being mooted is the use of microbes that would reduce the viscosity of the bitumen.

According to estimates published by the Alberta Energy and Utilities Board, the initial volume-in-place, based on currently available data, is 1.6 trillion barrels while the ultimate volume in place, a value representing the volume expected to be found by the time all exploratory and development activity has ceased, is 2.5 trillion barrels (15,300 EJ).[297] About 300 billion barrels have been identified as recoverable reserves with current technologies and processes.[298]

The USGS provides an estimated technically recoverable resource of 531 billion barrels for Alberta and 120 billion barrels for the rest of the world, giving a total of 651 billion.[299] This broader concept presumably takes into account the less well explored resources and higher oil prices.

Before these figures are compared with those for conventional oil, one needs to take into account the fact that both the extraction of the bitumen and its conversion to synthetic crude are very energy intensive. For every barrel of crude oil produced somewhere near the energy equivalent of one third of a barrel is consumed.[300] So, the reserve of 300 billion barrels needs to be adjusted down to 200 billion barrels (1,224 EJ).

Shale Oil Oil shale is a sedimentary rock containing kerogen, a waxy organic substance that originated from the remains of algae and other living matter and which can be converted to oil through a process called retorting, which involves heating the shale in the absence of air to temperatures of 500 degrees C or more. The shale can be mined much like coal and then processed at the surface, or retorting can be performed in situ much like the oil sands process. As with bitumen from oil sands, hydrogen has to be added to create an acceptable crude oil.

In many regions there has been no real effort to delineate the resource because of a lack of commercial interest. In the US there was a lot of interest during the 1970s when oil prices were expected to remain high, so there is some knowledge of that country's resource. Its resource base is estimated to be about 2 trillion barrels.[301] Globally, the resource base is conservatively estimated to be 2.6 trillion barrels.[302]

However, potential world oil shale reserves are put at a mere 160 billion barrels.[303] With such a large resource it is hard to imagine reserves remaining at such a low level. If shale oil is anything like conventional oil, new deposits will be progressively added as the industry develops. Just assuming the same ratio of reserves to the ultimate resource in place as Alberta's tar sands gives a reserve of 312 billion barrels. However, whatever figure we use, there has to be a similar adjustment for energy consumption in extraction and processing to the one we made for oil sands.[304] For the smaller figure we are looking at 107 billion barrels (655 EJ), for the larger 208 billion barrels (1,273 EJ).

Heavy Oil Heavy oil is oil with a high density and viscosity which often requires the injection of super-heated steam into the reservoir to reduce viscosity and increase reservoir pressure. As with bitumen it needs to be upgraded to achieve a standard crude oil. Most of the oil is in Venezuela where the resource is currently assessed at over 1.2 trillion barrels (7343.5 EJ) and reserves at about 270 billion barrels.[305] Assuming one third energy loss in production, that is equivalent to 180 billion barrels (1101 EJ) of conventional oil.

So, for non-conventional oil as a whole, there are between 3,000 and 3,600 EJ of energy which could be recoverable in the near future. This would increase recoverable oil resources by 20 per cent to around 20,000 EJ (3.27 trillion barrels) if we accept the USGS estimate for conventional oil. With a 2 per cent consumption growth rate, that would push oil availability out by about 7 years to 2063.
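The 3,000 to 3,600 EJ range is just the sum of the adjusted figures above. A quick check, assuming roughly 6.1 GJ of energy per barrel of crude:

```python
# Sum the energy content of recoverable non-conventional oil after the one-third
# deduction for energy used in extraction and upgrading. ~6.1 GJ per barrel assumed.

GJ_PER_BARREL = 6.1

oil_sands_ej  = 200e9 * GJ_PER_BARREL / 1e9    # ~1,220 EJ (300 bn barrels less a third)
shale_low_ej  = 107e9 * GJ_PER_BARREL / 1e9    # ~650 EJ (low shale estimate)
shale_high_ej = 208e9 * GJ_PER_BARREL / 1e9    # ~1,270 EJ (high shale estimate)
heavy_oil_ej  = 180e9 * GJ_PER_BARREL / 1e9    # ~1,100 EJ (heavy oil less a third)

print(round(oil_sands_ej + shale_low_ej + heavy_oil_ej))    # ~2,970 EJ at the low end
print(round(oil_sands_ej + shale_high_ej + heavy_oil_ej))   # ~3,590 EJ at the high end
```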

Converting gas and coal to liquid fuel Another option is to produce synthetic oil from gas and coal. In the past such fuel was only produced when access to far cheaper crude oil was denied, Nazi Germany and Apartheid South Africa being the cases in point. They both produced it using their ample supplies of coal. Now with higher oil prices and improved technology there is renewed interest. Plans are presently afoot to establish a gas conversion plant in Qatar, whose offshore gas field is home to one tenth of the world's proven gas reserves; China expects to have a coal liquefaction plant operating in Inner Mongolia by 2007; and the South African company Sasol has been having exploratory talks in the US[306] and India.[307] To meet liquid fuel requirements to 2100 with our 2 per cent growth assumption would require over 30,000 EJ (4.9 trillion barrels) from coal or gas. However, as with the other non-conventional forms of oil there are significant energy losses in production which have to be taken into account.

Coal

Coal currently supplies a quarter of primary energy with most of it being used for electricity production where its contribution is 40 per cent.[308] Proven recoverable reserves are about one trillion tonnes[309] which has an energy value of around 21,000 EJ.[310] These are fairly accurately measured resources that would be economical at present prices and accessible with current technologies. At current coal usage rates of around 5 billion tonnes a year, these would last for 200 years.[311] Assuming coal keeps its share and energy consumption increases by 2 per cent per year, present reserves would last 80 years. If tomorrow coal were to take on a bigger role and say grow at 3 per cent per year they would last 65 years.

The potential resource is much larger than these reserves. To begin with, there are the coal deposits that have not been explored or assessed because there is no demand for them, or that are too costly to mine given current market prices. Then there are those that could be exploited with improvements in mining technology. Some deposits are presently too difficult to get at, for example, because the seam is too thin. New tunneling methods or in situ gasification may remove these kinds of obstacles. Recovery rates in mines could also be increased. At the moment, many mines are operated using the traditional room-and-pillar mining method which leaves about half the coal in place. The longwall method, which recovers about 90 per cent of the coal, could be made more widely suitable or totally new methods devised. Furthermore, in less developed economies higher recovery rates would be achieved by the adoption of more sophisticated and capital intensive methods. Wider application of surface mining would be an example of this. The total resource has been estimated to be 6.11 trillion tonnes (179,000 EJ).[312] Assuming 2 per cent annual energy growth and coal retaining its present share by growing at the same rate, the resource would last almost 160 years. At 3 per cent growth it would last 120 years.

Natural Gas

Natural gas is predominantly methane and supplies 21 per cent of our primary energy.[313] Proved conventional reserves are currently around 180 trillion cubic meters (tcm). At a conversion rate of 37 EJ per tcm that has an energy value of around 6500 EJ and is quite close to the energy value of current oil reserves.[314] These would last until 2070 at the 2003 usage rate of 2,618.5 billion cubic meters (bcm). If usage grew at the same rate that we assume for energy as a whole, i.e. 2 per cent, they would last until 2045. If gas increases its share of fossil energy as expected, say averaging an annual growth rate of 2.5 per cent, it would last until 2042.

There are two main estimates of the total resource. The USGS, using the same system of classification as for oil, has a mid range estimate of 386.5 tcm (14300.5 EJ) while the figure from Cedigaz is 490 tcm (18130 EJ).[315] The difference is due to the adoption by USGS of a 30-year forecast period instead of the unlimited forecast span used by Cedigaz. Assuming the more conservative USGS figure, the resource would last until 2150 at current production levels, until 2071 with 2 per cent growth and 2064 with a 2.5 per cent growth.

Non-Conventional Gas Resources

Coal-bed methane Coal-bed methane is methane within coal which is either created chemically as heat and pressure are applied to coal in a sedimentary basin or through bacteria that obtain nutrition from coal and produce methane as a by-product. Because of its large internal surface area, coal can hold six or seven times as much gas as a conventional natural gas reservoir of equal rock volume.[316]

Coal-bed methane production grew dramatically in the US during the 1990s. In 2000 it was reported to be 7.5 per cent of US gas production, although somewhat less of consumption given the significant gas imports from Canada.[317] Recently production has also started to take off in a number of other countries.

To extract the methane, some of the water which permeates the coal bed must be pumped out to release the pressure which is keeping the gas trapped within the coal. Once the gas starts to flow, it does so at a much slower rate than from conventional wells.

On the basis of fairly limited data the USGS estimates that the world resource in place could be up to 210 tcm (7800 EJ).[318] For the conterminous United States, they estimate that the resource could be more than 20 tcm, with about 2.8 tcm recoverable with current technology. If we assume a similar ratio applies to the world as a whole, the recoverable resource would be 30 tcm or 1,110 EJ.

Tight gas A major non-conventional resource that is beginning to be exploited is tight gas. This is gas that requires the host rock to be fractured before it will flow. Extraction will benefit from a range of on-going improvements in mining methods, including drilling and fracturing techniques. In the US it is already providing about 20 per cent of local production.[319]

Although tight gas reservoirs exist in many regions, only the US resources have been assessed. The potential resource for that country has been estimated to be 15.7 tcm (583 EJ) with current technology and 19.82 tcm (733 EJ) with 2025 technology.[320] Germany's Federal Institute for Geosciences and Natural Resources has arrived at a global potential of 2856 EJ.[321] This is only a small fraction of the resource in place. Some estimates suggest that there is as much as 424.7 tcm (15,854 EJ) within the state of Wyoming alone.[322]

Aquifer gas Natural gas is often found dissolved in aquifers and the amount dissolved increases substantially with depth. It is variously referred to as aquifer gas, hydro-pressured gas or brine gas and is expected to occur in nearly all sedimentary basins of the world. While no detailed assessment of the resource is available, estimates derived from groundwater volume suggest a resource ranging from 2,400 to 30,000 tcm (90,000 to 1,100,000 EJ) with a mean estimate of 16,200 tcm (600,000 EJ). While highly speculative, these estimates suggest a resource of staggering proportions.[323]

Methane hydrates Gas hydrates are ice-like solids in which water molecules trap gas molecules in a cage-like structure known as a clathrate. They form when water and natural gas combine under conditions of moderately high pressure and low temperature. Because the gas is held in a crystal structure, the gas molecules are far more densely packed than under more normal conditions.

Research so far indicates most of the hydrate takes the form of grains or particles in pores of sedimentary rocks in zones which can range from tens of centimeters to tens of meters in thickness. Gas hydrate also occurs as nodules, laminae, and veins within sediment and sometimes as thick pure layers.[324] These deposits are to be found beneath the ocean floor at water depths greater than about 500 meters and in Arctic permafrost.

The resource in place is believed to be vast with estimates ranging from 2830 tcm to 8.5 million tcm.[325] According to the IEA, the median estimate is about 21,000 tcm (777,000 EJ).[326] For the US, the USGS provides a mean estimate of 9069 tcm.[327] This would suggest that it is at least similar in size to the conventional resources in place of coal, oil and gas combined and possibly some orders of magnitude larger.

A lot of the resource would be quite a challenge to recover because it is widely dispersed in hostile Arctic and deep marine environments, and encased in low-permeability sediments. Large scale production in the near future is easier to imagine if sufficient deposits can be found that are exceptions in these respects. At this stage the existence or extent of such deposits is not known.

The US and Canada are presently investigating the resource potential within North American permafrost and testing various technologies. Japan and India also have significant programs. The US Department of Energy (DOE) expects that we will have the resource knowledge and technology to begin commercial production by 2015.

The various methods being examined involve perturbing the hydrate in place so that it decomposes to constituent natural gas and water. They include heating, injecting various chemicals and decreasing reservoir pressure. Any mining process would have to take into account the risk of major methane releases into the atmosphere. The magnitude and likelihood of such a release are not yet known, nor is the mitigating effect of seawater which can prevent methane from reaching the atmosphere. The global geologic record appears to indicate that destabilization of the hydrate zone in the past has led to very substantial releases of methane. However, the reasons for this are not yet well understood.[328]

Fossil Fuel as a Whole

As things stand fossil resources that are already recoverable have an energy value in the order of 60,000 EJ (see table 3.2). This includes the reserve estimates for coal and non-conventional oil and gas plus the resource estimates for conventional oil and gas.

With total primary energy consumption in 2004 of 470 EJ, and assuming 2 per cent annual energy growth, currently recoverable fossil resources could continue to meet 80 per cent of our energy needs until 2075. To get through to 2100, we would need to increase recoverable resources from 60,000 to 110,000 EJ. This would mean tapping quite a small proportion of the remaining resources in place - 3 per cent of the highly speculative total in Table 3.3, and 6 per cent if we leave out methane hydrates.
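These two dates rest on straightforward compounding. A minimal sketch, assuming fossil fuels supply 80 per cent of a 470 EJ total that grows at 2 per cent a year from 2004:

def cumulative_use(first_year_use, growth, years):
    # Total consumption over the period when use starts at first_year_use
    # and grows at a constant annual rate.
    return first_year_use * ((1 + growth) ** years - 1) / growth

fossil_2004 = 0.8 * 470   # EJ per year, the assumed fossil share of primary energy

print(round(cumulative_use(fossil_2004, 0.02, 2075 - 2004)))   # about 58,000 EJ consumed by 2075
print(round(cumulative_use(fossil_2004, 0.02, 2100 - 2004)))   # about 107,000 EJ consumed by 2100

This is in line with the 60,000 EJ currently recoverable and the roughly 110,000 EJ needed to reach 2100.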

Table 3.2 Currently Available Fossil Resources

Fuel                      Recoverable in near future (Exajoules)
Coal                      21,000
Conventional oil          16,522
Heavy oil                 1,101
Oil sands                 1,224
Shale oil                 1,273
Conventional gas          14,300
Coal-bed methane          1,110
Tight gas                 2,856
TOTAL                     59,386

Sources: See text.

Table 3.3: Resources in place for coal and non-conventional oil and gas (excluding what is already recoverable)

Fuel                      Other resources in place (Exajoules)
Coal                      158,000
Oil sands                 14,800
Heavy oil                 6,200
Shale oil                 14,700
Coal-bed methane          6,700
Tight gas                 >50,000
Aquifer gas               600,000
Methane hydrates          780,000
TOTAL                     1,630,400

Sources: See text.

Increasing our access to the resource by this amount should not place major demands on investment or technological innovation. Remote resources can become less so, recovery rates can be improved in coal mining, and drilling, rock fracture and in situ technologies can improve considerably.

This suggests that it would be possible to continue our reliance on fossil fuels through this century and into the next. However, given the vast levels of energy that would be consumed in the 22nd century and beyond, the resource is certainly an historically limited one. With a continuing growth rate of 2 per cent, the entire resource would be fully consumed during the first part of the 23rd century, while a growth rate of 1 per cent after 2100 would only stretch the resource to the last quarter of the 23rd century. Only with a zero growth rate after 2100 would the resource last until the middle of the millennium.

CO2 Emissions and Global Warming

The biggest question mark hanging over fossil fuels is not their availability but rather the effect on the climate of the carbon dioxide (CO2) released when we burn them. For a given unit of energy, coal is the worst in this regard followed by oil then gas.[329]

Carbon dioxide, methane, water vapor and a number of other gases and aerosols, which reside in the atmosphere, retain some of the heat that would otherwise escape back into space. As a result most of the earth is well above freezing for most of the time. In fact, average global temperatures are 33°C warmer than they would otherwise be.[330] The concern is that we are increasing temperature levels by adding extra CO2. This has a direct greenhouse effect and also an indirect one because the warming increases the amount of water vapor in the air.

Any atmospheric warming effect would follow with a lag of 30 years or more because the oceans absorb much of the heat.[331] The most pronounced effect of any warming would be a rise in sea levels due to the thermal expansion of the oceans and the melting of ice sheets in Greenland and Antarctica. This would occur gradually over centuries and even millennia. The increase in water vapor would lead to increased precipitation overall.

Uncertainties

While there is general agreement that increased CO2 can cause warming, there is considerable disagreement or uncertainty about the extent of the impact. This shows up even between climate models used by researchers. Their predictions of the effect of doubling CO2 from its pre-industrial level range from 1.5 to 4.5 degrees C.[332] The low end is fairly benign and scarcely noticeable while the high end could be far more serious.

There are three major areas of uncertainty and controversy. These are (1) the extent of warming to date and how much of it can be blamed on human emissions of greenhouse gases, (2) whether any increase in clouds from global warming amplifies or diminishes the greenhouse effect and (3) the extent to which emissions ultimately translate into higher concentration levels in the atmosphere.

The surface temperature records show warming at the rate of 0.17 degree C per decade since 1976.[333] However, various doubts have been raised about the figures on which this is based.[334] To begin with, some claim that they fail to adequately adjust for the so-called heat island effect, whereby readings from urban weather stations can be influenced by localized warming due to heat-retaining asphalt, brick and concrete replacing grass and trees. Adjustment problems include: the frequent lack of nearby rural stations for comparisons; the use of population rather than construction growth as an index of urbanization; and the failure to take into account the fact that even quite small towns have a significant heat island effect. Other sources of inaccuracy in the record include the uneven placement of weather stations, with most located in the northern hemisphere, at mid-latitudes and on land; the two thirds decline in the number of stations since 1975;[335] the use of sea surface temperature as a proxy for air temperature over the ocean; and changes in vegetation and structures in the vicinity of weather stations.

It is frequently claimed that warming from the enhanced greenhouse effect has been temporarily dampened by sulfur emissions, which have a cooling effect, and that this masking will diminish with increased pollution controls. However, this does not take into account the fact that reductions in sulfur emissions are accompanied by reductions in soot emissions which have a warming effect. At the very least soot would do much to cancel out the sulfur. Indeed, James Hansen, the NASA researcher who helped father the global warming scare in the late 1980s, estimates that soot may be responsible for 25 per cent of observed global warming over the past century.[336] The sulfur theory also receives no comfort from the fact that there has been a warming in the northern hemisphere, where the emissions mainly occur, compared with a cooling in the southern hemisphere.[337]

Detecting warming is one thing, blaming greenhouse gas emissions is another. A graph that appears prominently in publications of the Intergovernmental Panel on Climate Change (IPCC) is the "hockey stick" which shows temperature levels as fairly flat throughout the last millennium until the beginning of the twentieth century when they rise significantly. This conveys the idea that temperature levels do not normally vary much and that there must be something abnormal happening in the last century, the prime candidate, of course, being greenhouse gas emissions. The way this was devised using tree ring proxy data has recently been subject to what appear to be fatal blows from its critics.[338] It also flies in the face of considerable evidence of climate variability over the last thousand years or so. There appears to have been a warm period from the 8th to 13th century when temperatures were at least as high or even higher than present levels. It was a time when grapes grew in southern England and the Vikings settled Greenland. This was followed by what has been called the Little Ice Age which lasted until the middle of the 19th century. This suggests a cyclical movement with the current warming at least in part being due to an ongoing emergence from the Little Ice Age and that any future warming will not be as far from the normal range as the hockey stick suggests.

Another major area of uncertainty is the effect of clouds. High level cirrus clouds amplify any warming while low level cumulus clouds dampen it. How global warming would affect the absolute and relative prevalence of these two types is quite unclear. One theory presently being assessed claims that ocean warming leads to a reduction in overhead cirrus clouds, and this could do much to dampen the greenhouse effect. It has been dubbed the iris effect because it can be compared to the way the iris of the eye opens and closes in response to changing light levels.

Another source of uncertainty is the extent that CO2 emissions translate into higher concentration levels in the atmosphere. This depends on the workings of the so-called carbon cycle which is far from well understood. CO2 is held in the atmosphere, the biosphere and the oceans and there are constant exchanges to and fro between them. The atmosphere contains around 800 gigatonnes of carbon (GtC), the land biosphere (plants, animals and soil) 2000 GtC, surface ocean 1000 GtC and intermediate and deep oceans 38,000 GtC.[339] The level of CO2 in the air could be just as affected by natural variations in the exchanges between the atmosphere and the biosphere or oceans as by emission levels. Another wildcard here is the extent that warming or increased CO2 levels lead to either positive or negative CO2 feedbacks. On the one hand, increased CO2 can lead to increased plant growth which leads to greater uptake of CO2. On the other hand, changed climate conditions may lead to increased decomposition and greater release of CO2.

Alarmism

The picture is made even murkier by the alarmism encouraged by various elements in society. First, there is the environment movement which has a penchant for discovering catastrophic consequences in everything that humans do. Secondly, among scientists in the area there is a tendency towards alarmism because those who shout "big problem" either out of honest belief or opportunism tend to get more than their due share of research funding. And then to top it off, we have the media which is more than happy to be fed sensational stories of looming disaster.

Two prominent global warming advocates have openly admitted that it was OK to exaggerate in order to get the ball rolling. According to Professor Stephen Schneider of Stanford University, drawing attention to global warming required "getting loads of media coverage. So, we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have."[340] James Hansen the NASA scientist who kicked off the greenhouse scare in 1988 at a US Congressional hearing, admitted in 2003: "emphasis on extreme scenarios may have been appropriate at one time, when the public and decision makers were relatively unaware of the global warming issue."[341] Possibly others consider there is still sufficient urgency to justify embellishment, particularly given the limited level of effective government response.

A lot of the alarmism is about serious and nasty climate change that is supposedly already happening in the here and now. We are presented with smoking guns or asphyxiated mine canaries that can only prompt us to think the worst about what is in store for us. We are being told that the small amount of warming that has occurred in the last 100 years is melting our ice caps and glaciers and causing more extreme weather events. Even increased plant growth from higher CO2 levels can be given a gloomy spin.

We're Melting!

One of the most popular shock-horror genres involves stories of melting glaciers and ice caps. While climate science predicts a very slow melting over many centuries if warming is sustained, the more zealous climate worriers would have us believe that our cities are just about to be inundated. For example, Greenpeace's spokesperson on global warming claimed not long ago that by 2080 Manhattan and Shanghai could be underwater if we did not rapidly cut back on greenhouse gases.[342]

Antarctica

We have been told that Antarctica, the largest store of ice, is warming and this is responsible for stressing out penguins and causing ice shelves to crash into the sea. However, warming reports have relied excessively on the Antarctic Peninsula which is only 1/50th of the entire continent. In this area there has been a warming of about 4.5 degrees Fahrenheit (2.5°C) since 1945 and the melt season has increased by 2 to 3 weeks in just the past 20 years. However, if you look at Antarctica as a whole the story is quite different. Meteorological data indicates that there has been an overall net cooling on the continent between 1966 and 2000.[343] This cooling was during summer and autumn, the only seasons when a temperature change can have an effect on ice formation or melting.

Back in 1998 a very large iceberg broke away from the Ross Ice Shelf. This of course was attributed to global warming by many despite the fact that the temperature record shows no warming in the region. According to Antarctic researchers from the University of Wisconsin, "The breakage is part of the normal iceberg formation or 'calving' that comes as thick layers of ice gradually slide down from the high Antarctic plateau, and is not related to climate changes or global warming."[344] Furthermore, studies show that the glaciers feeding the Ross Ice Shelf are actually getting bigger. So there is no reason to believe that it will contribute to rising sea levels any time soon.[345]

Notwithstanding popular perceptions, climate scientists expect that any global warming during the 21st century will cause the Antarctic to make a negative contribution to sea level because the warmer, wetter atmosphere will lead to more snow over the continent.[346]

Greenland

The other main potential source of melting land-based ice is Greenland. However, at least for the moment it does not look like much is happening. According to Thomas et al., there is an overall balance with some regions thickening and others thinning.[347] Krabill et al. estimate that there is an overall balance above 2000 meters but an overall thinning at lower elevations, sufficient to raise sea level by 0.13 millimeters per year.[348] That is much less than the thickness of a toothpick. So to have any effect at all on sea level this century, any ice melting will have to get a move on. Even the IPCC expects that Greenland will contribute little if anything to sea levels over the next century. They see a contribution ranging from -0.02 to 0.09 meters.[349]

If we are going to see Greenland ice sheets crashing any time soon we will also need to see quite a lot more signs of warming in that region than we have seen to date. According to Krabill et al., for the period 1900-95, the highest summer temperatures were in the 1930s while the last 15 years of the period were about half a degree colder than the ninety-six-year mean. According to Hanna and Cappelen, temperatures in the southern coastal region have dropped 1.29 degrees C since 1959; and the nearby sea surface has seen a similar downward temperature trend over the same period.[350] Chylek reports that the Greenland coastal stations data have undergone predominantly a cooling trend since 1940, and at the summit of the Greenland ice sheet the summer average temperature has decreased at the rate of 2.2 degrees C per decade since the beginning of measurements in 1987.[351]

Mountain Glaciers

While a comparatively small source of melting ice, mountain glaciers are a veritable treasure trove of inaccurate claims of global warming in action. A number of examples should give the general flavor. One of the more renowned is the case of Mount Kilimanjaro. Claims repeated by various prominent people such as Senator John McCain and Sir David King, the UK's chief scientist, that global warming has caused snow and ice loss on Mount Kilimanjaro have been a media attention grabber. However, it has been melting since the end of the 19th century and the melting over the last 30 years represents the slowest rate of decline since 1912.[352] And it is hard to blame warming even for that given that temperatures don't vary much around the annual average of -7.1 degrees C and satellite data since 1979 show a slight cooling over the mountain and surrounding region. The culprit appears to be the fact that it has been drier over the last century. "With less snowfall to replenish the glacier and less cloud cover to shield it from solar radiation, Kilimanjaro lost glacial mass even during periods of global and regional cooling."[353]

In early September 2001, NBC ran a report claiming that the melting of glaciers in Montana's Glacier National Park is due to warming. It is true that there has been a 3.5°F warming if you only go back to 1950. However, if you examine the entire temperature record over the last century or more there is no upward trend, with current average summer temperatures being much as they were at the beginning of the record.[354]

On July 9 2001, the Washington Post published a story claiming that glaciers were receding in Peru because of global warming. It reported the claims of a local glaciologist that this was due to rising temperatures. However, a look at the records indicates that there has been no warming in the region over the last two decades. Furthermore, Peru's glaciers have been receding for at least 150 years.[355]

On March 14 2005 the Reuters news agency cited a World Wildlife Fund press release about retreating glaciers in the Himalayas. The release was especially concerned with the Gangotri glacier, which it said is retreating at an average rate of 23 meters per year. It is true that the glacier has been retreating at an increasing rate and that summer temperatures have increased since 1990. However, the glacier has been retreating over most of the last century and the acceleration in the rate began in the mid 1950s.[356] Also the summer temperature increase since 1990 comes after a dip in the 1970s and 1980s and temperatures are still lower than they were during the 100 years prior to that.[357] This looks more like a long term retreat which has little or nothing to do with warmer weather.

Arctic

While not contributing to rising sea levels, in much the same way that melting ice cubes do not raise the water level in a glass, melting sea ice in the Arctic would still be of some interest if it could be shown to be a smoking gun for global warming.

Greenpeace makes much of a 5.5 per cent decline in Arctic sea ice since 1978.[358] However, to see a human cause is to mislead with statistics. Data from a range of sources indicates no long term trend in Arctic temperatures.[359] What does show up, though, is a temporary dip in the 1970s and a recovery since then to the levels of the 1930s and 1940s. Weather balloon data does show some long term warming in winter; however, this is not going to affect ice cover, given that ice does not melt at that time of year.

Studies in 1999 and 2000 of measurements taken by submarines seemed to suggest that there had been a 42 per cent loss of sea ice thickness over the last 40 years. This is one of the most quoted claims in the IPCC's Third Assessment Report. At the same time, however, other studies attribute some or all of the 42 per cent decrease to the location and timing of the submarine cruises and show either no significant decrease or a more modest 12 to 16 per cent. A panel commissioned by the Arctic Ocean Science Board to review the research accepted the possibility that the observed thinning was the product of sparse data coverage and large inter-annual variability of sea-ice thickness and hoped that new satellite-based measuring methods presently under development would provide data of the quality required.[360]

On August 19 2000, the New York Times reported on page one that "North Pole is Melting" and that "the last time scientists can be certain that the Pole was awash in water was more than 50 million years ago." The Times based its story on a call from a couple of scientists on board a Russian ice breaker, one of whom was a professor of oceanography and co-chair of the IPCC's Working Group II ("Adaptation and Impacts of Climate Change").[361] The ship found itself in open Arctic water and the scientists were convinced that this was a sign of global warming. However, the Times finally issued a not very prominently placed retraction after it received numerous eyewitness accounts and photographic evidence of open water at the Pole in previous years. Presidential hopeful Senator John Kerry obviously missed the retraction because he said on May 1 2001 "…[T]his summer the North Pole was water for the first time in recorded history."[362]

More Frequent and Violent Storms

As a matter of routine, whenever there is a violent storm, the media and the alarmists inform us that such events are becoming more frequent and violent and that this is due to global warming. In its 2001 report the IPCC found "no compelling evidence to indicate that the characteristics of tropical and extra-tropical storms have changed" during the 20th century.[363] In the case of hurricanes from the Atlantic which do so much damage when they make landfall in the US, there has actually been a decline in both frequency and intensity from 1944 to 1995.[364] As for US tornadoes or twisters, there is no upward trend once you allow for the effects of improved monitoring; and in the case of killer tornadoes, categories 3 to 5 on the Fujita scale, there is a slight declining trend.[365]

Even Good News is Bad News

When it comes to doom and gloom, even good news can be turned into bad news. This is what happened when Bill Clinton's Secretary of Agriculture, during the 2000 election campaign, hyped a report about how increased CO2 was leading to more ragweed pollen and this would cause more hay fever.[366] This was a very odd perspective given that any encouragement to ragweed from increased CO2 also applies to plants generally. It would have made more sense to announce that increased CO2 was leading to more food and forests.

What about Eco-Catastrophes?

If the impacts of a bit of global warming were extremely severe, there would then be a strong case for immediate and drastic emission reductions. This is where prophecies of eco-catastrophes come in. The best known of these are (1) the melting of the Arctic permafrost subsoil, and (2) the closing down of the Gulf Stream.

Much of the permafrost - permanently frozen subsoil - of the Arctic regions of North America, Europe and Siberia has a surface covering of peat which holds an estimated 14 per cent of the world's carbon. Peat consists largely of organic residues which have not decomposed because of the high moisture environment. Rising temperatures could thaw the subsoil and lead to a lowering of the water table. This would dry out the peat which would begin to decompose and release CO2 into the atmosphere.

As discussed above there has been no warming to date in the Arctic region as a whole and this gives weight to lower warming projections. Also studies of permafrost in Barrow Alaska[367] and northern Quebec showed no signs of thawing.[368]

A number of studies have found evidence that thawing of permafrost can actually be associated with increased carbon sequestration by peat lands. Warmer climate would lead to greater levels of vegetation as would the fertilizing effect of higher levels of CO2. Research also indicates that plants growing in a more CO2 rich environment decompose less readily.[369]

A deep-enough thaw of permafrost could destabilize underlying methane hydrates leading to the release of methane. This could have a major impact if occurring on a large enough scale. However, to have a long term effect it would need to be sustained because methane only has a life of about 10 years in the atmosphere.

It has been claimed that global warming could close down the Atlantic Gulf Stream which pulls warm water from the tropics to the higher latitudes and is believed to provide western Europe with a far milder climate compared with similar latitudes in North America. This would require the 30 million people living in the Scandinavian and Baltic countries to adapt like the Eskimos or move south. For most of the continent it would presumably mean having a climate much like Ontario and Minnesota which arguably is not an eco-catastrophe.

According to the scenario, the warmer currents would cease to travel to European waters because melting ice and increasing rainfall in the North Atlantic would switch off the process of thermohaline circulation. This relies on evaporation and iceberg formation making surface water more salty so that it sinks to the deep.

However, according to Carl Wunsch, an authority on ocean currents:

European readers should be reassured that the Gulf Stream's existence is a consequence of the large-scale wind system over the North Atlantic Ocean, and of the nature of fluid motion on a rotating planet. The only way to produce an ocean circulation without a Gulf Stream is either to turn off the wind system, or to stop the Earth's rotation, or both.

. . . The occurrence of a climate state without the Gulf Stream any time soon - within tens of millions of years - has a probability of little more than zero.[370]

Some scientists have also raised doubts about whether the Gulf Stream actually has a pivotal role in Europe's weather.[371] Their research indicates a greater role for circulation of the air rather than the ocean. Firstly, because the prevailing winds across the Atlantic blow from west to east, western Europe benefits from the fact that the ocean gradually releases in winter the heat it has stored in summer. Added to this is the effect of the Rocky Mountains which influence the flow of winds within the atmosphere. These tend to bring cold winds from the north into eastern North America and warm winds from the south into western Europe.

All sorts of disasters are possible in future ages, human induced or otherwise. The longer the time frame the more likely they are. In fact, one seems to be a dead certainty, namely, the next Ice Age. However, the further humans travel into the future, the greater will be their ability to understand, adapt to and control natural forces.

Business as Usual for Now

With no signs of eco-catastrophe, there seems to be no strong reason to stray far from a "business as usual" approach, at least for the next couple of decades. That of course does not rule out a strong research and development program for emission free technologies plus some assistance to get them operating on a large scale. This will give us a wider range of options further down the track.

It also leaves open the prospect of keeping within a doubling from pre-industrial levels of greenhouse gas concentrations (i.e., 560 parts per million (ppm) measured in CO2 equivalent). Here is one scenario just to illustrate the point. With greenhouse gas concentrations currently at 430 ppm CO2 equivalent and energy's share of emissions at 57 per cent,[372] let us allot energy another 75 ppm of the remaining 130 ppm prior to all emissions falling to one GtC per year. (This is the level that the various carbon sinks can absorb.) We are assuming here that other sources of greenhouse gases are reined in to the same degree.[373] With annual energy emissions contributing 6.53 GtC[374] at the moment and these increasing by 2 per cent per year until 2025, that will add another 89 GtC (42 ppm) if we make the usual assumption that 50 per cent is retained in the atmosphere. However, by introducing a 5.6 per cent annual reduction after 2025 we would get emissions down to well under one GtC by 2060 with the additional contribution to the atmosphere from now until then of 161 GtC (75 ppm).
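The arithmetic behind this scenario can be laid out in a few lines. The sketch below assumes emissions of 6.53 GtC in 2004, 2 per cent annual growth to 2025, a 5.6 per cent annual decline thereafter, 50 per cent of each year's emissions retained in the atmosphere and 2.13 GtC per ppm of CO2; the base year and the GtC-to-ppm factor are our assumptions, the rest are the figures used above.

GTC_PER_PPM = 2.13                  # assumed conversion from airborne carbon to concentration

emissions = 6.53                    # GtC emitted in the assumed base year, 2004
retained = 0.5 * emissions          # running total of carbon staying in the atmosphere
for year in range(2005, 2061):
    emissions *= 1.02 if year <= 2025 else 1 - 0.056
    retained += 0.5 * emissions
    if year == 2025:
        print(year, round(retained), round(retained / GTC_PER_PPM))   # about 89 GtC, about 42 ppm
print(2060, round(emissions, 1), round(retained), round(retained / GTC_PER_PPM))
# emissions of about 1.3 GtC a year and still falling, about 161 GtC added, about 76 ppm

The endpoint figures shift by a few GtC or ppm depending on the base year chosen, but the broad shape of the scenario is as described.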

Adapting to any Climate Change

Cutting back on greenhouse emissions is not the only approach to possible climate change. Adapting to it is another. And the best way to help people in the future to adapt is to increase the rate of economic progress. The higher the level of economic development and know-how, the better they can meet any challenges. Air conditioning and better insulation will protect people from any increase in heat waves. (Although, it should be kept in mind that much of any warming would not take the form of higher afternoon summer temperatures. A lot of it would be warmer winter nights.) Better housing and emergency infrastructure such as warning systems, shelter and rescue services can reduce death and misery if global warming leads to more extreme weather conditions such as floods. Better public health, vector eradication, treatments and cures can counter any climate induced movement of diseases such as malaria.

Adapting to any sea level rise should not be a great strain given the long time frame involved. In the 21st century any increase would be confined to thermal expansion and would perhaps be double or triple the 18 cm (7 inch) rise which we had no trouble dealing with during the 20th century and were generally unaware of. The melting of the ice sheets would only have an effect more than a century down the track and would occur over many centuries. People would have plenty of time to either build retaining works or move to higher ground.

Global warming is not expected to have a net negative effect on agricultural production. While some areas could be adversely affected by increased flooding, soil evaporation or drought, other areas would have better farming conditions such as longer growing seasons and more rainfall. And all regions could benefit from the fertilizing effect of CO2 which increases plant growth and tolerance of poor conditions. Any area adversely affected could respond either by increasing food imports or introducing plants and livestock with higher stress tolerance.

If one is concerned about inequitable effects of global warming because of the greater vulnerability of the poor, a focus on economic development including increased aid has the benefit of assisting people right now and not just when climate change hits 50 years or more from now. Even a relatively small proportion of what it would cost developed countries to seriously reduce CO2 emissions over the next few decades would make a huge difference in developing countries, assuming it was accompanied by the kinds of political changes that are required for economic development. In other words it could not be the old routine whereby the World Bank finances kleptocrats and demagogues. (See the discussion in the final chapter.)

A similar approach can be applied to threatened impacts of climate change on the natural environment. In particular there is a concern that natural systems, and particularly flora and fauna, will find themselves with new climates to which they are not suited, while lack of time or physical barriers, including human habitation and agriculture, prevent a shift to a more suitable location. Over coming decades we can slow down and halt human encroachment by expanding and improving national parks and other forms of nature protection, and increase the efficiency of agriculture so that more food can be produced from a given area of land. And our descendants with their higher level of economic development and scientific knowledge will have increasing means to protect biodiversity.

Another low pain way of helping people in the future is to fund research and development into technologies with long lead times which will increase their ability to reduce emissions. This is something we are already doing with government supported research and development into solar, wind, geothermal and nuclear energy and also fossil fuel technologies that allow for the capture and storage of CO2. These efforts should include building up a few decades of experience of full scale operation.

CO2 Capture and Storage

Cutting CO2 emissions does not have to mean abandoning fossil fuels as a major resource if economic means can be found to capture and then store the CO2 they produce. This is a burgeoning area of research and is generally referred to as sequestration. For point sources such as power stations, capture would be part of the production process while for diffuse sources such as motor vehicles and heating it would require extracting the gas from the atmosphere. Emissions from these two sources are roughly equal.[375]

Three different technologies are being considered for emission capture from power plants and other point sources. The one that is nearest to being operational would extract CO2 from exhaust gases. These would be bubbled through a liquid solvent which would dissolve the CO2. The solvent would then have to be heated to release it for storage. This process consumes a lot of energy and could increase electricity costs by as much as 70 per cent,[376] although research in progress promises to reduce exhaust capture costs considerably.

Another approach called oxyfuel technology would separate oxygen from air and then burn it with hydrocarbons to produce an exhaust with a very high concentration of CO2, and so eliminate the need for separation. The main challenge is to develop a less expensive way of producing oxygen. A number of new techniques are being tested at pilot scale.[377]

The third approach is called pre-combustion decarbonization where natural gas or coal is converted to hydrogen and CO2. The CO2 is compressed for storage and the hydrogen is available as a fuel which only emits nitrogen and water. This has the advantage of providing emission free fuel for motor vehicles as well as for electricity generation. FutureGen, a partnership between the US, a number of other governments and private industry, aims to develop this technology over the next 15 years at a cost of $1 billion. The plan is to have a 275 MWe coal fuelled prototype plant operating in a decade which captures at least 90 per cent of CO2 and only increases electricity costs by 10 per cent or 0.2 cents per kilowatt-hour. The program then aims to develop further improvements which would lead to technologies that achieve near zero CO2 emissions and do not add to energy costs.

If we want to capture CO2 emissions regardless of their source, including diffuse ones such as motor vehicles and home heating, we need to extract it from the atmosphere. According to the proponents of this idea, the air would need to be exposed to a sorbent, an agent that absorbs CO2. This would require an array of units distributed across the landscape much like wind turbines. The big difference is that the land area requirements for a particular amount of energy produced from fossil fuel would be two orders of magnitude less than that required to produce the same energy with wind turbines.[378] In these units air would be blown by fans onto a flowing sorbent. The CO2 would then be removed and stored, and the regenerated sorbent recycled.

At the moment the only practical sorbent is calcium hydroxide which would combine with CO2 in the air to produce calcium carbonate (CaCO3). This would then be heated in a closed vessel to produce calcium oxide and CO2. The calcium oxide would then be returned to the water to regenerate the sorbent. Proponents estimate that it would cost between 20 and 25 US cents per gallon of gasoline.[379] However, they are hopeful that another absorbing agent can be found which requires far less energy at the processing stage. While most of the CO2 from a point source would be more cheaply removed in-house rather than later from air capture because of the higher concentrations, this may not be the case for the final 10 per cent or so. In other words, the least cost approach may be to capture somewhat less than 100 per cent at the source and rely on air capture to pick up what was missed. Air capture could also have the advantage of being more readily placed near CO2 storage facilities.
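For reference, the sorbent cycle just described amounts to three standard reactions (textbook chemistry rather than anything specific to the proposal cited):

Ca(OH)2 + CO2 → CaCO3 + H2O   (capture of CO2 from the air)
CaCO3 + heat → CaO + CO2      (CO2 driven off for storage)
CaO + H2O → Ca(OH)2           (sorbent regenerated in water)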

Enhancing nature's very own air capture is another approach. Plants, microbes and soil normally absorb a considerable amount of CO2 from the air. We can plant trees and other vegetation and encourage life that is particularly efficient at absorbing CO2. For example, it has been suggested that aquatic microalgae which have carbon fixing rates that are higher than those of land-based plants by one order of magnitude, could be installed in photobioreactors arranged much like solar panels. They would produce a stable carbon compound ready for storage.[380] Biotechnology might help things along by breeding plants that grow more quickly or are in other ways more efficient at carbon fixing. And in the case of soil, CO2 capture should be improved by the move to conservation tillage which leaves the soil less disturbed.

Once CO2 is captured it can be stored in the ocean or underground, or converted to solid and harmless rock. The underground option is receiving most attention at the moment and includes storage in deep saline aquifers, depleted oil and natural gas fields and deep coal beds. Of these, the aquifers have the largest capacity with estimates ranging from 2,700 GtC to 13,000 GtC.[381] These saline formations are layers of porous rock that are saturated with brine. They include not just the aquifers with structural traps, which have a relatively small capacity of perhaps 10 years' worth of emissions, but also the more extensive open ones. The latter are thought to be suitable as long as the CO2 is injected far enough from reservoir boundaries that it dissolves in the water or precipitates out as a mineral through reactions with the surrounding rock before migrating more than a few kilometers towards the basin boundaries.[382] The use of aquifer storage is proving successful in the North Sea where CO2 stripped from natural gas produced at the Sleipner gas field by the Norwegian oil company, Statoil, has been injected for the last 5 years into the Utsira Formation, some 1,000 meters under the sea bed.

CO2 can also be injected into depleted wells to push out otherwise inaccessible oil and gas, or into unmineable coal seams to dislodge coal-bed methane. Geologists estimate that as much as 500 GtC can be locked away in such sites.[383] This is about two-thirds of all the carbon in the atmosphere today.

CO2 can be disposed of by converting it into solid rocks called carbonates through a reaction with certain kinds of minerals. This would be inherently more stable than storage as a gas or liquid, and more compact. Recent research indicates that a process which naturally takes place over extremely slow geological time scales can be accomplished within minutes under certain temperature and pressure conditions.[384] The raw materials required for this process exist in vast quantities across the globe. Estimated mining and mineral preparation costs are currently not prohibitive, but work still needs to be done to reduce the energy required for the process.[385]

Storage in the ocean depths is another possibility. The amount of CO2 that would cause a doubling of the atmospheric concentration would change the ocean concentration by less than 2 per cent. However, as a general rule about 20 per cent would eventually return to the atmosphere after a period of somewhere between 300 and 1000 years,[386] and the resulting reduction in pH levels may have environmental consequences. These effects would be obviated if the CO2 could be kept in an isolated form, e.g., if injected in such a way that it turns into a carbonate or CO2 clathrates. Indeed, if methane hydrates from the ocean floor are ever exploited, it may prove possible to store captured CO2 as clathrates in the same deposits from which the methane was extracted, given that they are stable under similar pressure and temperature conditions.[387]

Solar Energy in its Various Forms

Solar energy can be either harnessed directly as it strikes the planet or after it has taken on an earthly garb. The latter forms include wind, waves, falling water (i.e., hydropower) and plant biomass. Wind is the horizontal movement of air caused by the sun's uneven heating of the earth's surface, while waves are created by wind blowing over sea water. Hydroelectric power has its origin in the evaporation of water by the sun and its subsequent precipitation on land at high altitudes. Plants convert solar energy through photosynthesis into chemical energy which can then be burnt for heat.

Direct Solar

The heat from the sun can be used to warm water, to heat buildings and to drive electric generators while its rays can be captured by photovoltaic cells and converted into electricity. Other possible future methods of exploitation are the channeling of light into buildings through optic fibers and the use of solar energy to split water to create hydrogen which can then be used as a fuel.

At the moment the most commonly employed means of harnessing solar energy is in domestic water or space heaters attached to the roof. These are large glass covered boxes which absorb heat and then transfer it to water or some other fluid through a system of pipes. According to the World Energy Council (WEC), only 2 m2 of collector area is required to provide 80 per cent of the water heating demands for a family in a Mediterranean climate.[388]

Space heating can also be provided by 'solar architecture' where buildings are designed to capture the sun's heat. Large windows are positioned to maximize intake of radiant heat during the cooler months. Part of the heat warms the inside air while the rest is absorbed by specially designed inner walls which slowly release the heat once the radiant heat begins to decline late in the day. The escape of heat from the building is retarded by well sealed and insulated walls and windows which freely allow solar radiation in but are slow to conduct the heat out again.

At the moment a very small share of our electricity is provided by photovoltaic (PV) technology which is widely used in niche markets such as powering unmanned equipment or isolated homesteads or communities away from the power grid. In the case of households, PV panels are either attached to the roof or arrayed on nearby land. The panels comprise flat crystal cells made of semiconductor material, usually silicon, which absorbs light and then releases electrons which flow through an external circuit to generate electricity.[389] With the current state of the technology, about 10-15 percent of the solar energy that strikes the cell's surface is converted into electricity.

A number of thermal systems of electricity generation have also been developed, although at this stage these are confined to a handful of trial projects. Some systems focus solar energy at a particular point using reflectors and use heated fluid to drive an electric generator. The reflectors track the sun during the course of the day to maximize the sunlight hitting them. These concentrating technologies can be classified into three types. Reflective parabolic troughs focus sunlight on a fluid filled receiver tube running along their front. Reflective parabolic dishes focus heat at the focal point of the dish where a receiver containing heated fluid drives a smaller generator. So-called power towers use a large number of sun tracking flat plane mirrors to focus sunlight onto a central receiver on top of a tower. Another very different system is the solar chimney. Instead of concentrating sunlight it relies on a greenhouse effect. A small facility has been trialed in Spain and a large scale commercial operation is planned for near Mildura in Australia.[390] In the case of the latter, the chimney will be surrounded by a 7 kilometer diameter 'greenhouse' which creates a hot draft that is sucked up the chimney where it drives electricity generators.

Solar lighting is a technology which is near the operational stage. Dishes on the roof, guided by a tracking system, collect the sunlight and 'feed' it along fiber optic cables to supplement electric lighting. Sensors keep the room at a steady lighting level by adjusting the electric lights based on the sunlight available.[391]

The development of methods that use solar energy to split water and produce hydrogen is presently the focus of research. Three technologies are being looked at. The photoelectrochemical solar cell is the closest to being ready for use, although still requiring a lot of work. The other two would appear to be more long term. The photobiological process would use specialized microorganisms which with the aid of sunlight and water produce hydrogen as a by-product of their metabolism. Because existing organisms such as green algae and cyanobacteria do this too slowly, some of the current research effort involves developing a genetically modified one that will be far more efficient at this task. Another approach, a kind of artificial photosynthesis, would adapt the process employed by plants which transform solar energy into chemical energy with the aid of carbon dioxide and water. The hope is that at least one of these methods would be less costly and more energy efficient than using the electricity from solar energy to produce hydrogen by electrolysis. In this process an electric current passing through water splits it into hydrogen and oxygen.

Nature of the Resource

The sun is a very diluted resource so harnessing it will require a lot of capital equipment spread over large areas. It is also highly variable from hour to hour, day to day, week to week, season to season and location to location. When not relied on too heavily, other resources can fill the breach when there is insufficient insolation. However, if solar is to be a major source of energy we need to get around this variability by transmitting electricity over the vast distances from regions with a solar energy surplus to ones with a deficit and by converting solar energy to hydrogen which can be stored and transported. Achieving these objectives will require a lot more research and development.

Extent of the Resource

Below is an assessment of the size of the various sources of solar energy and the extent to which they could meet our energy needs.

Deserts

The deserts of the world are often mentioned as an ideal place to install solar panels. It is land that we don't need for other purposes and nature would usually not feel greatly put upon. Deserts take up about 20 million square kilometers which is about 15 per cent of the ice free land area. The Sahara makes up about 45 per cent of this. Other major desert areas are to be found in Australia, the Middle East, Mexico, south west US, Chile and south west Africa.

To give a world with 10 billion people the average per capita electricity consumption presently found in rich countries would require a total output of around 100,000 TWh per annum which is a 6-fold increase over the 2004 level.[392] With the average annual insolation for these desert regions at around 2,300 kWh/m2, with current technology achieving an energy efficiency of 10 per cent, and with panels occupying land equal to twice their own area to prevent them casting shadows on each other, we would need 4.3 per cent of this area. This does not take into account the energy losses from long distance transmission and from converting electricity into a portable resource such as hydrogen. So a somewhat larger area would actually be required. If we were to produce all the 2300 EJ (639,000 TWh) of primary energy (and not just the electricity) required by 10 billion people consuming the current rich-country average, we would take up 28 per cent of the desert area. There would be transmission and conversion losses here too. However, these may not be far greater than those incurred in the conversion of coal to electricity and oil to refined fuel.
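The land percentages follow from a short calculation, sketched below; the insolation, efficiency and land-to-panel ratio are the figures just stated, and the conversion of TWh to kWh is the only thing added.

desert_area_km2 = 20e6        # total desert area, about 15 per cent of ice-free land
insolation = 2300             # kWh striking each square meter of panel per year
efficiency = 0.10             # assumed efficiency of current panels
land_to_panel = 2             # land occupied is twice the panel area

kwh_per_km2 = insolation * efficiency / land_to_panel * 1e6   # annual output per km2 of desert

electricity_twh = 100_000     # rich-country electricity for 10 billion people
primary_twh = 639_000         # all primary energy for 10 billion people

print(electricity_twh * 1e9 / kwh_per_km2 / desert_area_km2)  # about 0.043, i.e. 4.3 per cent
print(primary_twh * 1e9 / kwh_per_km2 / desert_area_km2)      # about 0.28, i.e. 28 per cent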

Attached to Roofs and Other Urban Surfaces

Another place for solar panels which avoids conflict with other uses is the space on roofs and various other surfaces in close proximity to electricity consumers such as walls and unused land beside freeways and train tracks. Being close by, there is not the cost or energy loss from long distance transmission.

The extent of this resource will vary from one region to another depending on the level of insolation.[393] Northern and central Europe, Russia and China fare the worst with insolation ranging from 700 to 1,400 kWh/m2. The best placed are the West and mid-West of the US, Australia, the west coast of South America, most of Africa, the Middle East and South Asia. Here insolation is 1,900 kWh/m2 or above.

Even in a country such as Holland with low insolation and fairly high population density, PV cells on residential roofs could provide a significant proportion of household electricity needs. According to a study commissioned by Greenpeace, there are 20 m2 of residential roof space per person in that country, and with an annual insolation level of around 800 kWh/m2 yielding 80 kWh of electricity per m2, this would supply 1,600 kWh per person.[394] That would provide 23 per cent of current consumption given the country's population of 16.27 million and total consumption of 112.67 TWh.[395] Coincidentally, with 23 per cent of electricity in Holland going to residential use,[396] this would be equal to current household consumption. If the share of electricity going to residential consumption were the same as the rich country average (31 per cent), [397] and the level of consumption were the same as the rich country average of 9,710 kWh per head, [398] the proportions would be quite a lot less - 16 and 53 per cent rather than 23 and 100 per cent.

For areas with the Dutch level of housing density, annual production from residential roofs would suffice for average rich country domestic consumption at an insolation level of 1,505 kWh/m2.[399] That covers the sunnier regions of the world.

Other urban surface areas can also be employed. The Greenpeace study claims that in Holland non-residential roofs cover 96 km2 or 30 per cent of the area of residential ones. To that we can add building walls and land adjacent to airports and running alongside freeways, highways and train tracks. If we conservatively assume that these other surface areas as a whole are 50 per cent of residential roofs, this gives a total area of 30 square meters per person. This would provide 35 per cent of Holland's present total consumption and 25 per cent if consumed at the average rich country level.
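The Dutch percentages in the last two paragraphs reduce to a few lines of arithmetic, using only the figures given above:

roof_m2_per_person = 20            # residential roof space
other_m2_per_person = 10           # other surfaces, assumed to be 50 per cent of residential roofs
kwh_per_m2 = 80                    # 800 kWh/m2 of insolation at 10 per cent efficiency

population = 16.27e6
dutch_consumption_kwh = 112.67e9 / population       # about 6,900 kWh per person at present
rich_country_kwh = 9710                             # rich country average per person
rich_household_kwh = 0.31 * rich_country_kwh        # residential share of 31 per cent

roofs = roof_m2_per_person * kwh_per_m2             # 1,600 kWh per person
all_surfaces = (roof_m2_per_person + other_m2_per_person) * kwh_per_m2   # 2,400 kWh per person

print(roofs / dutch_consumption_kwh)          # about 0.23 of current Dutch consumption
print(roofs / rich_country_kwh)               # about 0.16 of rich country consumption
print(roofs / rich_household_kwh)             # about 0.53 of rich country household use
print(all_surfaces / dutch_consumption_kwh)   # about 0.35 of current Dutch consumption
print(all_surfaces / rich_country_kwh)        # about 0.25 of rich country consumption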

It is assumed in these calculations that any mismatch between the supply of sun and the demand for domestic electricity can be evened out by net additions or subtractions from a much larger electricity grid based on other sources of energy and that there is no need to take into account energy losses which would occur if battery storage was used to provide power at night or during cloudy periods. Also not accounted for is the option of devoting some roof space to thermal units for heating and cooling.

Other Areas

When we move beyond deserts and areas of dense human habitation, solar facilities are more likely to conflict with other uses for land, in particular the natural environment and agriculture. Nevertheless, there are still considerable areas other than deserts which are of limited value to farming and to nature. These include grasslands, savannas and semi-deserts which are on a similar scale to the desert regions and include: the Eurasian Steppes, the US prairies, the Pampas of South America (northern Argentina and Uruguay), the vast sheep and cattle runs, and semi-deserts of Australia and the arid areas south of the Sahara Desert.

Wind

Unlike direct solar, wind energy is already being harnessed on a large scale. In 2005, global wind generating capacity was over 51,000 MW and this generated around 100 TWh per annum.[400] This is more than a tripling over the last five years. However, it is still well under 1 per cent of total power generation. The only country that has so far placed significant reliance on wind is Denmark, with Germany being the biggest producer of wind power in absolute terms, while most of the remaining capacity is found in the US, Spain and India.

Wind turbines typically are equipped with three-bladed rotors which are anywhere up to 100 meters in diameter. These are turned in the direction of the strongest wind with the aid of an onboard computer and drive a generator with a rated capacity between 600 kWe and 2 MWe. These are mounted on towers that are generally between 40 and 100 meters high.[401]

There have been two studies which estimate the on-land resource.[402] Both give a total resource base of around 500,000 TWh/year from regions with wind speeds of more than 5 meters per second. The study by the World Energy Council (WEC) assumes that this resource is found in 27 per cent of the ice-free land area (i.e., 36.2 million square kilometers). The Grubb and Meyer study estimated that 10 per cent of this area was available after allowing for accessibility and competing uses and could harvest just over 50,000 TWh, while the WEC study gave a more conservative estimate of 4 per cent and just under 20,000 TWh.

The total resource potential from these areas would increase with improvements in the technology that allow more effective capture of the available energy. Furthermore innovation would increase our ability to more effectively exploit areas with lower wind speeds.

As well as wind on land, there is wind off-shore. At least in theory this is a much larger resource: about three quarters of the earth's surface is covered by oceans and their wind speeds are higher on average. For the moment, however, the exploitable resource is confined to the coastline and relatively shallow water. As the distance from land increases, the cost of transmitting the power back to shore increases sharply, and the deeper the water the higher the construction costs. However, building on the know-how of off-shore oil and gas rigs, wind farms will in the future be able to venture into increasingly deeper water, and distance from markets will become less of a concern as methods of long distance transmission and hydrogen conversion improve.

A study carried out in 1993-95 estimated an offshore wind potential in the European Union of 3,028 TWh.[403] This assumed that the wind resource can be used out to a water depth of 40 meters and up to 30 kilometers from land. It would not require all that many similar offshore areas around the world to match the on-land resource. With an ability to provide over 40,000 TWh every year, wind energy could meet a significant proportion of the electricity requirements of 9 or 10 billion people living in affluence, and be an important although minor player in meeting total energy needs.

If the entire land resource identified by the WEC study were exploited you would have wind turbines dotting a combined area of 1.45 million km2. This is slightly more than the area of Germany, France, Italy and the UK combined. The actual "footprint" occupied by turbines, permanent access roads and other equipment would only be 5 per cent or less of this area, bringing the figure down to around 72,000 km2 or less. This is small compared with the 15 million km2 we currently make available for crops.
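
The footprint figures just quoted follow from simple arithmetic. A minimal sketch, using only the numbers given above (1.45 million km2, a 5 per cent footprint and 15 million km2 of cropland):

```python
# Back-of-envelope check of the wind "footprint" figures quoted above.
wec_area_km2 = 1.45e6          # land area dotted with turbines (WEC estimate)
footprint_fraction = 0.05      # share actually occupied by turbines, roads and equipment
cropland_km2 = 15e6            # land currently made available for crops

footprint_km2 = wec_area_km2 * footprint_fraction
print(f"Footprint: {footprint_km2:,.0f} km2")                          # ~72,500 km2
print(f"Share of cropland area: {footprint_km2 / cropland_km2:.1%}")   # ~0.5%
```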

Wind turbines generally do not compete with other uses when set up on barren land or pasture. There may be a small drop in cattle production because of reduced grass area, and a loss of amenity value if wind turbines ruin a popular piece of scenery. In the case of forest land or developed areas, there would be considerable conflict. Turbines are inefficient when located near trees and buildings because of the wind turbulence created, and clearing trees or buildings for wind farms would not generally be considered the best use of land! In the case of off-shore turbines, the main concerns are sea lanes, restricted military areas, recreational uses and spoiling the view from the beach. Generally speaking, the closer you are to markets for electricity the more likely that wind power will conflict with these other uses. Consequently, wind energy will have the same transmission and storage problems as solar power, if not greater ones.

Waves

Wave energy is a potential resource with a range of technologies at the trial stage. The energy is derived from the winds as they blow across the ocean surface. The extent to which the wind transfers energy depends on its speed and the distance over which it interacts with the water (the fetch). Once created, waves can travel thousands of kilometers with little energy loss.

Because waves continue to collect energy from the wind over a considerable distance with little dissipation, wave energy is significantly more concentrated than solar or wind energy. Waves tend to carry tens of kilowatts of power per meter of crest, compared with hundreds of watts per square meter facing a solar panel or wind turbine - two orders of magnitude greater.[404]

Areas with the greatest wave strength are the coasts of large ocean basins, including the western US, Europe and Australia, and the southern oceans above Antarctica. The power in the wave fronts in these areas generally varies between 30 and 70 kW/m, with some areas averaging around 100 kW/m.[405] For these more favorable areas the World Energy Council estimates the resource to be in excess of 2 TW,[406] while a preliminary evaluation for a review of wave energy published in 1999 indicated a resource of more than 1 TW.[407] The same review estimated that this resource, using the latest designs of wave energy devices, could produce over 2,000 TWh of electricity annually. At this level of output, it could only be a modest contributor to electricity production - around 12 per cent of current output and 2 per cent of the 100,000 TWh required to give 10 billion people the level currently consumed in rich countries.
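
As a rough consistency check on these percentages, the arithmetic runs as follows. The current world electricity output of roughly 17,500 TWh is an assumption inferred from the hydro figures later in this chapter; the 100,000 TWh target is the figure used in the text:

```python
# Rough check of wave energy's potential share of electricity production.
wave_output_twh = 2_000        # annual output from the 1999 review's resource estimate
current_output_twh = 17_500    # approximate current world electricity output (assumed)
target_output_twh = 100_000    # output for 10 billion people at rich-country levels (text figure)

print(f"Share of current output: {wave_output_twh / current_output_twh:.0%}")  # ~11-12%
print(f"Share of target output:  {wave_output_twh / target_output_twh:.0%}")   # ~2%
```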

In recent years there have been important developments in wave technology, particularly with respect to devices that can be used further off-shore in deeper waters, before wave energy is dissipated by the rising seabed and contrary winds from the landmass. As well as producing electricity, these wave energy converters can also be used to desalinate seawater through reverse osmosis, a technology discussed in the previous chapter under desalination.

As with off-shore wind turbines, wave technologies are benefiting from many of the advances in technology and know-how achieved by the offshore oil and gas industry. This is particularly the case with respect to floating mooring systems and sub sea flexible power cables and connectors, pumps and motors.[408] Modern materials and computer technology are also assisting in the development of designs that can react to the changes in sea conditions, and resist the stresses of the marine environment.[409] Advances in remote monitoring should also help.

A range of devices have been developed over the years. However, they are far less mature than wind or solar technology, with most not having gone beyond the trial stage or reached full scale. The technology has to contend with a very corrosive environment and occasional extreme wave conditions that impose huge strains on the equipment.

Most of the devices currently being developed are small units which would be deployed in large arrays. One of the more promising devices is the Pelamis. Named after a sea-snake, although whale-like in size, this device is a series of cylindrical segments connected by hinged joints. As waves run down the length of the device and actuate the joints, hydraulic cylinders incorporated in the joints pump oil to drive hydraulic motors, which in turn drive electrical generators. Power from all the joints is fed down a single umbilical cable to a junction on the seabed. A number of devices can be connected together and linked to shore through a single seabed cable. A full scale prototype Pelamis has recently undergone extensive sea trials in the North Sea, and an order has been placed for three of these units to be located off the north coast of Portugal. The 8 million euro project will have a capacity of 2.25 MW and is expected to meet the average electricity demand of more than 1,500 households. Subject to the satisfactory performance of the first stage, an order for a further 30 machines with a capacity of 20 MW is anticipated.[410]

As well as being more concentrated than wind and solar, wave energy has the advantage of being less variable on an hourly or daily basis, and any variability can be forecast over the time-scales required in the electricity marketplace. As with wind, waves are generally a lot stronger in the winter months. Monthly average energy levels in winter can be three to five times greater than monthly averages in summer. Where peak demand is dominated by winter heating and lighting loads (northern Europe, for example), wave energy has a good seasonal load match.

Hydroelectric Power

In 2004 hydro produced 2,808 TWh of electricity.[411] This was 16 per cent of the total electricity supply and 2.2 per cent of total primary energy. The full potential of the resource has been estimated at 8,100 TWh per year.[412] This means there is room for significant expansion. However, even at the maximum it would only provide 8 per cent of the electricity required to give 10 billion people present rich-country levels of consumption. This means a declining role for hydro in the long term and the creation of a gap that will need to be filled by other resources.
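
The 8 per cent figure follows from the numbers just given. A minimal sketch of the arithmetic, carrying over the 100,000 TWh requirement for 10 billion people used in the wave discussion above:

```python
# Hydro's maximum long-run share of electricity supply.
hydro_2004_twh = 2_808
hydro_share_2004 = 0.16
hydro_potential_twh = 8_100
required_twh = 100_000          # electricity for 10 billion at rich-country levels (text figure)

total_electricity_2004 = hydro_2004_twh / hydro_share_2004
print(f"Total electricity in 2004: {total_electricity_2004:,.0f} TWh")           # ~17,550 TWh
print(f"Maximum future hydro share: {hydro_potential_twh / required_twh:.0%}")   # ~8%
```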

Biomass

Biomass provides around 10 per cent of our energy, with most of it being consumed in poorer countries, often on an unsustainable basis.[413] Types of plant biomass include: perennial crops such as trees, bushes and shrubs; annual crops such as sugarcane, cereal straw and grass; agricultural and forestry residues; and urban waste. This can either be burnt for heating or electricity, or converted into ethanol and used as liquid fuel.

The biomass potential from recoverable and unwanted agricultural and forestry residues has been estimated at around 40 EJ per year, while energy from urban refuse may well be around 6 EJ by 2025.

The crops giving the best annual energy yields are trees and sugar cane. For trees in North America and Europe the yield is over 200 gigajoules per hectare per year. For trees in the tropics, with genetic improvement and fertilizer use, yields range from 100 to 550 gigajoules, with the top end being achieved where water is plentiful. For sugar cane the range is 400 to 500 gigajoules.

If the average figure is 250 gigajoules per hectare, producing the current total commercial primary energy output of around 470 EJ would require 18.8 million km2. This is larger than the present area of cropland and about half the area of permanent pasture. Even with twice the yield we would still require a dauntingly large area which would compete in many cases with other uses.
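
The land requirement quoted above is simply total energy output divided by yield per hectare. A minimal sketch:

```python
# Land area needed to grow today's commercial primary energy as biomass.
primary_energy_ej = 470          # current commercial primary energy (EJ per year)
yield_gj_per_ha = 250            # assumed average biomass yield (GJ per hectare per year)

hectares = primary_energy_ej * 1e9 / yield_gj_per_ha   # 1 EJ = 1e9 GJ
km2 = hectares / 100                                    # 100 hectares per km2
print(f"Area required: {km2 / 1e6:.1f} million km2")           # ~18.8 million km2
print(f"With double the yield: {km2 / 2e6:.1f} million km2")   # ~9.4 million km2
```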

A more realistic prospect is biomass being produced on a few million square kilometers at most. Some of this could be in rotation or in tandem with crop growing and grazing where it would play a soil management role. The rest would be in some of the 42 million km2 of forests and woodlands where it would have to compete with timber production and conservation objectives. A few decades from now this area could produce 10 to 20 per cent of our energy needs. However, as energy consumption increases as the century proceeds, biomass's share would decline accordingly.

Other Possible Resources

There are two other solar based resources which may become significant in the future, although at this stage the technology is experimental. These are the energy from ocean currents and from heat stored in the ocean.

Surface currents are driven by wind while deep ocean currents are driven by density and temperature gradients. A number of technologies are being examined, including arrays looking very much like wind turbines except that they are underwater. Unlike wind, an ocean current is fairly constant and, although slow moving, its much higher energy density ensures a larger resource from a given area.

Ocean thermal energy conversion systems capture some of the solar energy which is transferred to the oceans every day. They do this by exploiting the temperature difference between seawater at different depths. Cold water is pumped from the ocean depths to the surface and energy is extracted from the flow of heat between the cold water and the warm surface water. It is suitable for electricity generation, desalination or a combination of both. Deep equatorial waters are the best because the temperature difference between the warm surface and the cold depths is greatest.

Summing up on Solar

While the resource is large, its position as a potential major or dominant supplier of energy still depends on technological improvements in a number of areas. PV cells, wind turbines and wave generators will have to continue becoming cheaper and capturing more of the energy. The energy loss in long distance power transmission will have to decline so that sun, wind and wave some distance from human habitation or activity can still supply electricity. We will have to improve our ability to use the energy from solar resources to split water and produce hydrogen and at the same time improve our ability to transport, distribute and store this gas. This can then be used at any place or time to power vehicles or generate electricity.

Given the need to interfere with vast areas of land, it is hard to imagine that the squeaky-clean image sun and wind enjoy while a hundred or so TWh are being produced will remain untarnished when production is in the thousands of TWh.

Nuclear Power without the Phobia

Nuclear power presently generates about 16 per cent of the world's electricity, which constitutes about 6.5 per cent of commercially produced primary energy.[414] All the major developed countries except Italy[415] rely to a significant extent on nuclear power, ranging from 79 per cent in the case of France to around 20 per cent in the case of Japan, the UK and the US. It is also important in some of the former Soviet bloc countries: Ukraine, for example, receives 49 per cent of its electricity from nuclear power and Russia 16 per cent.[416] India and China also have some nuclear power.

The industry has its origins in the military programs of the USA and USSR in the 1940s and 50s which produced nuclear weapons and reactors to power naval ships and submarines. The technology is based on the fission process, which produces energy by splitting atoms. The fuel for the process is provided by uranium which is "enriched" to increase the proportion of the fissile isotope uranium‑235.[417]

Presently there are 441 nuclear reactors generating electricity in 31 countries.[418] These come in a number of varieties which are mainly distinguished by their system of transferring heat from the reactor to the power generator. All the reactors in the US and about 90 per cent worldwide are so-called light water reactors.[419] Of these about two thirds use pressurized water while the rest use boiling water. Virtually all the remaining reactors are either a Soviet design using graphite or a Canadian one using heavy water.

After taking off in the 1960s and 1970s, the industry sank into a malaise. This can be attributed both to the increasing competitiveness of coal and gas power and to the emergence of a very unfavorable political climate, marked by considerable public opposition and a switch in government policy from active encouragement to definite discouragement, including in some countries a decision to phase out the industry. This change of attitude received major boosts from the accidents at Three Mile Island in 1979 and Chernobyl in 1986 which highlighted the risks from radiation.

The industry is not entirely moribund. Improved methods have enabled existing plants to increase their total output, and they are generally getting extensions to their licenses beyond their originally expected lifespan. There are presently 27 new plants under construction, including 8 in India, 5 in China and 4 in Russia, while another 38 are planned.[420] A number of countries in Europe are dragging their feet on phase-out plans, particularly in the context of reducing greenhouse gas emissions. The US administration is pursuing plans to encourage new construction during the second decade of the century, a policy that has bi-partisan support. Nevertheless, for the industry to maintain or improve its relative position it would need to undergo a major resurgence.

Nuclear power's competitive position may well improve in the future. Increases in fossil fuel prices could have a considerable impact on competitiveness given that fuel constitutes over half the life-cycle cost of a fossil powered plant. In contrast, prices of nuclear fuel have far less of an effect. While the doubling of uranium ore prices would increase nuclear generating cost by only 5 per cent, the doubling of natural gas prices would increase gas-fired generating costs by some 80 per cent.[421]

Nuclear power would benefit greatly from anything that would reduce construction or capital costs. These typically account for 60 to 75 per cent of total generation costs, compared with 50 per cent for coal plants and 25 per cent or less for gas-fired ones.[422] There are a number of factors that could lead to a reduction in these costs. These include standardized large scale production, new plant designs and more rational safety regulation.

If nuclear power plants were built on a large scale various economies could come into play. First there are the economies that come with experience. Once a few plants have been built and commissioned, the experience gained will reduce the costs of future units.[423] Then there are the economies associated with specialized plant and machinery. Producing a large number of any product or component generally allows the investment in specialized production methods that would be too expensive at low production levels but would reduce costs at a larger scale of output. For example, you would not build a production line to produce a few hundred cars. It would be cheaper to make them 'by hand', i.e., with non-specialized machine tools. It is only when the number reaches a critical level that building a large specialized plant becomes cheaper, seriously cheaper. There are also a range of overhead costs, such as design and administration, that can be spread over a large number of units.

Standardization would also reduce the long delays due to the approval process that in the past have doubled the construction time and greatly increased the interest burden. According to new legislation being adopted in the US and elsewhere, once a standardized design has been certified as safe, all plants built to that design would automatically receive approval. Such prior approval could also be harmonized internationally in much the same way as in the aircraft industry. A power company would then only need to receive approval for the chosen site. However, even this may be unnecessary where the unit is to be built on an existing power plant site. Many sites have room for more reactors.

The new designs being considered for future reactors include various features that could possibly make them cheaper to produce. Many new generation nuclear plants including the Westinghouse AP‑600 and 'pebble bed' would operate on 'passive' safety features which rely on natural forces such as gravity, convection, natural circulation, evaporation and condensation.[424] In the case of the AP‑600, this would mean 35 per cent fewer pumps, 50 per cent fewer valves, 70 per cent less cabling, and 80 per cent less ducting and piping than conventional LWR systems.[425] According to the developers of the pebble bed reactor, their design has no need for an expensive containment shell to prevent the escape of radiation in the case of an accident.[426] These and other new designs are also considered to be more suited to factory production and assembly of modules on site than the old generation of plants.[427]

The competitive position of the industry will be most favorable where transport infrastructure is inadequate or distances from fuel sources are considerable, because nuclear fuel is a fraction of the weight and volume of fossil fuel. This could tip the balance, for example, in India, northern China and western Russia.[428]

Nuclear power may benefit from a move towards a hydrogen economy. Electricity from existing nuclear power plants can be used for the electrolysis of water. Nuclear reactors could also provide the heat for the steam reforming of natural gas, the method that currently produces 95 per cent of hydrogen. Natural gas reacts with water at high temperature to form hydrogen and carbon dioxide. However, this would require a new generation of reactors that have a far higher coolant outlet temperature. An even higher temperature would be required for thermo-chemical water splitting which converts water into hydrogen and oxygen. While this technology is not yet commercially available, a number of steps are being taken in that direction. A pilot project is being planned in Japan. The Americans and the French are also doing development work.[429] Some breeder reactors would achieve temperatures suitable for these processes, as would the pebble bed reactors currently at the trial stage.

The resurgence of nuclear power would require an improvement in the political climate. If nuclear power begins to make economic sense where it did not before, this could undercut opposition and strengthen support. The industry could also benefit from the fact that it does not emit greenhouse gases, although this would depend on the extent to which global warming fears cancel out radiation fears, and on competition from other technologies with the same emission claims, such as solar and wind.

For nuclear power to continue playing an important role in the second half of the century, there will need to be a large construction program. Just to maintain current output would require the replacement of existing capacity in coming decades. To maintain the current 16 per cent share in the face of the six-fold increase in electricity generation that would be required to bring 10 billion people up to current per capita consumption levels of rich countries, there would need to be 2,646 reactors (441 x 6), assuming no change in average output. To produce all of this electricity there would need to be 16,537 of them (2,646/0.16). This is one for every 605,000 people, a somewhat higher density than in present-day France where there is one for every million people.

Nuclear power could conceivably meet all the energy requirements of this population at the current OECD average, through the production of electricity, heat and hydrogen. To provide 10 billion people with annual per capita primary energy at current average rich-country levels, we would need to produce 2,300 EJ (55,000 Mtoe), a five-fold increase. This would require 35,243 reactors, or seven for every two million people.[430]
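
These reactor counts are straightforward scalings of today's fleet. A minimal sketch of the arithmetic; the per-reactor primary-energy figure in the last step is inferred from the current 6.5 per cent share of roughly 470 EJ, so the final number is only approximate and falls a little short of the figure in the text:

```python
# Scaling today's reactor fleet to future electricity and primary energy needs.
reactors_now = 441
electricity_multiple = 6           # six-fold increase in electricity generation
nuclear_share_now = 0.16

same_share = reactors_now * electricity_multiple
all_electricity = same_share / nuclear_share_now
print(f"Reactors to keep a 16% share: {same_share:,.0f}")         # 2,646
print(f"Reactors for all electricity: {all_electricity:,.0f}")    # ~16,540
print(f"People per reactor: {10e9 / all_electricity:,.0f}")       # ~605,000

# Approximate check of the all-energy case (2,300 EJ of primary energy per year).
primary_now_ej = 470                                               # assumed current primary energy
per_reactor_ej = primary_now_ej * 0.065 / reactors_now             # ~0.07 EJ per reactor per year
print(f"Reactors for 2,300 EJ: {2_300 / per_reactor_ej:,.0f}")     # ~33,000 (text: 35,243)
```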

Resources

The current estimate of conventional resources of uranium is 14.4 million tonnes[431] or over 200 years' supply at today's rate of usage of around 65,500 tonnes per year.[432] A third of this is described as known conventional resources and would last almost 70 years at current usage rates while the remaining two thirds are undiscovered conventional resources, based mainly on estimates of uranium that is thought to exist in geologically favorable, yet unexplored areas.[433] This figure is bound to considerably underestimate the ultimate resource. Investment in exploration has been quite low[434] and a number of countries, such as Australia with significant resource potential in sparsely explored areas, have not compiled figures for undiscovered conventional resources.[435] Furthermore, according to Garwin this resource could be stretched by 25 per cent if more costly extraction methods were adopted that leave less of the uranium in the mining waste (tails).[436]
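
A minimal sketch of the years-of-supply arithmetic behind these figures:

```python
# Years of uranium supply at today's usage rate.
resource_tonnes = 14.4e6           # estimated conventional resources
known_fraction = 1 / 3             # share described as known conventional resources
usage_tonnes_per_year = 65_500     # current annual usage

print(f"Total resource: {resource_tonnes / usage_tonnes_per_year:.0f} years")   # ~220 years
known_years = resource_tonnes * known_fraction / usage_tonnes_per_year
print(f"Known resource: {known_years:.0f} years")                               # ~70 years
```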

Thorium is another potential nuclear fuel, although it is not currently used. It is about three times more abundant in nature than uranium.[437] Furthermore, all of the mined thorium is potentially useable, compared with the 0.7 per cent of natural uranium used in existing reactors, so some 40 times the amount of energy per unit mass might be available.[438] The known resource is around 4.5 million tonnes.[439] However, this is bound to be the tip of the iceberg given the limited extent of exploration and the fact that it does not include data from China, central and eastern Europe, and the former Soviet Union. Thorium processing and reactor technology still need a lot of development before they could be commercialized. India, which has more thorium than uranium, is at the forefront of research in this area.

There are also unconventional uranium resources to consider. These include about 22 million tonnes in phosphate deposits.[440] The recovery technology is mature and has been utilized in the past; however, costs are somewhat higher than the present price.[441] Then there are the 4 billion or so tonnes contained in seawater which could possibly become a resource.[442] A number of trials have been performed to extract uranium and other valuable minerals from seawater. They use a special absorbent material and the cost at this stage is estimated to be around $300 per kilogram.[443] (At the time of writing the uranium spot price was $122 per kilogram.) Another plausible method takes advantage of the fact that life forms have a habit of taking up certain elements that are scarce in the nonliving world and concentrating them within their cells. For example, some sea animals accumulate elements like vanadium and iodine to concentrations a thousand or more times as great as in the surrounding sea water. It has been proposed that certain forms of algae could be cultivated to perform this trick with uranium.[444] No doubt seawater extraction would benefit greatly from a few decades of research and development.

When assessing the extent of nuclear fuel resources, it is important to keep in mind the possible adoption of so-called fast breeder reactors which extract around 60 times more energy from each kilogram of uranium. Conventional thermal reactors can only use uranium‑235, which makes up less than one per cent of natural uranium. Fast reactors, however, can harness most of the uranium, which takes the form of uranium‑238. They can also make very effective use of thorium.

There was considerable interest in this technology during the early years of nuclear power when it was thought that uranium would turn out to be scarcer and the industry a lot larger than proved to be the case. Around 20 plants were built in various countries including the US, France, the Soviet Union and Japan.[445] Most of them were eventually closed due to high costs, teething problems including safety issues and declining support for the industry. However, there are now signs of renewed interest. India, China and Russia have reactors planned. Also, the Generation IV International Forum, representing governments from many of the nuclear power countries including US, UK, Japan and France, selected a number of fast breeder reactors to be among the six systems to be the focus for collaborative research and development. The objective is to make advances over existing systems in areas such as economy, safety, proliferation resistance and protection from attack and to have a number of systems available to be deployed by 2030.

So, to what extent could we rely on nuclear power? The current estimated resource of 14.4 million tonnes would only provide about 5 per cent[446] of 21st century energy production, assuming 2 per cent annual growth in energy use and no increase in the energy obtained from each tonne. Furthermore, it would be used up by 2090 if the current share of 6.5 per cent were maintained, or not much later than mid-century if, in a few decades' time, capacity were pushed up to a 20 per cent share.
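
The 5 per cent and 2090 figures can be checked with a simple growth calculation. The sketch below assumes current primary energy of about 470 EJ per year (the figure used in the biomass section) and that the current 65,500 tonnes of uranium per year yield the 6.5 per cent nuclear share; both are approximations, so the results only roughly reproduce the figures in the text:

```python
# How far 14.4 million tonnes of uranium goes over the 21st century.
primary_now_ej = 470            # current primary energy (EJ/yr), assumed
growth = 0.02                   # 2% annual growth in energy use
years = 100
resource_tonnes = 14.4e6
usage_now = 65_500              # tonnes/yr, currently giving a 6.5% share

# Cumulative energy over the century (geometric sum).
century_energy = primary_now_ej * ((1 + growth) ** years - 1) / growth   # ~147,000 EJ
energy_per_tonne = primary_now_ej * 0.065 / usage_now                    # ~0.0005 EJ per tonne
resource_energy = resource_tonnes * energy_per_tonne                     # ~6,700 EJ
print(f"Share of century energy: {resource_energy / century_energy:.0%}")  # ~5%

# Year of exhaustion if the 6.5% share (and hence 2% growth in usage) is maintained.
cumulative, year = 0.0, 0
while cumulative < resource_tonnes:
    cumulative += usage_now * (1 + growth) ** year
    year += 1
print(f"Resource exhausted after about {year} years")   # ~85 years, i.e. around 2090
```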

However, it does not seem too wildly optimistic to envisage nuclear power being able to provide larger shares of this century's energy. Given more exploration and better extraction technologies the recoverable conventional resource could be considerably bigger than present estimates. Moderate increases in the energy harnessed from each tonne of uranium could also make a difference. Of course, the larger the share contemplated the more it would have to rely on the development of new technologies such as breeder or thorium reactors or the extraction of uranium from sea water. With such innovations the resource could become huge and a major provider of energy later this century or in the next.

The Safety of Nuclear Power

Nuclear power is very much under a cloud because of distinctive safety issues relating to its fuel. It is highly radioactive and some of it can be used to make nuclear bombs. This prompts a number of fears: power plants emit small levels of radiation under normal operations and there is always the possibility of a major accident that releases large amounts of highly radioactive material into the environment, as happened at Chernobyl in 1986; spent fuel may leak from its disposal site into the environment at some time in the future; and nuclear fuel may be diverted to terrorists for bomb making. The radiation concern is examined first and then the threat of nuclear terrorism, where the principal hazard is the explosion rather than the radiation.

Radiation associated with nuclear power consists of subatomic particles that shoot through space at very high speeds. It is called ionizing radiation because it can penetrate our bodies and knock electrons off atoms, damaging cells in the process. In this way it is different from harmless forms of radiation such as radio waves.

Nuclear power reactors are not the only source of ionizing radiation. To begin with there is natural radiation to which humans always have been and always will be exposed. In the United States, people on average receive an annual dose of 300 millirems.[447] This includes radiation from radioactive elements in rocks and soil, from within our own bodies and from outer space.

Radioactive elements in rocks and soil are principally potassium, uranium, and thorium.[448] As well as naturally emerging from the ground, these can be released by human activities such as burning coal, oil, gas and wood, and by mining, plowing, construction and well-drilling.[449] It means that brick, stone, and other building materials are slightly radioactive. Some types of building materials contain more radioactivity than others. For example, it has been claimed that Grand Central Station in New York City, which is a massive granite structure, provides commuters with a level of radiation exposure well in excess of what they would receive from visiting a nuclear reactor.[450] The radioactive material residing in our own bodies comes from naturally occurring substances such as potassium-40 which are vital to our survival. These irradiate our organs, including bone marrow, testicles and ovaries. We even irradiate each other at close quarters. This radiation from our bodies delivers an exposure close to a third of what we receive from rocks and soil.[451] From outer space we receive cosmic radiation. Most of it is absorbed by the atmosphere, so we receive a higher than average dose by living at higher altitudes or by flying, mountain climbing or skiing.

As well as natural radiation, another big source of exposure is from medical radiology. This includes x-rays and a whole host of other diagnostic tools. In the US this source accounts for 35 per cent of all radiation exposure and 90 per cent of the total man-made dose.[452] Other sources include TV sets, smoke detectors and airport X-ray machines.

Radiation exposure from all these various sources is fairly low. However, it is still far higher than routine emissions from the nuclear power industry. According to the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), estimated doses from nuclear facilities account for less than 0.05 per cent of the total dose from natural and medical sources.[453]

Even for those living near a reactor the exposure is a tiny fraction of what they receive from other sources. According to Cohen, it is comparable to what a typical viewer receives from a television picture tube.[454]

Radiation prompts two health concerns. At very high doses, of the kind that only the nuclear industry (civil or military) can deliver, it can cause radiation sickness which burns the skin and damages the central nervous system, internal organs and bone marrow. This damage allows rampant infection. Whether victims die or survive depends on the dosage, their health and age, and the quality of medical treatment.

The other health effect is an increased risk of cancer some time in the future, with the risk depending on the dose. In most cases the latency is 20 years or more. The exceptions are some childhood cancers and leukemia that may occur 3-5 years after exposure.[455]

There had in the past been concerns that radiation exposure could have genetic effects that could be passed on to future generations. However, the available evidence suggests that this is not the case. Research has shown that radiation can cause genetic mutation in plants and test animals including fruit flies.[456] However, these have not yet been detected in people. Studies of the children of Hiroshima and Nagasaki atom bomb survivors show no excess of genetic defects.[457] Nor is there any increased incidence in areas of high natural radiation.[458] Radiation is presumably a weak mutagen for humans, just one of thousands of known mutagens in the environment which, combined, result in about 10 per cent of all new-born children showing some evidence of genetic defects.[459] There is no evidence that radiation causes any other illness and this is in keeping with our knowledge of radiation and the causes of illness generally.[460]

The increased likelihood of contracting cancer depends on the level of radiation. The risk is known with a fair degree of certainty at higher levels of exposure. However, at the lower end there is far less certainty and quite a lot of controversy.

Given the extremely large number of people who contract cancer, it is difficult to determine statistically with epidemiological studies the extent to which radiation could be a contributor. In developed countries about half the population will get cancer from one cause or another and about half of those will die from it. This means that thousands of extra cancer cases in a particular population would cause a quite small increase in the rate and would be difficult to attribute to radiation rather than random variation in cancer rates or other factors that make this population different from others.

By the same token, it is difficult to draw conclusions from unusually high or low cancer rates for small groups that have been exposed to higher than average levels of radiation. Small groups can be atypical for a range of reasons that are difficult to take into account.

At this stage our knowledge of how radiation does its nasty work is far too inadequate for us to assess its effect from first principles. More research has to be done into the nature of radiation induced cell damage and how it causes cancer.

In the past there was a general acceptance of what is referred to as the linear no threshold hypothesis (LNTH). This is based on a linear extrapolation from cases where high levels had been experienced and the risk level known with some degree of certainty. Most of the information is provided by studies of cancer incidence among those exposed to radiation from the atom bombs dropped on Hiroshima and Nagasaki. These survivors were exposed to an instantaneous dose of 100s of rems plus subsequent longer-term exposure from fallout. Information has also been gained from the medical records of people subjected to heavy doses of X-rays as a treatment for spinal diseases, a misconceived practice that ceased in the early 1950s.

Based on these studies, scientists have estimated the cancer death risk from a radiation exposure of 100 rems to be 5 per cent.[461] According to the LNTH, this can be extrapolated to much lower doses. So if a 100 rem exposure gives you a 5 per cent risk of developing a fatal cancer, a one rem (1,000 millirem) exposure will give you a 0.05 per cent risk. In other words, halve the dose, halve the risk; double the dose, double the risk.

The LNTH also allows us to talk in terms of collective doses or 'person-rems'. For every 2,000 person-rems, there is one death. This can be made up of an infinite number of combinations of dose and population. For example, one person receiving 2,000 rems, 2,000 people each receiving one rem, or 2 million people each receiving one millirem will all lead to one death. This is unlike most things we are exposed to, where there is generally a threshold below which exposure is harmless. For example, 30 sleeping pills taken at once may be enough to kill an individual; however, that does not mean that if you take one tablet you have a one in thirty chance of dying, or that if 30 people each take one, one of them will die.

The following examples should give a good idea of the kinds of risks implied by the LNTH. The background exposure of 300 millirems per year received by the US population of 300 million people would result in 45,000 deaths per year. If we assume the same level of exposure for a world population of 6.5 billion people, this would result in 975,000 cancer deaths per year. As part of their background exposure, the average American receives about 31 millirems of radiation per year from cosmic rays.[462] This would kill about 4,650 Americans per year. The level of radiation naturally in our bodies is about 39 millirems per year. This translates into about 125,000 deaths annually worldwide.
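
All of these figures follow from the LNTH rule of one death per 2,000 person-rems. A minimal sketch reproducing them:

```python
# LNTH: one cancer death per 2,000 person-rems (i.e. 5% risk per 100 rem).
DEATHS_PER_PERSON_REM = 1 / 2_000

def annual_deaths(dose_rem: float, population: float) -> float:
    """Expected annual deaths for a population each receiving dose_rem per year."""
    return dose_rem * population * DEATHS_PER_PERSON_REM

print(annual_deaths(0.300, 300e6))   # US background, 300 mrem:        45,000
print(annual_deaths(0.300, 6.5e9))   # world at the same level:        975,000
print(annual_deaths(0.031, 300e6))   # US cosmic rays, 31 mrem:        4,650
print(annual_deaths(0.039, 6.5e9))   # internal radiation, 39 mrem:    ~127,000
```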

It did not take long for a general dissatisfaction with the LNTH to emerge. Most scientists think it overstates the risk. In other words, a dose that is, say, 50 per cent lower than another dose will have a more than 50 per cent lower cancer risk. Furthermore, below some level the cancer hazard is zero or so low that it is effectively zero. The general view among scientists is that there is a lack of conclusive evidence of low level radiation effects below total annual exposures of about 5 to 10 rems.[463]

It has been suggested that a threshold exists because, up to a certain level, our body has a capacity to repair a whole range of different kinds of damage. It is only when the attack reaches a certain intensity that the repair systems start to be overwhelmed, and the system is increasingly degraded as the dose increases.

On the other hand, a small number of researchers believe that the LNTH understates the risk for low level radiation. They support the supra-linear hypothesis that more damage is caused per rem at low doses than at high doses. They theorize that perhaps low doses weaken and damage cells (which live on to damage other healthy cells), whereas high doses simply kill cells.[464]

While we need to keep in mind the problems with epidemiological studies, proponents of the prevailing view have a large amount of evidence which at least on the face of it supports their position. This includes the experience of Chernobyl, plant workers, medical patients, Japanese atomic bomb survivors who received relatively light exposure and the effect of differences in natural radiation levels.

In the case of survivors of the Hiroshima and Nagasaki bombings, those who received instantaneous radiation doses of less than 20 rems have not suffered increased cancer rates.[465]

A UN study 14 years after the Chernobyl accident concluded that up to then there had been no increase in deaths from leukemia, even among recovery workers who received fairly high doses of radiation, despite its latency period of only 5-10 years after radiation exposure.[466]

Extensive studies by radiation protection bodies have been unable to detect any sign that workers dealing with radioactive material have cancer mortality rates higher than those for the general population.[467] A study of over 20,000 men who took part in the UK atomic bomb tests in Australia and the Pacific in the 1950s showed no detectable effect on their life expectancy or on the incidence of cancer or other fatal diseases.[468] Another study found that the mortality rate among 30,000 persons exposed to radiation while working with nuclear ship propulsion systems was lower than the mortality rate among a control group of 30,000 persons who received a more normal amount of radiation per year.[469]

There is no sign that regions with higher levels of natural radiation have higher cancer rates. These regions are at a higher altitude and more exposed to cosmic rays, and/or have higher than normal uranium content in their soil. The cancer death rate in seven western states of the US is 15 per cent lower than in the rest of the continental US even though the level of radiation is almost twice as high.[470] In some parts of India and Brazil the natural background is over ten times the world average, due to the presence of radioactive rocks, but the population shows no signs of being affected.[471]

It is even possible that small radiation doses are beneficial. An explanation offered for this is that low radiation stimulates the body's repair mechanisms.[472] Experiments indicate that the irradiation of mice by gamma rays increases their survival time by one week per rem, and that the irradiation of salmon eggs increased the number of viable eggs and the rate of return of the adult fish to their birthplace to breed.[473] There are also statistical studies of human exposure that support this proposition. Twenty years ago in Taiwan, recycled steel accidentally contaminated with radioactive cobalt-60 was used in the construction of more than 180 buildings, which were occupied by about 10,000 people for between 9 and 20 years.[474] With seven cancer deaths, the cancer mortality rate for this population was 3.5 per 100,000 person-years, compared with a rate in the general population of Taiwan over these 20 years of 116 per 100,000 person-years. Assuming that the people concerned were fairly typical of the population at large in terms of factors such as income and age - and this needs to be confirmed - their experience seems to suggest that long-term exposure to radiation, at a dose rate of the order of 5 rem per year, greatly reduces cancer mortality. This is more than 10 times what people presently receive.[475]
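
The Taiwan mortality rate quoted above can be reproduced from the figures given, if we assume for simplicity that all 10,000 occupants were exposed for the full 20 years (an upper bound, since some were exposed for as little as 9 years):

```python
# Cancer mortality rate in the cobalt-60 contaminated Taiwan buildings.
deaths = 7
people = 10_000
years = 20                       # simplifying assumption: everyone exposed for 20 years

person_years = people * years
rate_per_100k = deaths / person_years * 100_000
print(f"{rate_per_100k:.1f} per 100,000 person-years")   # 3.5, versus 116 in the general population
```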

Plant Safety

Normal Operations

During the normal operation of a nuclear power plant, gaseous and liquid discharges containing very low amounts of radioactive material are released into the environment. The extent of these emissions (both absolutely and per unit of generated electricity) has been reduced considerably since the early days of the industry by the use of improved technology, and this is continuing.[476]

Government regulations place limits on these emissions which keep exposure to a minute fraction of the natural background radiation levels that people already experience. In many countries permissible levels are set so that a hypothetical person who stood at the boundary fence, drank the plant's cooling water and consumed food grown nearby would not receive more than some minute increase in their normal exposure level.[477]

In reality, of course, no one experiences even that minute increase. No one lives at the reactor fence, and even if they did they would not experience the maximum exposure because most plants keep their emissions well below this level.[478] The average lifetime exposure for people living in nuclear power regions such as North America and Europe is next to nothing and less than the increased natural radiation exposure from a long plane trip.[479] On an annual basis, people living near a nuclear power plant receive about one millirem of extra radiation exposure.[480]

Studies of populations surrounding nuclear reactors also suggest no health effects. A US survey sponsored by the National Cancer Institute studied cancer deaths in 107 counties with nuclear facilities within or adjacent to their boundaries.[481] Each county was compared to three similar 'control counties.' Their report, published in 1990, found 'no evidence to suggest that the occurrence of leukemia or any other form of cancer was generally higher in the study counties than in the control counties.' Studies in other countries generally supported the NCI's findings, including ones in France and Canada.[482]

Reactor Accidents

A much bigger concern than routine emissions is the threat of reactor accidents that release large amounts of radioactive material into the environment and endanger the health of thousands.

Any serious accident would have to be the result of a string of mishaps. In the first instance there would have to be a breakdown in the cooling system which leads to the melting of the reactor core - a "meltdown". This in turn would have to cause an explosion which spreads radioactive material into the environment.

Fortunately the risk of such accidents occurring is quite low. This is seen most clearly in the truly remarkable safety record for nuclear reactors over the past half century. In the case of the water reactors used in most of the world's nuclear power plants, there have been over 12,000 reactor-years of service without an accident endangering the public. There has also been a similar amount of service by research and marine reactors with an equally unblemished record. The US navy alone has accumulated more than 5,500 reactor-years of operational experience with its nuclear submarines, icebreakers and aircraft carriers.

There has been one serious accident and that was at Chernobyl in Ukraine in 1986, during a misconceived experiment. However, it said more about the state of the Soviet Union in its dying days than about the safety of nuclear reactors. The accident would not have happened but for a string of procedure violations. According to the Soviet investigators, there had been six separate contraventions of procedures by the reactor operators during the experiment. If any one of these had not been committed, the accident would not have happened.[483] There was a strong culture of disregarding safety rules and a complacency encouraged by the fact that past accidents and mishaps were kept secret from those in the industry as well as the public at large. This was reflected in the fact that an electrical engineer with limited knowledge of reactor operations was in charge of the "experiment," and there was no one in the control room who understood the risks they were taking. Furthermore, the Chernobyl reactor was of a Soviet design that was far more vulnerable to criminal negligence and incompetence than the types used elsewhere. The reactor had graphite as a moderator instead of water, so that the loss of water coolant can increase the chain reaction and the resulting heat, whereas with a light water reactor the loss of coolant brings the chain reaction to a halt and limits the heat that can be reached by the reactor core. Also, the reactor did not have the massive containment structure common to most nuclear plants elsewhere in the world. According to some analysts, such a structure would have withstood the steam pressure that caused the explosion.

Generally speaking, the chance of a disaster in a reactor is remote because of a range of emergency safety features and the need for a number of unlikely and unrelated mishaps to coincide. Of particular importance are the backup arrangements for the cooling system which prevents the reactor core from overheating. There are backup pumps and massive flywheels that keep water circulating even if the power is cut. If the main cooling system fails, emergency core-cooling systems which are independent of the primary system come into operation. In some cases this system is a pressurized water tank that does not need pumps but simply dumps large amounts of water on the reactor. It is very unlikely that both the primary and emergency core-cooling systems would fail.

The rods that control the chain reaction also have a number of emergency features. Firstly, there are a number of independent clusters of control rods that can be inserted by gravity into the core to stop the reaction. Any one of the clusters would be enough to achieve this.[484] In the case of a power failure, the control rods are immediately released because they are only held above the reactor by electromagnets.[485]

What happens if there is damage to the reactor core as a result of a failure in the cooling system? In the first instance, radioactive fuel has to escape from the steel pressure vessel into the containment dome and then set off a steam or hydrogen explosion that breaches the dome encasing the reactor. This would have to be quite a powerful explosion given that the dome is made of steel-reinforced concrete about a meter thick.

Experiments suggest that it is not easy to set off a steam explosion. For example, in 1980 scientists at the Sandia Laboratory in New Mexico unsuccessfully attempted to create a large one by dropping molten uranium into water.[486] Nevertheless, the inside of the protective dome is equipped with water spray nozzles or refrigeration systems that will condense the steam and reduce the pressure.[487] Some reactors have large volumes of ice on hand.[488]

The concern about hydrogen stems from the fact that it may be released by a chemical reaction if extremely hot steam comes into contact with the fuel-casing material.[489] However, according to Cohen, the research seems to indicate that even if all the hydrogen that could possibly be generated by core damage were to explode all at once, the force would not be powerful enough to break most containments.[490] In nearly all scenarios the hydrogen would be produced gradually and ignited in a series of small explosions or fires caused by sparks from various sources such as electric motors. This is assisted in some cases by the installation of devices that constantly create sparks. Some reactors have an inert gas in the containment, depriving any hydrogen of the oxygen needed for an explosion.

If the dome can hold out for a few hours from the initial release of radioactive material, a lot of it will either become stuck to the dome walls or equipment, or be removed by various systems in place for that purpose. The latter includes ventilation systems and water sprinklers for removing particles from the air.[491]

As well as the prospect of reactors exploding radioactive material into the air and surrounding landscape, there is also a concern about groundwater contamination if the fuel melts through the thick concrete floor. This has been colorfully dubbed the 'China syndrome'. However, according to Cohen, if molten fuel were to come into contact with groundwater it would flash into steam which would build up sufficient pressure to keep the rest of the groundwater away.[492] Once the fuel cooled it would be in the form of a glassy insoluble mass. If there were a problem, measures could be taken in good time to prevent any ongoing contact.

Various studies suggest that nuclear facilities would withstand external impacts such as World Trade Center style attacks. An aircraft may look quite solid but it is actually fairly light and flimsy. Given the small size of the protective dome, the worst that could happen is a hit by one of the engines. Studies show that spent fuel storage pools can also withstand such attacks, and experimental evidence indicates that dry storage and transport casks would retain their integrity.[493]

Earthquakes are another concern. In the popular imagination there are visions of plants splitting in two or being swallowed up by cracks in the earth. The reality is very different. Like many modern structures, nuclear power plants in or near earthquake prone regions are built to withstand the worst expected earthquakes. They also have equipment that monitors seismic activity constantly, and they would be shut down in the event of an earthquake.[494]

The Three Mile Island (TMI) semi-meltdown is the only serious incident involving a water reactor. However, there was no loss of life, nor any significant radiation emissions that could cause future health problems. Nevertheless, it was still a major mishap because the facility was completely disabled, and some argue that it could easily have turned into something a lot worse. Furthermore, it gained added significance from the fact that it was perceived to be a lot worse than it really was and contributed to a growing opposition to nuclear power.

Investigations revealed that the problems were mainly in the way the plant was operated rather than in the technology.[495] This has generally led to better training, improved controls and instrumentation, and practices such as always having an engineer on duty. By closing off the TMI route to meltdown, these measures have made reactor operations safer.

Serious technological and management failings cannot of course be ruled out. This is exemplified by the recent incident at the Davis-Besse plant in Ohio, US, where the pressure vessel had been badly corroded by the boric acid in the cooling water. This represented both a technological chink in the armor, given that it arose from an unanticipated problem, and a management failure, in that a flawed inspection regime failed to pick it up. The Nuclear Regulatory Commission deemed it to be a serious incident in that it involved a serious loss of defense-in-depth capability. According to the Commission, a worst-case failure scenario would have been a high-pressure leak of slightly radioactive primary cooling water (as steam) into the reactor containment building. The plant operators have replaced local management and spent large amounts of money on repairs and improvements. Other plants have been inspected to check for similar structural degradation but nothing has been found.

What about the increased risk from a growth in the number of reactors, in the event of a resurgence of the industry? What may seem like a low risk when there are only 441 reactors, may be looked at differently when there are thousands of them. This concern should be allayed by the fact that future reactors will be even safer than present ones. There will be greater reliance on safety systems that employ perfectly reliable 'passive' natural forces such as gravity, natural circulation, convection, evaporation, and condensation.[496] In the case of the pebble bed design, each fuel pebble is surrounded with its own outer shell that traps all radioactivity inside and if the helium coolant completely leaked out of the core, the fuel would not get hot enough to melt the uranium oxide within the fuel pebbles.[497]

The next question is: if a reactor blows, how likely is it to lead to some incomparable disaster rather than one of more normal proportions? Certainly, achieving the more disastrous outcomes requires less likely events or combinations of events. There would need to be a failure by emergency workers to stem the emissions. This could be followed by evacuations being blocked by floods or snow storms. The situation could then be made even worse by an atmospheric temperature inversion concentrating radioactive material over a large trapped population downwind of the reactor. While we can conjure up such mega-death scenarios, they should not necessarily influence our actions if they are no more likely than other risks of disaster that are an inevitable part of life and of doing the things we want to do. For example, the government does not prohibit major sporting events because of the minute risk that the stadium will collapse from a construction fault or be hit by a falling Boeing 747.

What can Chernobyl tell us about the possible impact of a nuclear reactor disaster? According to the 2000 report by the UNSCEAR, there were 134 confirmed cases of radiation sickness among reactor and emergency workers who were on the scene at the time of the accident and received very high radiation doses. Of these 29 died within four months of the accident. A further 11 died between 1987 and 1998. The survivors from this group have a range of illnesses and a raised risk of cancer in future years.

Among the 240,000 recovery workers exposed to fairly high doses in the initial cleanup phase, there has to date been no raised cancer rate. This is surprising in the case of leukemia which usually emerges within a few years of high radiation exposure and gives weight to the anti-LNTH position. On the other hand, there is still the prospect of a raised rate for other cancers in the future given that it takes 20 years or more for radiation to have its effect. With the doses received by this group, and assuming LNTH, we can expect to see a thousand or so extra cancer deaths in the future. Among the few hundred thousand recovery workers who arrived more than a year after the accident, the radiation dose was much lower and any increase in cancer deaths would not be high enough to be detected in any epidemiological study.
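
The 'thousand or so' projected deaths can be turned around to give the implied average dose for the 240,000 recovery workers, again using the LNTH coefficient of one death per 2,000 person-rems. This is only a back-of-envelope consistency check, not a figure from the cited studies:

```python
# Implied average dose behind the projection of ~1,000 extra cancer deaths.
projected_deaths = 1_000
workers = 240_000
person_rems = projected_deaths * 2_000          # LNTH: 2,000 person-rems per death
avg_dose_rem = person_rems / workers
print(f"Implied average dose: {avg_dose_rem:.1f} rem per worker")   # ~8 rem
```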

Beyond the reactor site and its immediate surrounds, radioactive contamination mostly affected an area of 150,000 km2 with a population of about five million people. However, the only public health impact of radiation within this area that is in evidence 20 years after the accident is 1,800 mostly treatable cases of thyroid cancer due to childhood exposure to radioactive iodine. The thyroid gland of young children is particularly susceptible to the uptake of radioactive iodine, which has a half-life of 8 days and was a major component of the fission products released from the reactor. Indeed, many of these cases could have been avoided. A low iodine diet made the children more susceptible. Also, the authorities could have been more effective in distributing stable iodine to prevent the uptake of radioiodine by the thyroid and in restricting the consumption of milk and fresh leafy vegetables in the vicinity of Chernobyl. People who were children at the time of the accident continue to have an increased risk of thyroid cancer, especially those who were under five years old. In the case of other cancers, no statistically noticeable rise in death rates is expected in the future because of the low radiation dose received.

As we can see, this is a tale of human tragedy and hardship, but it is not Armageddon. If anything in this sorry business is in line for that title, it is the psychological and medical trauma caused by the gross over-reaction to the accident. Hundreds of thousands of people have had their lives disrupted by being relocated from regions where the radiation level had been raised to levels that were still less than the natural level in many parts of the world. This has led to high rates of unemployment, depression, hypochondria and stress-related illnesses such as heart disease and obesity. There is also a fear of getting cancer that is way out of proportion to what is actually a fairly small risk. Anti-nuclear fear mongering must take some of the blame for this state of affairs.

So, where in the spectrum of nuclear accidents does Chernobyl belong? It was certainly a very bad accident. The roof was blown off and large quantities of radioactive material were scattered over the surrounding region. One can imagine how it could easily have been worse. For example, the reactor could have been in a more heavily populated location, and the weather could have been crueler in where it delivered rain from radiation-filled clouds.

However, these factors would appear to be overshadowed by the range of ways in which it could have turned out a lot better had it not been for circumstances particular to the Soviet Union. To begin with, the reactor was a type that used graphite as a moderator rather than water. This type is not used in the West. Burning graphite proved to be an excellent means of distributing radioactive material into the atmosphere. Emergency workers were poorly protected by western standards and the performance of emergency measures was not always "best practice". Evacuations were delayed because the authorities did not want to admit to a serious accident until absolutely necessary.

However, even if there were a good chance of an accident turning out worse than Chernobyl, it would have to be a lot worse to count as a disaster of unusual proportions. Even accidents worse than Chernobyl occurring on a fairly regular basis would probably compare favorably with the deaths from coal mining and fossil fuel emissions or from motor vehicle accidents. Possibly, the deaths from Chernobyl will turn out to be fewer than what would result from installing millions of wind turbines or solar panels.

Nuclear Waste

A common argument against nuclear power is that there is a "waste problem." It is claimed that we are unable to safely dispose of the radioactive waste created both in mining and uranium processing, and in reactor operations.

Mining and Uranium Processing Waste

The main form of radioactive waste from the 'front end' of the process is the ore body after the removal of the uranium. This is called tailings. There are about 400 tonnes of it for every tonne of uranium[498] and it is 50 to 100 times more voluminous than all other radioactive wastes combined.[499]

Its radioactivity is similar to that of natural uranium; however, it potentially constitutes more of a hazard because it is on the earth's surface rather than in the ground, and because it is in a pulverized form. It generates radon gas and radioactive particles that can get into the air or contaminate streams. However, emissions from uranium tailings would still be only a small fraction of natural emissions from the soil and considerably less than that released by tillage of the soil by farmers.[500]

Tailings emerge from the uranium milling process dissolved or suspended in water. This liquid is pumped into ponds or dams which have to meet certain design specifications to prevent contaminating the ground underneath. The process water is then decanted into a settling pond while the remaining tailings dry out leaving what looks like piles of sand. This is covered over with enough rock, clay and soil to reduce radiation to levels naturally occurring in the region and then a vegetation cover is established.[501]

Various processes are employed to remove chemicals and radioisotopes from the decanted water. These are retained as a sludge that settles on the bottom of the pond. The water evaporates or is released according to stringent rules on radiation and other contaminant levels. The sludge is collected and disposed of when the site is decommissioned.[502]

There are some problems with mine and mill waste from a time when procedures were less strict. These still require efforts to stabilize, protect or relocate the waste.[503] Past mining practices were also a hazard to miners, with high radon exposure leading to a higher incidence of lung cancer. However, this has not been a feature of uranium mining for some decades.[504]

Reactor Waste

The other form of waste results from reactor operations - 'back end' waste. It is produced in much smaller quantities; however, it includes intermediate and high level waste.

In terms of volume, most of it is low level waste that decays fairly quickly, fading to background levels within months or years.[505] This waste includes filters and the radioactive material that they have collected from air and water in the reactor, and things that have been contaminated by contact with radioactive material, for example, gloves, clothing, pipes and valves. Not all of the low-level radioactive waste is from the nuclear industry. A significant proportion is from other users of radioactive material such as hospitals and research laboratories.

Intermediate-level wastes include chemical processing resins, fuel rod casings and metal from spent fuel assemblies,[506] and make up less than 20 per cent of reactor waste by volume.[507]

While only comprising 5 per cent of the total volume, high level waste contributes 95 per cent of the radioactivity. This is the spent fuel from the fission process. The entire US nuclear power industry produces about 2,000 tonnes annually[508] and has produced about 50,000 tonnes since the industry began.[509] This is equivalent in weight to a medium-sized cruise ship.

Storage or Disposal of Waste
Current Arrangements

Presently most high level waste is kept in temporary storage facilities at the plant site. It is placed in specially designed containers and stored in pools of water that keep it cool and shield against its radiation. In countries with fuel reprocessing plants, high level liquid wastes are stored in cooled multiple-walled stainless steel tanks surrounded by reinforced concrete.[510] In both cases temporary storage is designed to do its job for many decades to come. Most intermediate waste is also kept in temporary storage.[511]

Low-level waste is typically stored on-site, either until it has decayed away and can be disposed of as ordinary trash, or until amounts are large enough for shipment to a low-level waste disposal site in special containers.[512] The waste is typically packaged in steel drums and buried in shallow trenches at licensed burial grounds.[513] Some countries use engineered facilities such as concrete lined trenches or vaults and there is some move towards deep disposal.[514]

Permanent Disposal

Temporary storage of high level waste has proven quite effective and is designed to last indefinitely. Furthermore, the waste would be easier to dispose of a few decades down the track, when the heat and radioactivity have dropped to a small fraction of their level when the fuel was first removed from the reactor.

The view that we need permanent and inaccessible storage that requires no action by future generations seems to be based on the following two premises. Firstly, the waste is a burden we should not pass on to future generations. We enjoyed the benefits so we should bear all the costs. Secondly, future generations may regress to Mad Max barbarism or 'advance' to a low tech utopia of 'simple living' and be incapable of dealing with the waste. (By the way, it is easy to imagine how the latter could quickly degenerate into the former.)

The first premise fails to recognize the huge debt that our descendants will owe us for their inheritance of accumulated capital, and technical and scientific knowledge. Looking after some ancestral waste is a small recompense. Furthermore, any burden will be greatly reduced by the onward march of science and technology, which will provide increasingly cheaper methods of storage or disposal.

It is hard to worry about the second premise. If we revert to barbarism or feudalism, radiation exposure in some areas would be a small problem compared with all the other sources of increased death and misery accompanying this new state of affairs. Furthermore, at least in the case of medium and low radiation doses, there would be less impact in a society that has regressed to a life expectancy of 35 years or so. Most people would have died of something else before any increased risk of cancer had had time to kick in. And in the case of high doses, people would soon learn to stay away from the source and incorporate it into their myths and legends.

Also, by not going down the inaccessible permanent disposal route, we would be retaining what may turn out to be a valuable resource if available for use in future reactors that make full use of the uranium and not just uranium‑235. Another argument for continued accessibility which should appeal to the worriers is that it would allow future generations to make disposal super super super safe and not just super super safe:

The ability to monitor and gain access to waste once it is in a permanent disposal site is seen as increasingly important to public acceptance of disposal plans. This would allow future generations to determine whether the site is still safe. Maintaining some access to the site could be useful for two reasons related to public acceptance. First, it would make it easier to correct problems if they arise. Second, it would allow future generations to apply new methods of waste disposal.[515]

Despite the strong case against it, inaccessible permanent disposal is the policy in ascendance. Given this, ocean dumping should be the preferred method because it is cheap as well as safe. The waste would simply have to be converted into an insoluble form and placed in containers designed to last for thousands of years. In the unlikely event of a canister failing, any radiation would be released slowly and be diluted in the ocean where it would be scarcely noticed given that the ocean already contains 4 billion tonnes of uranium and other radioactive elements.[516] Besides, ocean dumping is done by nature all the time. Uranium ore is continually being eroded into rivers and finally discharged into the sea.

Even cases of accidental or rogue dumping indicate that concerns are overblown. Russia has dumped sixteen complete nuclear reactors from old submarines and ships into the Kara Sea north of Siberia. Six of them still contained spent fuel. These were not encased in concrete or carefully buried in the ocean floor. They were just dumped. However, despite this rather insouciant manner of disposal, researchers have been unable to detect any measurable radiation from these reactors anywhere in this fairly small area of water.[517] Over time they will be buried by the silt which is delivered in great quantities by the Yenisey River.

In 1968 a US B52 bomber armed with plutonium-fueled nuclear bombs crashed off the coast of Greenland. The recovery team was only able to retrieve around 90 per cent of the plutonium, with the rest dispersed into the shallow coastal waters. Subsequent research indicated no increase in plutonium concentrations, suggesting that it had been encased by the sediments on the sea floor. Seven nuclear submarines are currently sitting safely on the sea floor. One of them, the Soviet submarine K-8, is feared to have left 20 nuclear mines at the bottom of the Gulf of Naples before sinking under tow in the Bay of Biscay. The Soviet wreck which sank in 1989 did, however, require 'repair' work to prevent radiation leaks. Of course, as things stand ocean disposal is politically impossible because it pushes all the phobia buttons and is now even prohibited under the London Dumping Convention. This leaves geological disposal.

A number of countries have identified potential underground storage sites and have conducted geological and geophysical tests to determine their suitability. These include Belgium, Canada, Finland, France, Germany, Spain, Sweden, Switzerland and the United States. Possibly the first cab off the rank will be America's site at Yucca Mountain in Nevada, although when it will finally open is still unclear. Compared with ocean disposal this method is appallingly expensive, although not prohibitively so, given that it would still be only a couple of per cent of the cost of nuclear power.

The primary safety concern with underground storage is that the waste would eventually be dissolved by ground water and carried by it into wells, rivers, and soil. It could then get into human stomachs through drinking water supplies or through food plants that have taken up contaminated water through their roots.[518] The chance of exposure through inhaling contaminated dust is far less because groundwater only occasionally breaks the surface and 95 per cent of the dust we inhale is filtered out by hairs in the nose, pharynx, trachea, and bronchi and removed by mucous flow.[519] Direct irradiation by radioactive materials in the ground would not be a problem because rock and soil are excellent shielding materials that radiation cannot penetrate.[520] Other concerns relate to possible disturbances such as earthquakes, erosion, volcanic activity, mining and meteor impact.

Geological disposal is based on the strategy of multiple barriers, working from the innermost to the outermost. Firstly, the waste is in a form, possibly glass, that is not readily dissolved. Both archaeological and experimental evidence suggests that glass is extremely resistant to dissolution.[521] Secondly, the waste is sealed in corrosion-resistant containers. In the case of Yucca Mountain, these will have an outer layer of titanium. Tests have shown that this metal would prevent water penetration for thousands of years when immersed in a very hot and abnormally corrosive solution, while under more normal groundwater conditions containers would retain their integrity for hundreds of thousands of years.[522] So the containers alone provide a rather complete protection system even if everything else fails. Thirdly, the containers are surrounded by a backfill of clay that would swell if wet and form a tight seal keeping any water flow away from the package.[523] The clay would also insulate the waste from minor earth movements.[524] The fourth barrier is provided by placing the waste in a suitable geological environment or geosphere. Any waste which overcame the first three barriers would need to encounter conditions that provide no opportunity to travel to the surface in groundwater. This means low rainfall to limit the means of transmission and a poor transport medium such as impervious rock with no fractures. It also means the repository being well above any water table, which in turn would need to be a long way from lower-lying ground where it can come to the surface. The chosen site would need to be in a region that was unlikely to be subject to future volcanic eruptions and it would have to be sufficiently deep so that neither meteorite impact nor surface erosion would expose the waste.

Cohen argues that virtually any deep ground storage would provide all the protection needed in the highly unlikely event that the first three barriers were breached.[525] He points out that even if exposed waste were surrounded by groundwater it would take an extremely long time to reach the surface, if ever. Firstly, groundwater near the waste would take a thousand years or so to emerge because it moves very slowly and travels horizontally, following the rock layers, and hence typically must travel many miles before reaching surface land at a lower altitude. Secondly, the radioactive material would move far more slowly than the groundwater because it would constantly be filtered out by the rock material. Furthermore, it may well become a permanent part of the rock.

Finally, one needs to ask whether it would matter much if radiation from a particular location got into the food or water supply. If it were at a level that caused concern, it would be quickly picked up by routine monitoring programs and fairly simple countermeasures could be taken, such as refraining from growing crops, grazing animals or drinking the water. Furthermore, future advances in medical science will greatly reduce and possibly eliminate the threat posed by radiation. So, at some point radiation exposure may cease to be a health concern.

Transporting Radioactive Waste

The specter of radiation being released from nuclear waste while in transit is made much of by the radiophobes. The record to date, at least in the US, has been incident-free. Over the past 40 years the US industry has moved more than 3,000 shipments without a single radiological release.[526]

Certainly movements will increase considerably once centralized geological storage facilities are brought into operation and/or greater use is made of reprocessing plants. So any risk, if there is one, will increase. However, a serious radiation leak in transit is a very remote possibility. This is ensured by the tight regulations governing the activity, particularly concerning the form the waste must take and the method of containment.

The containers used are subject to tests to assess their ability to deal with a range of accidents or attacks. These include ensuring that they can withstand the effects of a high speed truck or train crash, burning jet fuel and the high pressures of deep water. Even if these containers were somehow breached, contamination would be greatly limited by the fact that the waste takes a solid form. It is unable to leak out like a liquid or a gas. Significant radiation exposure would be limited to people who chose to linger in the immediate vicinity of the accident.[527]

Nuclear Terrorism

There is a concern that having more nuclear power reactors would increase the risk of terrorists getting their hands on the material required to make a nuclear weapon and so causing death and misery on the scale of the Hiroshima and Nagasaki bombs.

Achieving such a result would face a number of hurdles. Firstly, they would need to get together a small group of physicists, engineers, chemists, metallurgists and explosives specialists. These would not have to be experienced nuclear weapons makers but could rely on what is available in the open scientific literature. However, the more relevant their background, the more smoothly the operation would proceed. There would then be the job of obtaining all the required equipment. Some of this would be difficult to acquire and in some cases would likely arouse suspicion.

And finally there is the acquisition of the required nuclear material. Either highly enriched uranium or plutonium would fit the bill. Virtually all nuclear power reactors use only lightly enriched uranium, and as long as the plutonium produced in the fission process remains in the spent fuel, the level and type of radioactivity prevents it from being diverted to bomb making. Plutonium only exists separately where it is awaiting reprocessing into new fuel. This is carried out in Britain, France, India, Japan, and Russia, while the US, at least up until now, has opposed it because of proliferation concerns. Highly enriched uranium and plutonium are only used as a fuel in fast reactors. At the moment there are only a few in operation and a similar number are planned for China, India and Russia. Relatively large amounts of material would be required and any theft is unlikely to go unnoticed. The ensuing massive manhunt would mean that the time between the theft and the detonation would need to be fairly short.

The risk can be reduced in a number of ways. At the political level, the US is presently pursuing a course in the Middle East which should undermine the position of Jihad fascism, the main terrorist threat. At the time of writing, mainstream Islamists have already been brought into a democratic political process in Afghanistan, Iraq and Lebanon, with Egypt not far off. At the same time, the US induced Israeli withdrawal from the West Bank is now an inevitable event just waiting to happen. At the regulatory level, it is a matter of ensuring that there is an adequate reviewing process which will detect any weaknesses in the internationally agreed arrangements for the storage and handling of nuclear material. New technologies can also play a role. For example, there is talk of reactors with tamper-proof fuel which is returned to special facilities for storage, disposal or reprocessing.

Concluding Comments on Nuclear

Nuclear power could play an important although not dominant role in energy production during this century by simply relying on conventional uranium resources and moderate increases in the amount of energy extracted from each tonne of uranium. Playing a major role both in this time frame and in the longer term will depend on the adoption of new technologies such as breeder or thorium powered reactors and seawater extraction of uranium.

Given the health-giving qualities of economic growth and affluence, there is a limit to how much heed we should take of remote risks from nuclear power, if it otherwise makes economic sense.

Geothermal Energy

Beneath a relatively thin cool outer layer, the earth is a furnace with a central core as hot as the surface of the sun. This is due mainly to the initial heat from gravitational collapse, when the earth was formed some 4.5 billion years ago, and to the on-going radioactive decay of potassium, thorium and uranium. The amount of heat beneath our feet is so great that the ability to exploit even a small fraction of it would belie any doubts about our ability to vastly increase the level of energy consumption.

To date, exploitation of the resource has been confined to its hydrothermal and geoexchange forms. The hydrothermal resource is the subterranean store of heated water and steam, and is the more important of the two. It is more readily exploited the closer it is to the surface and the higher the thermal gradient, i.e., the increase in temperature with each unit increase in depth. It is mainly located near where the tectonic plates meet, where considerable volcanic activity places magma at higher than usual levels. The main areas are located in New Zealand, Japan, Indonesia, the Philippines, the western coastal Americas, the central and eastern parts of the Mediterranean, Iceland, the Azores and eastern Africa.[528]

Hydrothermal electricity generating capacity is about 9,000 MWe.[529] This modest contribution is equivalent to 10 to 15 coal or nuclear power plants. Six countries are responsible for over 80 per cent of capacity, with the USA and the Philippines well out in front.[530]

Where the water is above 150°C, steam is created which can be fed directly into a turbine connected to a generator. If the temperature is between 100°C and 150°C, electricity can still be generated using binary plant technology. The steam heats, through a heat exchanger, a secondary working fluid (isobutane, isopentane or ammonia), which vaporizes at a lower temperature than water. The working fluid's vapor turns the turbine and is condensed before being reheated by the geothermal water, allowing it to be vaporized and used again in a closed loop.[531]

Undertaken on a similar scale to electricity generation is direct use of the heat.[532] This is primarily for space heating. For example, in Reykjavik in Iceland pipes carry hot water for tens of kilometers to homes and other buildings. The resource can also be put to a range of industrial uses such as drying food crops, lumber, and bricks, heating fish ponds and greenhouses, and pasteurizing milk.

Geoexchange systems or heat pumps also provide space heating by taking advantage of the fact that the ground immediately under the surface stays at a fairly constant temperature all year round even while the temperature above changes with the seasons. In the temperate regions the ground temperature stays between 10 and 16 degrees Celsius (50 to 60 degrees Fahrenheit). By circulating water or some other fluid through pipes, thermal energy is extracted from the ground during the coldest times of the year and deposited in the ground during the hottest times. Pipes can be buried vertically, if the ground is not too rocky, or, if space permits, horizontally in shallow trenches a couple of meters underground. While the system requires electric power, this is only needed to move the heat rather than produce it. As a result it delivers 3 to 4 times more energy than it consumes.[533] Currently the use of this technology is quite limited. Just over half a million systems have been installed, of which about half are in the US.[534]

While the hydrothermal and geoexchange resources have the potential to grow and continue to be significant sources of energy in some regions, they could never be a major player. If the nether regions are to perform that role, we will need to exploit the much larger sources of heat. At this stage hot dry rock in the earth's crust is technologically within reach. Further down the track we should be able to tap into the pockets of extremely hot molten rock - or magma - which are widespread throughout the earth's crust, and also the area beneath the crust called the mantle, which begins 5 to 10 kilometers beneath the ocean floor and 20 to 70 kilometers beneath the continents.

The approach being developed to exploit hot dry rock involves creating a man-made reservoir by drilling a deep well bore down into high-temperature, low-permeability rock and then forming a large heat exchange system by hydraulic or explosive fracturing. Water is then injected into this original well and retrieved from one or more production wells after circulating through the fractured rock. As with the hydrothermal resource, the hot water or steam can be used to generate electricity or to supply combined heat and power systems.

The technology has been mainly developed at the Hot Dry Rock Test Facility in Fenton Hill, New Mexico. A hot dry rock reservoir was successfully created which generated thermal energy continuously at a rate of about 4 MW in two test phases lasting 112 and 55 days. About 10 per cent of the power produced was consumed by the injection and production pumps.[535]

While trials such as these have proven the concept, a great deal of development would still be required to make it commercially viable on a wide scale. The requirements include: (1) the development of inexpensive high-temperature hard-rock drilling techniques, (2) improvements in three-dimensional rock fracturing, (3) mastery of methods for maintaining low-impedance fluid circulation through the fracture system and (4) improvements in power generation methods appropriate to water at temperatures considerably lower than those in fossil or nuclear powered plants.

Drilling costs account for one third to one half of the total costs of a geothermal project[536] and the cost of reaching between 5 and 10 kilometers has to be reduced significantly for hot dry rock to compete with other energy sources. At the moment costs shoot up dramatically as those depths are approached. Basically, drilling has to be faster and less prone to breakdowns under increasingly hostile conditions. The prospects for improvement seem quite good.

To begin with, the sharp end of the system can be improved in a number of ways. Drill bits can be made of new harder materials that allow them to operate at much higher rotary speeds and weight-on-bit loads. Or a bit that simply tries to grind through hard rock can be replaced by one that shatters it, possibly assisted by applying heat to create thermal stresses. Downhole motors can be developed which apply more power to the bit than the more traditional rotary power transmitted from the surface. Improvement can also follow from basic research into the physical and chemical processes associated with penetrating rock.

The development of so-called smart drilling should also make a big difference. This will involve a high-speed broadband data link to the drill bit, where sensors will report in real time on the conditions around and ahead of the bit and so enable the operator to avoid problems and maximize the drilling rate. Real-time knowledge of drilling conditions such as the strength and composition of the rock will allow appropriate changes to be made in weight on bit and drill speed. Knowledge of the precise location of the drill bit will mean it can be steered around undesirable zones. And information on the state of the entire drilling unit, including wear of tools, the state of other mechanical components and the flow of coolant, would allow timely corrective action. Expected advances in computer science and miniaturization should be able to provide this technology.

The energy content of hot dry rock is huge. It is everywhere under the earth's surface, although more accessible in some places than others because thermal gradients differ.

While the average thermal gradient is around 25°C/km,[537] about 11 per cent of the land area is classified as high grade, with gradients substantially above normal.[538] In these areas rocks hot enough for electric power generation - usually taken to be at least 150°C but preferably higher - can be found at depths of less than 5 kilometers. Lower grade resources would need anywhere up to a depth of 10 kilometers. Mining for direct uses such as space heating can start at much shallower depths.
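
To give a feel for these depths, here is a minimal sketch of the arithmetic. It assumes an average surface temperature of about 15 degrees Celsius (a figure not given in the text) and simply divides the required temperature rise by the gradient; the 60°C/km case is included only to illustrate the best areas.

# Rough depth at which rock reaches 150 degC under different thermal gradients.
# Assumption (not from the text): an average surface temperature of 15 degC.
SURFACE_TEMP_C = 15.0
TARGET_TEMP_C = 150.0

for gradient_c_per_km in (25.0, 40.0, 60.0):
    depth_km = (TARGET_TEMP_C - SURFACE_TEMP_C) / gradient_c_per_km
    print(f"{gradient_c_per_km:.0f} degC/km -> about {depth_km:.1f} km")

# Prints roughly 5.4 km for the 25 degC/km average gradient and 3.4 km for a
# 40 degC/km high-grade gradient, in line with the depths quoted above.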

Armstead and Tester have identified an energy resource of 105 million quads.[539] This is their estimate of the resource in rock with temperatures greater than 85°C, to a depth of 10 kilometers and lying beneath the 100 million square kilometers of land area not covered by ice or mountain ranges.[540] Of this resource, 26.5 million quads are moderate to high grade (a gradient higher than 40°C/km) while 78.5 million quads are low grade.[541] The total is a bit under a quarter of a million times the 2004 level of energy production of 445 quads (or 470 EJ). Current production is the equivalent of the average energy beneath 400 square kilometers, in other words a square with 20 kilometer sides.
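
The comparisons in this paragraph follow directly from the quoted figures; a minimal sketch, using only the 105 million quad resource, the 100 million square kilometers of land and the 445 quads of 2004 production, is given below.

# Back-of-the-envelope check on the size of the hot dry rock resource,
# using only the figures quoted in the text.
RESOURCE_QUADS = 105e6         # Armstead and Tester estimate, to 10 km depth
LAND_AREA_KM2 = 100e6          # land area not covered by ice or mountains
PRODUCTION_2004_QUADS = 445    # world energy production in 2004

multiple = RESOURCE_QUADS / PRODUCTION_2004_QUADS     # about 236,000
density = RESOURCE_QUADS / LAND_AREA_KM2              # about 1.05 quads per km2
area_km2 = PRODUCTION_2004_QUADS / density            # about 424 km2
print(f"{multiple:,.0f} times annual production; one year's output sits under "
      f"{area_km2:.0f} km2, a square about {area_km2 ** 0.5:.0f} km on a side")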

It is important to keep in mind that energy losses would be larger when using geothermal rather than fossil resources. This would be the case both in the direct use of hot water for washing and heating and in the creation of secondary forms of energy such as electricity and transport fuel. In the case of electricity generation, there would be lower thermal efficiency because of the lower temperatures at which the conversion takes place. Until we can easily extract heat at depths greater than 10 kilometers, the temperature will always be far lower than that created by the burning of fossil fuel. In the production of hydrogen as a transport fuel, via electricity production or some other method, the energy loss will always be far greater than that in the conversion of crude oil or gas to the refined fuel.

So, how long would the hot rock last? If we magically switched to 100 per cent reliance on it tomorrow and our energy consumption increased annually by 2 per cent, it would last almost 400 years on the assumption that two quads were required to replace one quad from fossil fuel because of the greater energy losses. If three quads are required, the resource would last 370 years. Employing the two quad assumption, the resource would last over 17,000 years if consumption increased annually by 2 per cent until 2100 (providing a 6.7 fold increase in annual output) and then remained constant. (It is over 11,000 years with three quads.) Using the two quad assumption again, just 1 per cent of the resource would last over 160 years with a 2 per cent growth rate. (It is 140 years with three quads.) This would reduce the average temperature of the rock by 0.5°C, given that a 1°C cooling provides 0.00215 quads of energy from every cubic kilometer.[542]
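
These lifetimes are simple compound-growth sums. The sketch below reproduces the first case - full reliance from tomorrow, 2 per cent annual growth in consumption, and two quads of geothermal heat needed for every quad of fossil fuel replaced; changing the constants reproduces the other cases.

# How long the hot dry rock resource lasts under steadily growing demand.
# All figures are from the text; the replacement ratio reflects the greater
# conversion losses of geothermal heat relative to fossil fuel.
RESOURCE_QUADS = 105e6
FOSSIL_DEMAND_QUADS = 445      # 2004 production
REPLACEMENT_RATIO = 2          # quads of geothermal heat per fossil quad
GROWTH = 0.02                  # annual growth in energy consumption

demand = FOSSIL_DEMAND_QUADS * REPLACEMENT_RATIO
used = 0.0
years = 0
while used + demand <= RESOURCE_QUADS:
    used += demand
    demand *= 1 + GROWTH
    years += 1

print(f"Resource exhausted after about {years} years")   # roughly 390, i.e. almost 400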

The area that would need to be exploited at any time will depend not only on the output but also on the draw-down rate. If, for example, the regions being exploited were cooled annually by an average of one twentieth of a degree Celsius to a depth of 10 kilometers, a total of 413,953 square kilometers would be required to provide 445 quads per year.[543] This is equivalent to a square with 643 kilometer sides and is less than 3 per cent of the area of crop land. (Here no allowance is made for the greater energy conversion losses compared with fossil fuels.)
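
A sketch of the same area calculation, using the 0.00215 quads per cubic kilometer per degree of cooling cited above:

# Land area whose underlying rock, cooled by an average of 0.05 degC per year
# down to 10 km, would supply 445 quads per year.
QUADS_PER_KM3_PER_DEGC = 0.00215
DEPTH_KM = 10
COOLING_PER_YEAR_DEGC = 0.05
ANNUAL_DEMAND_QUADS = 445

yield_per_km2 = QUADS_PER_KM3_PER_DEGC * DEPTH_KM * COOLING_PER_YEAR_DEGC
area_km2 = ANNUAL_DEMAND_QUADS / yield_per_km2
print(f"{area_km2:,.0f} km2, a square about {area_km2 ** 0.5:.0f} km on a side")
# Prints 413,953 km2 and a side of about 643 km, matching the figures above.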

Until drilling to 10 kilometers is a routine and low cost exercise, the exploitation of hot dry rock would be confined to regions where the resource is high-grade. While a large resource, it is not evenly distributed. For example, in the US it is confined to the western regions of the country. Eventually, we will be able to drill below 10 kilometers and tap an even larger and hotter resource. As a result the heat that can be extracted from below a given area of ground will be greatly increased.

Water supply may prove to be a constraint in some areas. While there is a closed loop, with cooled water being re-injected, there is some leakage from the system, necessitating an on-going demand for water. Some of the water is absorbed into micro-pores of the reservoir rock and nooks and crannies at the periphery of the reservoir, although this loss tends to decline with time as these fluid sinks become saturated.[544]

Geothermal energy has advantages over resources such as solar and wind in that it is available anytime without energy wasting storage and the quantity can be adjusted quite quickly to meet changes in demand.[545]

A number of environmental concerns have been raised. Cooling of rock could cause some shrinkage and result in subsidence. However, this tendency would be counteracted by the high pressure water injection. If there is some slight subsidence, we would just need to avoid the small proportion of places where this poses a problem. Cooling of rock and water pressure could both cause seismic shocks. However, these take the form of many scarcely detectable micro-earthquakes. There is no build-up of stress that would cause a major earthquake. Any cooling would have negligible effects on temperatures in the surface regions where roots, burrowing life forms and ground water are to be found, because the heat removal occurs at great depth and rock is a very poor conductor of heat. Releasing waste heat into the atmosphere during power generation is a practice shared with fossil and nuclear power. The main difference is that individual power plants would tend to be considerably smaller. Apart from that, all that needs to be said is that this heat is microscopic compared with the heat of the sun and its variability.

While the surface 'footprint' of hot dry rock facilities would be a tiny proportion of the area exploited, they could still be a significant user of land, given the power generating equipment, wellheads, pipe distribution systems and transmission lines. Whether they would take up more or less land than fossil fueled power is unclear. On the one hand, there is no need for strip mines, gas or oil pipelines, or waste repositories, and heat generation takes place underground and uses no surface space. On the other hand, the fact that geothermal plants are expected to have significantly smaller output will mean various space-consuming diseconomies. For example, there would be more transmission lines, more roads, and a bigger generating plant footprint for a given amount of output. If we eventually move to exploit magma and the mantle, footprints will become less of an issue as each facility will produce far more energy.

Energy Overall

After reviewing the resources on hand, it is clear that there are no insurmountable obstacles to providing a world of 10 billion people with the per capita energy levels that have already been reached in the rich countries. To achieve this by the end of the century would simply require the growth rates in energy production that were fairly normal in the 20th century.

An annual energy growth rate of 2 per cent, which is slightly less than the average rate of the last 30 years, would provide a 6.5-fold increase if sustained for the whole century. If, as expected, energy consumption in the rich countries grows at a slower rate, more of any increase would go to the poorer countries.

If rich countries were to continue increasing energy consumption by 1 per cent per year and their population remained static, while overall energy grew annually by 2 per cent and the population of the poor countries increased by 60 per cent, then by the end of the century rich country per capita consumption would increase from 5.5 to 14 toe and poor country per capita consumption from 1.1 to 6.8 toe. This would bring the poor countries as a whole almost up to present US per capita consumption levels and shrink the disparity between rich and poor countries from five to one down to two to one.
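
These projections are straightforward compound growth. The sketch below reproduces them approximately; the starting populations (about 1 billion people in the rich countries and 5.5 billion in the poor in 2004) are illustrative assumptions not given in the text, which is why the poor-country figure comes out slightly above the 6.8 toe quoted.

# Rough reproduction of the rich/poor per capita energy projections for 2100.
# Population assumptions are illustrative only and not from the text.
YEARS = 96                            # 2004 to 2100
RICH_POP = 1.0e9                      # assumed, static
POOR_POP_START = 5.5e9                # assumed
POOR_POP_END = POOR_POP_START * 1.6   # grows 60 per cent by 2100
RICH_PC, POOR_PC = 5.5, 1.1           # toe per person in 2004, from the text

world_start = RICH_POP * RICH_PC + POOR_POP_START * POOR_PC
world_end = world_start * 1.02 ** YEARS         # total energy grows 2% a year
rich_end = RICH_POP * RICH_PC * 1.01 ** YEARS   # rich consumption grows 1% a year

print(f"World energy rises {world_end / world_start:.1f}-fold")            # about 6.7
print(f"Rich per capita: {rich_end / RICH_POP:.1f} toe")                    # about 14
print(f"Poor per capita: {(world_end - rich_end) / POOR_POP_END:.1f} toe")  # about 7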

With the large resources of coal and gas, fossil energy could well remain a major player into the 22nd century, with CO2 capture if deemed necessary. Sun and wind resources are vast and non-depletable and could provide indefinitely the level of consumption anticipated by 2100, although it would require gathering the resources from quite large areas. With current and easily foreseeable technology nuclear power could play a larger but not dominant role during this century. If innovations such as breeder and thorium powered reactors, and ocean extraction of uranium, prove feasible and economic, nuclear power could provide a growing level of energy for many centuries. Hot rock could become a massive resource once we can drill routinely to 10 kilometer depths. The further we travel into the future, the more we will be able to rely on energy sources that are presently either infeasible or unforeseeable. Fusion energy, solar power from space, and heat from magma and extreme depths are among those in the former category.

Minerals and Other Raw Materials

Some materials are available in such large and readily accessible quantities that there is little argument about them being a limit on growth. This category includes limestone, gypsum, sulfur, nitrogen for fertilizer, clay, sand, gravel and silicon. Other resources have present reserves of sufficient size to sustain reasonably healthy growth rates at least until late this century.[546] These include potassium, phosphate, iron ore and bauxite (for aluminum).[547]

Some minerals make up a significant proportion of the earth's crust, which suggests considerable abundance even after allowing for the fact that ore bodies vary in accessibility and cost of processing.[548] Silicon is the most abundant element after oxygen in the earth's crust. Aluminum makes up 8.2 per cent of the earth's crust while iron makes up 5.6 per cent.[549] While copper and zinc each constitute less than one part in 100,000 of the earth's crust, in both cases this is equivalent to many millions of years of current output.[550]

As mentioned in the last chapter, the potassium and phosphate required for fertilizer are both quite abundant. The estimated total potassium resource is over 8,000 times present annual consumption.[551] The present phosphate reserve would last well into the next century. Further exploration should lead to the discovery of extensive new deposits and new technologies will open up the vast resources that have been identified on the continental shelves and on seamounts in the Atlantic and Pacific Oceans.[552]

While plastic is presently derived from petroleum, its fate is not tied to it. Plastic can be made from coal and plants. It is very much a "natural" or "organic" product. Animal horns, tortoise shells and shellac from insects are all plastics which can be molded when heated.

Some metals with limited reserves raise concerns. These include silver, gold, tin, lead, tungsten and nickel. As with energy resources, it needs to be kept in mind that present reserves are no real indication of what is ultimately available. They may well be just the tip of the iceberg, with future reserves being augmented by further exploration and improved methods of extraction. The oceans and ocean floor are a major longer term source for many metals. These metals often also have a range of substitutes which can reduce our reliance on them. Gold has ample substitutes in jewelry, and in electrical and electronic products alloys and thinner coatings can be used. The need for silver in photography is being significantly reduced by the move to digital technology. Tungsten has a range of substitutes for cutting tools and for lighting.[553] Plenty of other metals can do the jobs done by tin. Technological advance ensures that the scope for substitution increases over time as new materials are developed and old uses for existing materials decline in importance. Phosphate, potash, nitrogen and sulfur for fertilizer have no presently foreseeable substitutes; however, they are all plentiful.

Tidying the Nest - Our Effect on the Environment

There is widespread anxiety about the effect on the natural environment of economic development and increasing population. Our impacts can be put into three broad categories - emitting greenhouse gases such as CO2 into the atmosphere, polluting air and water, and physically destroying the natural environment through measures such as land clearance.

Greenhouse emissions have already been discussed above in the section on fossil fuels. There, we concluded that the issue was clouded by scaremongering, that lack of action to curb emissions in the next two decades would at most lock future generations into a doubling of pre-industrial levels of CO2 in the atmosphere, and that with their increasing wealth and scientific savvy our descendants will be able to adapt to any climate changes. Degradation of the soil required for food production was discussed in chapter two and was judged to be a challenge to be overcome rather than an insurmountable obstacle.

Below we examine the other main areas of concern: first, air and water pollution, and then the destruction of forests and the extinction of many of their plant and animal species.

Air Pollution

Most air pollution comes from combustion by motor vehicles, power plants, domestic fires and various industrial processes. Common pollutants are: particulate matter (PM), which includes smoke and soot; carbon monoxide (CO); sulfur dioxide (SO2); nitrogen oxides (NOx) and lead.

Particulates, SO2 and NOx can cause, or contribute to, lung and cardiovascular disease, and aggravate allergic reactions and asthma. Lead is a dangerous poison, with emissions best known for affecting the intellectual capacity of children. CO is of most concern for people who suffer from cardiovascular disease such as angina.

When it comes to the natural environment, the primary concerns have been acid rain and ozone or smog. Acid rain occurs when NOx and SO2 react with water in the air to create sulfuric and nitric acid. When this precipitates it contributes to the acidification of soils, lakes, and streams. Lower atmosphere ozone or smog is formed by the interaction of sunlight with NOx and other pollutants. This is known to reduce the growth and survivability of plants, and can affect both ecosystems and crop yields.

In developed countries, there have been remarkable reductions in emission levels over recent decades and a corresponding improvement in air quality. In the US, between 1970 and 2004, GDP and vehicle miles traveled almost tripled and energy use increased by almost 50 per cent, yet emissions of the six principal pollutants declined by more than half.[554] SO2 emissions declined by 51 per cent, NOx by 30 per cent, CO by 56 per cent, particulate matter (not originating from precursor gases such as SO2 and NOx) by 80 per cent, volatile organic (i.e., carbon-based) compounds (VOCs) by 55 per cent and lead by almost 99 per cent. Between 1983 and 2002 SO2 concentrations in the air declined by over a half, NO2 by over 20 per cent and CO by 65 per cent.[555] Between 1988 and 2002, concentrations of particulate matter 10 micrometers or smaller in diameter (PM10) fell by 31 per cent.[556] Emissions of pollutants referred to as air toxics declined by 30 per cent in the 1990s.[557] These are pollutants that have been specifically identified because of particular concerns about their effect on health or on the environment. Lead and some of the particulates and VOCs are in this category. Also included are dioxin, asbestos and metals such as cadmium, mercury and chromium. In the European Union in the 1990s, SO2 emissions fell by 60 per cent, NOx by 27 per cent and VOCs other than methane by 29 per cent.[558]

Government regulation has played a big part in these reductions by mandating the adoption of cleaner technologies or measures to capture emissions. Emissions should continue to decline with tighter requirements as technologies improve and old motor vehicles and facilities are phased out. For example, European Union countries plan by 2020 to reduce year 2000 emission levels by the following amounts: SO2 by 82 per cent, NOx by 60 per cent, VOCs by 51 per cent, ammonia by 27 per cent, and primary PM2.5 (particles emitted directly into the air) by 59 per cent.[559]

Air pollution will decline even further in the long term as we develop ever cleaner ways of using fossil fuels and eventually reduce our reliance on these sources of energy.

Water Pollution

The sea, streams, lakes and groundwater are all subject to water pollution. This can be point-source as in the case of emissions from industrial, sewerage and drainage facilities, oil releases from shipping, and leaching from landfill and toxic waste sites; or it can be from a more diffuse source such as runoff and leaching from farms or from air pollution that eventually returns to the earth's surface.

Human health can be affected through the presence of bacteria, viruses and toxic substances in drinking water or the accumulation of heavy metal or other substances in the fish we eat. The living aquatic environment can be degraded through oxygen depletion, heat, the blocking of sunlight and poisoning.

In the developed countries there have been significant reductions in some forms of water pollution. The disposal of untreated sewage into rivers and coastal waters has been greatly reduced or eliminated. Industrial emissions, and landfill and toxic waste sites, have become subject to heavy restrictions. The discharge by industrial facilities of pollutants into waterways is becoming a thing of the past, toxic waste sites are subject to expensive cleanups, and the design standards for landfill and waste storage have tightened considerably to prevent leaching into groundwater.

The cutbacks in air pollution referred to above are reducing the level of water pollution from this source. Less air pollution is precipitating directly onto lakes, or onto land where it can be washed into waterways and lakes, or leached into groundwater.

There has been less success with pollution from farms. This includes pesticides, fertilizer, sediments, pathogens from animal waste and salts. These are picked up by rainfall, snowmelt and irrigation water, and either run off into lakes and waterways or leach into groundwater. The fertilizer can lead to excessive levels of nutrients in streams and coastal regions, causing eutrophication,[560] and pesticides may cause health problems if they reach sufficient levels in drinking water. Sediments can block sunlight to underwater plants and clog the gills of fish. Improvements will come from the adoption of less hazardous pesticides, and farm practices that use chemicals more judiciously and reduce soil erosion.

Another concern is the contamination of urban rainwater by a host of domestic and industrial substances which are picked up before it flows into the drainage system. These include motor oil, pesticides, herbicides, rust, nutrients such as fertilizer, plant matter, organic wastes and motor vehicle cleaners. During heavy storms the drainage system may also receive sewage overflow. Cities vary considerably in the extent to which they treat drainage water and in their ability to cope with heavy storms.

Reductions in pollution have led to some notable signs of improvement. In the middle of the 20th century, the Thames River around London was effectively dead from the combined result of sewage effluent, industrial discharges and thermal pollution from power stations and gas works. There was virtually no oxygen in the summer months and no established fish population in this part of the river. However, by 1974 the situation had improved sufficiently for salmon to begin returning for the first time in 150 years.[561] Their life cycle requires good environmental conditions in the sea, the estuary, the river and its tributaries, and this makes them a good indicator of environmental quality. A lot of other fish have also returned and in recent years seals have been spotted fairly frequently. In 2000 salmon also returned to Europe's Rhine River.[562] While they are not yet independent of human help and stocking exercises, they already reproduce naturally in several tributaries, and the target is to achieve stable wild populations by 2020.

New York Harbor has seen a vast improvement in water quality over the last three decades, due mainly to the cleanup of sewage and drainage.[563] The resulting increase in oxygen levels has led to the return of many fish species as well as their winged predators. Oysters are slowly returning, and there are occasional sightings of dolphins, manatees and sea turtles.

The Great Lakes of North America are also on the mend. While the top predator fish still pose a risk to any wildlife or people eating them, the contaminants responsible have declined dramatically since the 1970s and are still declining in most cases.[564] Between 2002 and 2004 the US federal government spent $1.3 billion on measures to improve water quality, and the Great Lakes Interagency Task Force was established in 2004 to take charge of the on-going cleanup. Measures include increased cleanup of contaminated sediments, action against invasive species and better compliance with air emission targets by regions upwind of the lakes.[565]

The proportion of beaches classified as polluted and unfit for swimming has declined dramatically. For example, in the European Union during the 1990s the figure fell from over 20 per cent to 5 per cent.[566] Harmful toxins in seafood taken from the coastal waters of the US and western Europe are also declining.[567] Oil spills that pollute sea lanes and adjoining coastal regions have fallen sharply. Major spills of more than 700 tonnes averaged 25 a year in the 1970s but only 8 a year in the 1990s. So far in the noughties (2000-05), the average is less than four.[568] For spills over 7 tonnes, the quantity of oil spilt has declined from an annual average of 314,200 tonnes in the 1970s to 27,167 tonnes during 2000-05.

Pollution and Development

In developing countries pollution is reminiscent of what it used to be like in developed countries in earlier times. Drinking water is heavily polluted. Sewage is generally untreated. Air and water emissions from factories and power plants go unchecked. Motor vehicles are old and their exhausts unregulated. Smoke from burning garbage and other rubbish fills the air.

In industrializing countries such as China and India, the rivers are open sewers and repositories of all kinds of factory emissions. China's Yangtze River in 2003 received 35 per cent more wastewater and sewage than 5 years earlier,[569] while the lower reaches of the Huaihe River are considered too toxic even to touch.[570] In the case of the Ganges River in India there is the added problem of poorly cremated human remains.[571]

Not surprisingly there is a serious lack of safe drinking water. In China a third of the rural population is affected,[572] while most large cities do not have drinking water that meets government standards.[573] A recent study showed that only half of that country's river water and 65 per cent of its groundwater was drinkable.[574]

The greater level of air pollution in developing countries is reflected in World Bank estimates of PM10 concentrations in 1999 for cities with populations over 4 million.[575] For developing country cities, the median and average levels are just over three times greater than for their developed country counterparts, while their worst city has a level almost five times greater than the worst in the rich group.

At the present stage, countermeasures are limited and poorly enforced, and tend to be swamped by the growth in polluting activities. So one can expect the situation to get worse before it gets better. As these countries develop they will begin to turn the situation around much as the rich countries have done. There will be the resources available and the political pressure to do so. The process will be assisted by the fact that technologies just keep getting cleaner.

One of the more serious forms of air pollution in poor countries is the indoor variety, which even the early stages of development should remedy. Billions of people - particularly women and children - are exposed to high levels of particulates and CO through the widespread practice of cooking and heating with solid fuels on open fires or traditional stoves.[576] The solution lies in better stoves and ventilation where traditional fuels continue to be used, and the greater adoption of modern energy sources such as electricity, gas and kerosene.

Pollution Scares

As with anything to do with the environment, pollution has its share of alarmism and exaggeration. Some of the more renowned in recent times relate to the effects of oil spills, acid rain and chemicals.

Oil Spills

When Saddam Hussein dumped 6 to 8 million barrels of oil into the Persian Gulf in 1991, it was the biggest oil spill ever.[577] Greenpeace described it as an unprecedented disaster causing vast damage to the local ecosystem. Large scale maritime extinctions were expected and there was a general pessimism about the future recovery of the Gulf. Bahrain's Health Minister claimed that the slick was "the biggest environmental crisis in modern times" and could spell "the potential end of wildlife in this area." However, a study in 1992 showed that the oil had been substantially degraded and the water contained similar levels of oil residue as coastal stretches in the US and UK. By 1995 marine scientists could report that the region was well on the way to biological recovery.

In 1989, a spill in Prince William Sound in Alaska had also been dubbed an eco-catastrophe. While 25 times smaller than the subsequent Gulf spill, it was the biggest ever in American waters. It caused a heavy oiling of 200 miles of coastline and light oiling of another 1,100 miles, and the number of individual animals killed ran into thousands. According to the National Oceanic and Atmospheric Administration, the region is now well along the road to recovery.[578]

Acid Rain

During the 1980s there was much ado about "acid rain" destroying forests. This is rain that is made more acidic than normal by NOx and SO2 emissions reacting with water to form nitric and sulfuric acid. It hit the headlines when a group of German scientists blamed it for severe forest death in Europe in the late 1970s and early 1980s. However, large scale and expensive research in North America and Europe over the next decade came up negative. This included experiments that exposed trees to various concentrations of acid rain. At the same time there was a reassessment of the damage to European forests which had prompted the alarm in the first place. It was determined that the extent of forest death had been exaggerated and that much of it was due to direct smoke pollution from local emissions.[579] And in the late 1990s it was possible to report that the predictions made by many in the 1980s of widespread death of European forests had not eventuated.[580]

Studies did, however, confirm that acid rain was polluting lakes. Damage to buildings and monuments was also confirmed, although it only brought forward required repair work by a number of years. Of course, whatever the effects of acid rain, there are, as discussed above, plenty of other good reasons to reduce NOx and SO2.

Cancer Epidemics

It is claimed that there is a cancer epidemic and that chemicals in our water supply and food are a major cause. It is certainly true that the rates of cancer incidence and deaths have risen considerably in developed countries over the course of the last century. For example, in the US, cancer deaths per 100,000 in 1998 were over three times greater than in 1900 and almost one and a half times greater than in 1950.[581] However, because cancer is predominantly an illness of old age, the figures need to be adjusted for the fact that the population is aging. After making this adjustment, there is only a slight increase in the cancer death rate from 1950 until 1990.[582] And that increase is smoking related: if you also adjust for smoking, cancer mortality actually declined by almost 30 per cent.
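
The age adjustment referred to here is the standard technique of direct age standardization: age-specific rates are weighted by a fixed reference population rather than by the actual, aging population. The following minimal Python sketch uses purely hypothetical numbers (the rates, age bands and population shares are all invented for illustration) to show how a crude death rate can rise while the age-adjusted rate stays flat.

# Minimal sketch of direct age standardization; every figure here is hypothetical.
# Crude rates rise when a population ages even if age-specific risk is unchanged;
# standardizing to a fixed reference population removes that effect.

rates_1950 = {"0-49": 20, "50-69": 300, "70+": 1200}   # deaths per 100,000
rates_2000 = {"0-49": 20, "50-69": 300, "70+": 1200}   # unchanged over time

shares_1950 = {"0-49": 0.80, "50-69": 0.15, "70+": 0.05}   # younger population
shares_2000 = {"0-49": 0.65, "50-69": 0.22, "70+": 0.13}   # older population

reference = shares_1950   # fixed reference population used for both years

def weighted_rate(rates, shares):
    """Deaths per 100,000 implied by age-specific rates and age shares."""
    return sum(rates[age] * shares[age] for age in rates)

crude_1950 = weighted_rate(rates_1950, shares_1950)   # 121
crude_2000 = weighted_rate(rates_2000, shares_2000)   # 235: apparent rise
adjusted_1950 = weighted_rate(rates_1950, reference)  # 121
adjusted_2000 = weighted_rate(rates_2000, reference)  # 121: no real rise
print(crude_1950, crude_2000, adjusted_1950, adjusted_2000)

The smoking adjustment mentioned in the text works on the same principle, removing the effect of a known risk factor before comparing years.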

Lest one think that the decline in mortality is simply due to better cancer treatment, incidence rates for cancer have also been improving. For all types combined, age-adjusted incidence rates have been stable since 1992.[583]

Claims have also been made about rising rates for specific forms of cancer. Breast cancer incidence rates in the US increased by 0.3 per cent per year between 1987 and 2002.[584] However, this is generally attributed to increases in already recognized risk factors. These include women putting off having their first child and having fewer of them, and increased obesity levels.[585] The jump of almost 4 per cent per year between 1980 and 1987 was due largely to increased screening and earlier detection.[586]

There has been an increase in the age-adjusted incidence of prostate cancer. However, this appears to be due to increased and earlier testing. This has meant that the disease is being detected in people who would otherwise never be diagnosed with it because they die of something else before it is picked up.[587] The National Cancer Institute, in a study of childhood cancers in the US, determined that incidence had been fairly stable during the 1970s, 1980s and 1990s except for a one-off jump in the late 1970s and early 1980s which it attributed to improvements in diagnosis.[588] At the same time death rates among children have steadily declined because of better treatment.

Endocrine Disruptors

With cancer fears losing their punch a new phobia took off in the mid 1990s. The chemicals we were using were accused of being 'endocrine disruptors' with all kinds of dire effects. They were acting like hormones and disrupting the endocrine systems of both humans and animals, thus leading to reproductive and other health problems. It really got going with a book called Our Stolen Future which even earned a foreword from Vice President Al Gore.[589]

The evidence was based on reports of reproductive problems among wildlife exposed to chemical emissions, on the side effects of an estrogenic compound dispensed to pregnant women between 1940 and 1970, and on claims of a decline in sperm count and semen volume among men.

In the Great Lakes area there had been relatively high organochlorine contamination of wildlife populations and evidence of reproductive problems among birds and fish. Levels of the relevant pollutants have fallen considerably in recent decades with the banning of their use. This has been accompanied by a decline in the reproductive problems and an increase in wildlife populations.[590]

A story that received a lot of coverage was the case of high chemical exposure among alligators in Lake Apopka, Florida.[591] Among the reproductive problems was decreased penis size. However, it does not seem appropriate to draw any conclusions from this about general risks given that the reptiles had been exposed to particularly high levels of pollution, namely, a major spill from a local chemical plant plus sewage effluent and runoff from agricultural chemicals.

Diethylstilbestrol, a potent estrogenic compound used between 1940 and 1970 to prevent spontaneous abortions, caused reproductive illnesses among the offspring of the women treated. However, the drug was administered at high therapeutic doses, and the resulting exposure to endocrine disruptors was thousands of times higher than any exposure from trace levels of chemicals in our food or water supply.[592]

Hormone disruption by chemicals was also blamed for a purported decline in sperm count and semen volume among men. Naturally this received considerable publicity. According to a 1992 analysis which looked at 61 studies in various locations over the period 1938 to 1990 there had been a 40 per cent drop in sperm count and a 20 per cent drop in semen volume.[593]

However, this study was shown to be unsound because it ignored the fact that sperm count and semen volume vary from one place to another at any given time, and that the change over time also varies, with some places experiencing falls, others rises and others staying the same. So any trend in pooled studies will depend on the places included. And it just so happened that the earlier period of "high" sperm count relies overwhelmingly on a small number of New York studies, a location that has always had an above average sperm count, with the cold winters considered a possible reason.
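
The flaw is a simple composition effect. The toy Python calculation below, using entirely invented numbers, shows how it works: if each location's sperm count is constant but the early studies come mostly from a high-count location and the later ones mostly from elsewhere, the pooled averages show a spurious decline.

import statistics

# Toy illustration of composition bias; every figure here is invented.
# Each location's mean sperm count (millions per ml) is constant over time.
means = {"New York": 100, "Elsewhere": 70}

# Hypothetical mix of study locations in the earlier and later periods
early_studies = ["New York"] * 8 + ["Elsewhere"] * 2
late_studies = ["New York"] * 2 + ["Elsewhere"] * 8

early_avg = statistics.mean(means[loc] for loc in early_studies)  # 94
late_avg = statistics.mean(means[loc] for loc in late_studies)    # 76

# An apparent drop of around 19 per cent, even though nothing changed anywhere
print(early_avg, late_avg, 1 - late_avg / early_avg)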

Furthermore, any drop in sperm or semen levels would at least in part be explained by the significant increase in sexual activity over this period, with or without partners. There is known to be a strong positive correlation between these levels and the period of abstinence from ejaculation.[594] The case is also weakened by the fact that there has been no drop in semen quality among domestic animals over recent decades.[595] Finally, if we were having reproductive woes you would expect male infertility rates to rise. However, this has not been the case. In the US the rate has remained fairly constant over the last 30 years while in the UK it has declined.[596]

Loss of Forests

The state of our forests is a common concern. Forests cover almost 4 billion hectares, or 30 per cent of the ice-free land area.[597] The proportion of forest cover varies considerably from one country or region to another. South America and the Russian Federation are both 50 per cent forest. France, Germany, Poland, Spain and Italy are all near 30 per cent while the UK, Ireland and Holland are much lower at around 10 per cent. For Sub-Saharan Africa and North America it is a third. For Australia and Asia it is around 20 per cent. Even "crowded" India is a quarter covered in forests and woodland. In some countries the proportion is particularly high. For example, Brazil, Sweden, Papua New Guinea, Japan, South Korea, D.R. Congo and Malaysia are at least 60 per cent forest.

Tropical forests are the primary concern. While forests in other regions are increasing in size, tropical forests are shrinking. Furthermore, they have far greater biodiversity. A large proportion of all species are found in tropical rainforests, and in the case of tree species the proportion is particularly high: tropical forests have thousands of them while forests in the temperate and boreal regions have very few. Iceland, for example, has three.

Tropical forests make up about half the forest area[598], of which South America has over 40 per cent, Africa 33 per cent and Indonesia around 5 per cent. The Amazon comprises 53 per cent of tropical rain forests. The loss of tropical forest is in the order of 11 million hectares per year.[599] If that level of loss were to continue until 2050 the current area would be reduced by almost a quarter. At its current level of depletion Africa would lose 30 per cent of its forests by 2050. In South America (i.e., mainly the Amazon) the figure is 22.5 per cent and for Southeast Asia 57 per cent. For Indonesia, which is presently responsible for almost 70 per cent of forest loss in Southeast Asia, a continuation of the current level of depletion would leave very little by mid-century.
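
The "almost a quarter" figure can be checked with some rough arithmetic. The Python sketch below assumes tropical forests are about half of the 4 billion hectares of global forest (roughly 2 billion hectares), that the annual loss stays constant in absolute terms, and that the projection runs from around 2007 to 2050; all three are simplifying assumptions, not figures from the sources cited.

# Rough check of the projection in the text (simplified assumptions, not new data)
tropical_area_ha = 2_000_000_000   # assumed: about half of 4 billion ha of forest
annual_loss_ha = 11_000_000        # annual tropical forest loss cited in the text
years_to_2050 = 2050 - 2007        # assumed starting point for the projection

total_loss_ha = annual_loss_ha * years_to_2050        # about 470 million ha
fraction_lost = total_loss_ha / tropical_area_ha      # about 0.24
print(f"{fraction_lost:.0%} of current tropical forest lost by 2050")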

The situation is different once we move away from the tropics to the forests of North America, Europe and East Asia. In 1920, forest cover in the US was 70 per cent of what it was in 1600. Re-growth since then has brought the figure up to 75 per cent.[600] While there is very little ancient undisturbed forest in Europe, new forests have grown considerably in recent times. Leaving out Russia, which has been static, forests in Europe have grown in area by 7 per cent between 1990 and 2005[601] and by a third since 1950.[602] China claims to have increased forest area by 25 per cent between 1990 and 2005. It is certainly true that as a whole these non-tropical forests do not have the biodiversity they had when we were hunters and gatherers. Nevertheless, we can still conclude that the situation overall in these regions is not getting worse and in some cases is getting better.

The prospects for ending wholesale forest destruction in the tropics depend very much on the prospects for economic, social and political progress over the next 50 years. The children of subsistence farmers need to be brought into the modern economy. Forest-clearing industries such as logging and cattle ranching need to decline in importance and influence. People need to be affluent enough to be concerned about forest conservation; for poor people it can be something of a luxury item. More effective international action is also important. This includes restrictions on international trade and investment in products resulting from excess exploitation of rainforests and greater contributions by rich countries to the cost of conservation.

Mass Extinctions

Many species are threatened by our destruction of their habitat and introduction of exotic competitors and predators. Some people talk of mass extinctions. There are around 1.5 million recorded species and the total number is thought to be between 5 and 10 million.[603] These are a mere fraction of all the species that have ever existed, with most having gone extinct long before humans appeared on the scene.

Since 1600 there have been just over a thousand documented extinctions.[604] These include around 100 each of mammals, birds, fish and insects; a couple of hundred mollusks and almost 400 vascular plants. These are extinctions where the species was sufficiently visible for its absence to be noticed and some time was spent looking for it wherever it may have existed. So we can be quite sure that the real number would have been a lot higher.

The primary concern is with the tropical rainforests which are thought to be home to at least half of all species. The extent of extinctions depends on the level of forest area loss and its relationship to species loss. Some biologists see a very strong relationship. Typical is the WWF biologist Thomas Lovejoy who surmised that the destruction of half the rainforests would lead to the extinction of a third of their species.[605] If rainforests have half the species, that is a species loss of about 16.5 per cent. If we take the total number to be 6 million, that is a loss of almost one million. The average annual figure then depends on how long it takes to reduce the forests to half their current size. For example, 25 years would give an annual figure of around 40,000.
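
The arithmetic behind these figures is straightforward; the sketch below simply restates the assumptions given in the text (half of all species in rainforests, a third of those lost if half the forest goes, 6 million species in total, and an illustrative 25-year halving period).

# Restating the worked example in the text; the inputs are its stated assumptions
total_species = 6_000_000       # assumed total number of species
rainforest_share = 0.5          # rainforests assumed to hold half of all species
loss_if_half_cleared = 1 / 3    # Lovejoy's surmise for losing half the forest

overall_loss_share = rainforest_share * loss_if_half_cleared   # about 0.17
species_lost = total_species * overall_loss_share              # about 1 million

years_to_halve = 25             # illustrative period used in the text
annual_extinctions = species_lost / years_to_halve             # about 40,000
print(overall_loss_share, species_lost, annual_extinctions)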

There are reasons for placing bounds on our pessimism. As discussed above, there is reason to hope that deforestation will slow down in coming decades and that we will still be left with large areas. Also, the relationship between area loss and extinction may not be as strong as is often feared.

The tropical forest loss to date of somewhere around 20 per cent has produced no clear evidence of large numbers of extinctions. The Atlantic Rainforest in South America has been reduced to less than 10 per cent of its original size. Nevertheless, there have been no identified animal or plant extinctions. There are indeed a lot of endangered species; however, there is some optimism about managing the region to avoid or minimize future species loss.[606] Measures include increased levels of protection, identification of key biodiversity areas and linking forest fragments which are otherwise too small for long term species survival with corridors that are being reforested or put to other biodiversity-friendly uses.

The experience of the US is also instructive. The destruction of 99 per cent of its eastern primary forest over the last 200 years led to the known extinction of one forest bird[607], while the stresses placed on the forests of the Pacific Northwest during the post-war period led to no known vertebrate extinctions.[608]

Rowan Martin, a wildlife expert with 35 years' extensive knowledge of the on-land vertebrate population in southern Africa, knows of no extinctions during that period.[609] Given that the region is 3 per cent of the total world land area and that part of it is in the tropics, you would expect it to have well over 3 per cent of all species and something near its share of extinctions. Also worthy of note is the fact that endangered species lists have not been good predictors of extinctions, suggesting a greater than expected resilience.

Documented extinctions worldwide also suggest a lower rate. Fairly reliable data for mammals and birds suggests that their extinction rate has been about one per year in recent times.[610] So if other taxa were to exhibit the same liability to extinction and there were a total of 10 million species, the annual rate of extinction would be just over 700 species per year.[611] This is high but nothing like the figures that often get bandied about. At that rate it would take well over a century to lose one per cent of species.
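
The scaling works as follows; the species counts below are rough round numbers assumed for illustration rather than figures taken from the sources cited.

# Scaling the documented mammal and bird extinction rate to all species
mammal_bird_species = 14_000   # assumed: roughly 5,000 mammals and 9,000+ birds
observed_rate = 1              # about one mammal or bird extinction per year
total_species = 10_000_000     # upper-end total used in the text

implied_rate = observed_rate * total_species / mammal_bird_species  # ~714 per year
years_to_lose_one_percent = 0.01 * total_species / implied_rate     # ~140 years
print(round(implied_rate), round(years_to_lose_one_percent))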

As the human race progresses we will be in a position to become increasingly biodiversity-friendly. We will be able to apply more resources and better know-how to our conservation efforts. Eventually we will be able to positively contribute to biodiversity by creating new species and by spreading life beyond the planet where it can adapt and evolve. And when the next ice age comes, people (or other intelligent beings) who are the product of the present industrial and technological trajectory will be in a position to rescue threatened species. This is a job well beyond people living "close to nature".

We should also keep in mind that human settlement is not necessarily at odds with biodiversity. Our cities are full of flora and fauna. We have birds, small mammals, pets, gardens and parkland. Weeds have no trouble popping through cracks in concrete and asphalt, and if you lift a rock something is sure to crawl out from under it. There is no shortage of flies, moths, spiders, ants, beetles, slugs and worms.

The biosphere's history of resilience is something else to consider. There have been some large scale extinctions in the past and nature has always managed to bounce back. These past episodes seem to suggest that lost diversity is eventually restored after 10 million years or so.[612] This is not a long time in the life of the biosphere. Furthermore, each phase of extinctions has been followed by even greater diversity.[613]


4

CAPITALISM,
THE TEMPORARY TOOL OF PROGRESS

Introduction

As the discussion so far makes clear, the flow of abundance ultimately depends on only one resource ‑ us ‑ our effort and ingenuity. So reducing obstacles to its fullest and most effective use has to be our primary concern. In poor countries this means eliminating obstacles to the full development of capitalism. In the rich countries where this has already been achieved, capitalism itself has become the primary obstacle. There, it is becoming an increasing impediment to economic progress as technology transforms work from a chore into an activity worth doing for its own sake. Under these conditions we need a system where the means of production are collectively owned by those who do the work, an arrangement that will prove both more economically efficient and more congenial - but more on that later after a discussion of the poor countries.

The following are a few of the more notable impediments to capitalist development often found in these countries:

·        their economies are dominated by firms owned by the government or by the cronies of political leaders. These receive preferential treatment and are protected from internal and external competition. Recently many government firms have become crony ones through carefully stage-managed privatization programs;

·        they often have governments that continually verge on bankruptcy, because they borrow for consumption or bad investments and raise insufficient tax revenue. This leads to restricted access to credit for anyone except government and the biggest firms, and the theft of savings through inflation and currency devaluations;

·        business is deterred by burdensome regulation and associated corruption, and by a judicial system that lacks independence and is invariably corrupt and ineffective. Laws often greatly restrict foreign investment; and

·        there is a failure to give sufficient priority to raising the general level of education.

These capitalism-unfriendly conditions will significantly delay affluence for many countries, especially the poorest ones which may have to wait more than a century to substantially overcome their backwardness. For more on growth prospects see the discussion at the beginning of chapter 3.

Among the countries affected there are two groups worthy of special comment. Firstly, there are the countries of the former Soviet Union, such as Russia and the Ukraine, where economic problems are misconstrued as due to too much capitalism rather than not enough of it. Secondly, there are the countries of Sub-Saharan Africa which make up the least developed region of the world. Here extreme internal backwardness and misconceived outside 'help' have created a toxic mix that poisons most economic activity. We will look briefly at the former Soviet Union and then at more length at Africa. After that we will turn to the rich or developed capitalist countries and the prospects for collective ownership.

Soviet Hangover

The collapse of the Soviet Union can be best described as an incomplete process and many of the hangovers from the old regime have much in common with other less developed societies. Most large industries continue to be inefficient, protected monopolies whether state or privately owned. Businesses have very insecure property rights. The law has not kept up with the new forms of ownership and does not provide the basis for legally enforcing contracts and ownership rights. At the same time there is an epidemic of organized crime and corruption which brings rampant theft, fraud and extortion. Indeed, most businesses are thought to pay protection money. Firms face a daunting array of fees and delays for countless licensing and permit requirements. These can often be avoided but only with the payment of a bribe. Organized crime often plays the role of middleman in these situations, facilitating transactions between businessmen and corrupt government officials.

Organized crime and corruption are very much a legacy of the old regime. The underworld had extensive connections to Soviet officialdom and played a critical role in distributing scarce goods and resources (often stolen state property) to those with money and influence. At the same time it was normal for bureaucrats and other state employees to sell favored treatment. Goods arriving at retail stores were set aside for preferred customers who paid extra, and those who controlled the distribution of motor vehicles, housing and the like were often in a position to extract additional payments from consumers. The black market, bribery and favor swapping were an almost daily experience for the average Soviet citizen.

Legal reform in the area of trade practices and property rights, plus deregulation and privatization of the economy, are needed if the economy is to be stirred into action. However, for the moment the ruling elites in the successor countries appear unable or unwilling to carry out these changes.

Africa: More Capitalism Please

The most notable feature of Sub-Saharan Africa is that the ruling cliques who control most of the wealth are nothing remotely like a capitalist class. Their primary aim is the consumption of capital rather than its accumulation. They have found a host of ways of ensuring that funds that ought to have been devoted to economic development are wasted. In this they are reminiscent of the rulers of ancient Rome or Egypt, and nothing like a modern bourgeoisie. Like the ancients that they emulate, they have both domestic and external sources of wealth. Locally they engage in the time-honored practice of screwing the peasant through either heavy taxation or compulsory crop acquisitions at below market prices. From the outside world, instead of tribute from vassals, they receive aid, loans and resource royalties. Funds are then spent on palaces and luxuries, on prestige projects that make no economic sense, or are diverted into Swiss bank accounts and various offshore investments. The term 'kleptocrat' has been coined to describe this class of people.

The wealth looted by some of the rulers has been staggering. People like Mobutu in Zaire, Moi in Kenya, and Babangida and Abacha in Nigeria, all amassed fortunes worth billions of dollars. Typical in extravagance was Mobutu who ruled Zaire from 1965 to 1997. He built about a dozen palaces and even linked some of them with four-lane highways. He also acquired grand estates and chateaus in Belgium, France, the Ivory Coast, and Spain, as well as vineyards in Portugal.[614]

Like the emperors of Rome and Pharaohs of Egypt, they also wasted vast resources on monuments to enhance their prestige and impress the populace. The continent is littered with grand conference halls, new capitals and show airports. President Felix Houphouet-Boigny of the Ivory Coast in the 1980s built a $360 million basilica to match the best in Rome[615], while Nigeria's generals wasted billions of dollars of oil revenue building a brand new capital at Abuja.[616]

Delusions of grandeur have also motivated a lot of what were ostensibly productive investments. These have tended to be big showy symbols of development that make no economic sense under the backward conditions into which they were introduced. They were not accompanied by the necessary development in other areas such as transport and power infrastructure, management and education. Factories were built that never produced anything. Vast amounts of sophisticated agricultural machinery were imported and then abandoned in the fields when the machines broke down.[617]

Then we have the billions of dollars spent every year on the military to provide the wherewithal for competing parasitic elites to fight over the loot. These are often from different tribes or ethnic groups.[618]

These kleptocrats are not totally spendthrift because they also squirrel away billions. However, even these funds have not been put to productive local use but instead exported for investment in the rich world - Swiss bank accounts being a favorite destination. The amount of capital which has fled the continent is staggering. The capital held by Africans overseas could be as much as $700 billion to $800 billion.[619] This exceeds the more than $500 billion in foreign aid pumped into Africa between 1960 and 1997.[620]

The techniques for siphoning off wealth are many and varied. The least imaginative is to pay yourself a horrendously large salary. In the case of Mobutu, his personal budget allocation was more than the Government spent on education, health and all other social services combined.[621] And this does not include the revenue from diamond exports which went directly into his pocket.

Contracts for major projects go not to the lowest bidder but to whoever offers the largest bribe. In one notable case in the late 1980s, a contract for a hydroelectric dam in Kenya costing hundreds of millions of dollars was cancelled and bidding reopened when the winner refused to pay a bribe to a leading crony of President Arap Moi, the ruler of Kenya from 1978 to 2002. The eventual winner paid the bribe but put in a bid much higher than the original one.[622] When kickbacks on public contracts do not supply enough cash, politicians award themselves fake ones.[623]

Padding the cost of projects is another method of achieving the same outcome. For example, the Nangbeto dam project in Togo was costed at around $28 million. However, this was increased to $170 million so that funds could be diverted into the pockets of the ruler and his cronies.[624]

Anything requiring government approval can be the source of bribes. These include permits to do just about everything, and most importantly licenses and concessions for mining natural resources. In Nigeria, Abacha, who ruled until his death in 1998, ensured that no oil deal or decision was made without his approval and that always required a 'fee',[625] while Ghana's Minister of Trade in the 1960s used to charge a commission for import licenses equivalent to 10 per cent of the value of the imports.[626]

Owning businesses can be particularly profitable when you are a ruler or high official. The numerous business concerns of President Arap Moi always managed to win huge government contracts and charge the state exorbitant fees and prices. He also owned a cinema chain with monopoly control over movie distribution in Kenya.[627]

Having privileged access to goods can be a good earner when you sell them on the black market. In Rwanda, President Habyarimana ran lucrative rackets in everything from development aid to marijuana smuggling. He also operated the country's sole illegal foreign exchange bureau in tandem with the central bank. One dollar was worth 100 Rwandan francs in the bank or 150 on the black market. He took dollars from the central bank and exchanged them in the exchange bureau.[628] In Zimbabwe top government officials used their influence to buy trucks and cars at the artificially low official price from the state-owned vehicle assembly company and then sold them on the black market for enormous profits.[629]

Fictitious external debt has reaped vast funds for corrupt officials, with 'repayments' going into their overseas bank accounts. At one time over $4.5 billion of Nigeria's external debt was discovered to be fraudulent.[630]

The economic impact of kleptocrats is not confined to ripping off most of the wealth that should have gone into development. They also make it difficult for everyone else to be productive. Most horrific of all is the destruction from their civil wars which have ravaged much of the continent. In 1999, a fifth of all Africans lived in countries battered by wars, mostly civil ones.[631]

Whatever limited infrastructure - roads, schools, hospitals, power and telecommunications - is not destroyed in wars has often been allowed to deteriorate. Once bribes have been extracted at the construction stage, kleptocrats do not care if the resulting infrastructure falls apart through lack of maintenance. Besides, money for maintenance is money that can go into their pockets. Zaire (now Congo) was a classic in this respect. Agricultural produce intended for market often rotted on the ground because the transportation system had broken down. While the country had 31,000 miles of main roads at independence from Belgium in 1960, by 1980 only 3,700 miles were usable.[632]

Just registering a business in Africa is an ordeal. In a typical developed country it generally takes a day or two and costs a few hundred dollars, but in Africa it involves long delays and high costs. In Congo, for example, it takes about 7 months, costs close to nine times the average annual income per person, and firms must start with a minimum paid-up capital of more than a third of that exorbitant fee.[633] Furthermore, the poor legal framework in which businesses try to operate adds to the cost and unpredictability of running a business. Soldiers and police often feel free to impose their own impromptu forms of 'taxation' and trucks carrying goods are constantly stopped at road blocks by police who help themselves to some of the load. Governments can behave in all sorts of capricious and discriminatory ways. Property might be seized or a firm's license to operate suddenly revoked because the president of the country dislikes the owner's political views or ethnicity; or you might discover one day that you are now in competition with a business run by the ruler or one of his cronies. Contracts are not readily enforceable because the courts are slow, expensive and frequently corrupt. As a result there is a strong tendency for businesses to deal only with people they know.[634]

The extremely low level of private foreign investment in Sub-Saharan Africa is a stark indication of how economically unattractive the region is. It gets about 1-2 per cent of the funds flowing into developing countries, and most of that goes to South Africa.[635] And of course the fact that the kleptocrats do not invest in Africa tells the same story.

Given the current state of Africa, modest progress is perhaps the best we can hope for over the next quarter century. Nevertheless, there are two positive developments particularly worthy of note that may bode well for the future: a number of countries have achieved some degree of democracy and political accountability; and there has been a drop in the extent of civil warfare.

In the 1990s a dozen political leaders were peacefully voted out of office - a previously unheard of method of departure. In 2004, 16 out of the 48 countries in Sub-Saharan Africa had governments that were described as democratic, with elections and a level of civil rights for opponents of the government. In some cases, though, the changes lose some of their shine once you take into account election rigging, the lack of independence of the judiciary and media, and the questionable neutrality of the armed forces.

In the last five years there has been a significant decline in the number of civil wars. The fighting has stopped in Sierra Leone, Angola, Liberia, Burundi, Sudan and Senegal. The war in the Congo is over, although marauders still plague the east of the country, and the one in northern Uganda appears to be spluttering out. These wars have been horrific in terms of deaths, devastation and duration. The death toll in the Congo was 3 million and in Sudan 2 million. The wars in Sudan, Senegal and northern Uganda lasted for two decades, while the one in Angola lasted even longer. This decline in internal conflict has been due to a mix of exhaustion, one side winning and external pressure. Where there is a victor, they are usually no worse than the vanquished and sometimes better. Whether this decline is temporary or long term remains to be seen given the potential for old conflicts to be rekindled and new ones ignited.

On the economic front, there is also the occasional bright sign. For example, a South African company has taken over the debt-ridden and rundown government-owned railway line that runs from Kampala in Uganda to the Kenyan port of Mombasa. They plan to invest $322 million over the next 25 years overhauling the line and its rolling stock.[636]

If these tentative developments prove to be more than a false dawn, they can be attributed in part to the end of the Cold War during which the superpowers propped up those despots that aligned with them or bankrolled equally appalling rebel armies for the same reason.

Any progress will also be assisted by a transformation of the whole aid and lending regime. Historically, it has done nothing to help Africa. Not only have the funds been misused in the ways already discussed, but they have also helped to keep the kleptocrats in power. External funding makes it easier for governments to be unaccountable and to withstand popular opposition. Luckily the World Bank was not around at the time to prop up bankrupts like Charles I of England and Louis XVI of France! When programs fail the usual response has been to attempt to salvage them with injections of even more funds, rather than to do a critical reassessment. In the case of lending this is made possible by the fact that the World Bank and other development banks are not commercial institutions that will go out of business if their loans do not perform. They are simply provided with more funds by rich country governments. Improving the situation has to mean a strong connection between aid and the quality of governance, and a far greater focus on helping agriculture, the main form of economic activity.

Democratic institutions need to be established that are more than a facade - there need to be constitutions, elected governments, civil liberties, the rule of law and the separation of powers. Dismantling some of the machinery of corruption is also important. This includes increased transparency in financial dealings and getting rid of regulations that only exist so that officials can be bribed to ignore them. The less progress in these areas, the less aid.

The cancelling of external debts can be better handled in this context.[637] Regimes that are likely to simply accumulate new and equally unsustainable debts are given less chance to do so. Where they are deprived of all new funds, the extra scope for waste is limited to the debt repayments they no longer have to make and now get to keep.

Agriculture is the predominant sector of Africa's economy and is critical to its development. The sector has to retain the economic surplus it needs to introduce improvements and to move from communal (i.e., feudal or pre-capitalist) to private ownership of land. Africa would also benefit from the US, EU and Japan opening up their markets more to agricultural imports. At present they are heavily protected. With the opening up of trade, it would not take long for some farmers to move into producing the crops and livestock desired by these markets. Furthermore, the elimination of the high protection of processed food could see food-processing industries develop in Africa. A lot of humanitarian aid can also have the added longer-term benefit of assisting agriculture. Efforts to reduce the impact of AIDS, malaria, TB and malnutrition not only reduce death and misery but ensure more people are well enough to perform productive farm labor. The necessary measures include new medical treatments, strengthened healthcare systems, higher yielding and more robust seed varieties and the development of farmer advisory services. Unfortunately, even programs directed specifically at the rural poor are not immune to corruption, as evidenced by the many cases of medicines being diverted onto the black market. And no matter how well run, an assistance program has the problem that it may be freeing up local funds that would otherwise have been used for the purpose and that can now be wasted.

The dire state of Sub-Saharan Africa naturally prompts a 'we must do something' response. At the moment there is a rock-star-led campaign calling for debt forgiveness and large increases in aid.[638] There are also various studies and proposals circulating among rich country governments. While some of this concern is informed by the considerations expressed above, it could also rekindle the tendency to simply throw money around and repeat past disasters.

Capitalism Outgrows Itself

Now let us look at the rich or developed countries. In their case mature capitalism is creating the groundwork for its successor, a far more dynamic and congenial economic system where the means of production are collectively owned by those who do the work. Capitalism has this effect because the machines that result from capital accumulation not only create increasing abundance but also make work less and less like work and more and more like an activity that could be performed for its intrinsic value.

Under these new emerging conditions, we no longer need profit driven capitalists controlling the means of production and forcing us to work for them. And as we will discuss shortly, the combination under collective ownership of self-motivated and highly accountable workers and production for use rather than profit would ensure a far greater rate of economic progress than capitalism.

Our greatest achievement so far in taking the work out of work has been to eliminate a lot of the really hard and dangerous jobs. These include swinging a pick and shovel as they used to do down the mines, and in the construction of buildings, sewers, drains, roads and railways; and also lifting heavy loads in manufacturing and transport.

At the same time there has been a large increase in the proportion of people with professional and managerial jobs. In the US, over 30 per cent of workers belong in this category. This includes teachers, workers in business and financial operations, healthcare professionals and managers, who each comprise between 4 and 5 per cent of the workforce. The expansion of this kind of work is also reflected in the increasing levels of education in the US. There, 29 per cent of people aged 25 to 29 in 2004 had a bachelor's (4 year) or higher degree.[639] In 2003, 38 per cent of 18 to 24 year olds were enrolled in degree granting institutions[640] while 57 per cent of 25 to 29 year olds had completed at least some college.[641] In the same year just over 40 per cent of Americans in their 30s and 40s had been enrolled in a career or job related part-time or short course.[642]

It is true that a lot of routine and menial work still remains. However, it is not hard to envisage much of it disappearing over the next quarter century. Most factory work will vanish once we develop a new generation of robots that can do finicky assembly work. These will need to be better able to distinguish between different kinds of objects and find them wherever they are rather than simply being pre-programmed to pick up something from a particular location. Then they will need the dexterity of the human hand to manipulate and assemble these better understood objects. At the moment robots are mainly confined to fairly simple tasks such as spot welding, spray painting and moving things around.

Likewise, most of the unskilled jobs created with the expansion of the retail and hospitality sectors will go. Virtual shopping will be a big job killer in retailing. Web sites will get better at graphically displaying their wares and become easier to use. Customers will be able to make better choices as they can easily access product and price information from a host of suppliers and third parties. Perhaps shoppers will be able to upload a body scan to on-line clothing shops which can then display a virtual 'you' wearing different garb. This will give you a much better idea of what you will look like. You can ensure the best off the shelf size or even ensure a perfect fit through an alteration service or one-off production. Online grocery orders will be filled at a warehouse rather than a supermarket by shelf picking machines. The boxed groceries will be either picked up at local centers by the consumer or home delivered. Labor can also be reduced in conventional shopping with the addition of in-store computers providing information about products to customers and automated check-outs.

Bar service and coffee making can already be provided without humans. The machinery just needs to become cheaper or the labor more expensive. Coming up with a technology to provide automated table service should not be a daunting challenge. You could place an order directly to the kitchen through some electronic device on the table or your own hand-held computer or mobile phone. The order could even be made before you arrive. Maybe machines on the ceiling would lower the food onto your table. In the case of kitchen workers, they are generally doing work that is just as amenable to automation as tasks performed in manufacturing.

Making an appointment to see a doctor, dentist, physiotherapist, accountant or tax adviser will be done online much like we now make hotel and airline flight bookings, except the website will present you with unfilled timeslots to choose from. When you visit an office the computer at the front desk will validate any necessary ID, announce your presence and provide any necessary directions. When you put in an order to a supplier for components or materials the information will be sent directly to the robot in the warehouse which will select it from the shelves. At the same time you will automatically receive an electronic invoice which requires no handling or filing. In many cases even the decision to put in the order can be left to a computer which monitors stock levels and rate of usage.

Snail mail surprisingly still survives but it must go eventually, and with it will go those responsible for handling and delivering it. The typist is another anachronism who will vanish as executives who cannot type retire and voice to text software improves. Then there is that other dinosaur, the bank clerk, who is there to help the old and confused and will disappear with, if not before, the arrival of electronic money.

The jobs we have mentioned make up about a quarter of the total in the United States, and can be broken down as follows.[643] The clearly menial and readily automated marketing and sales occupations make up around 6 per cent. Waiting, bartending and other food and beverage service occupations are another 5 per cent. Short order and cafeteria cooks plus dishwashers are just under 3 per cent. The less skilled machine operators and process workers whose jobs are the most amenable to automation make up between 4 and 5 per cent. The types of office and administrative support jobs mentioned above are 5 per cent of all jobs and almost 30 per cent of all jobs in that category.
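
Adding these shares together confirms the "about a quarter" figure. In the quick check below, the machine-operator share given as 4 to 5 per cent is taken as 4.5 and the "just under 3 per cent" for cooks and dishwashers is taken as 3; both are rounding assumptions.

# Quick check that the listed categories add to roughly a quarter of all US jobs
shares = {
    "marketing and sales": 6.0,
    "food and beverage service": 5.0,
    "short order and cafeteria cooks plus dishwashers": 3.0,   # "just under 3"
    "less skilled machine operators and process workers": 4.5, # midpoint of 4-5
    "office and administrative support (types mentioned)": 5.0,
}
print(sum(shares.values()))   # 23.5 per cent, i.e. roughly a quarter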

Automation will also impact on more skilled work. However, generally speaking the greater the intellectual content of a job the harder it is to automate and the more likely that at least initially any impact will be confined to the more routine aspects of the task. For example, you still need a surgeon for keyhole surgery but there is less cutting and sewing up.

There is some concern that as the average intellectual content of work increases, a large number of people with less natural ability will be left out in the cold with fewer and fewer jobs that they can perform. This is a rather pessimistic view when we look at what the great previously-unwashed have managed to achieve in recent times and what we can expect in the future. Education levels are a good indicator of the current general achievement. In developed countries school leavers who fail to finish high school are a shrinking minority. In the UK, Finland, Norway, Switzerland and Sweden the figure is less than 10 per cent while it is in the low teens in the US, France and Germany.[644] Just living in a modern industrial society seems to make people smarter as they are confronted by increasingly brain nourishing activities. A few examples will illustrate the point: applying for a job; buying a house; dealing with the healthcare industry; organizing your retirement; cutting through the retail hype to choose a new car, home entertainment system or air conditioner; renovating your house; organizing a holiday on the Internet; trying to figure out how a new electronic appliance works; playing video games; putting in a tax return; and deciding who to vote for. Even routine jobs can be more demanding. For example, they generally require you to read and write, carry out a range of verbal interactions with other human beings and be able to use a whole range of machines and appliances without special training. IQ tests seem to confirm that people are getting smarter.[645] We can also expect improved performance in the future as a lot of the conditions that cause stunted development change for the better. These include lack of family support, peer pressure to be an idiot and an inadequate education system. We will also benefit from an increasing understanding of human development and what causes learning difficulties. And over the longer term we can expect to see artificial improvements through mind-enhancing drugs, genetic engineering (induced evolution) and brain link-ups to computers.

What about the "Communist" Countries?

Any case for collective ownership, of course, has to address the experience of the mislabeled "communist" countries. These are the regimes that used to exist in the former Soviet Union and Eastern Europe and the ones still hanging on in China, Vietnam, Cuba and North Korea. The conclusion generally drawn is that socialism is inherently flawed, and you are bound to end up with economically inefficient police states where the old capitalist exploiters were simply replaced by 'socialist' ones.

However, all that we can really conclude is that it is inherently very difficult or even impossible to successfully establish socialism in societies that are poor and backward, that are more feudal than capitalist. As the experience of other backward countries shows, even getting capitalism off the ground under these circumstances is hard enough, let alone socialism.

The socialist transformation achieved was very limited. True, the capitalists were expropriated. However, the position of workers was as you would expect in the early phases of industrialization. Most work was arduous and repetitive manual labor and the education level and background of typical workers left them ill-equipped for involvement in the mental aspects of production.

Both these factors made for a sharp division of the workforce into a large group who were simply operatives and a minority who did the thinking and deciding. These were the managers, engineers and officials - generally referred to as 'cadres'. With their different role came the need for higher incomes that raised them above the prevailing poverty. To do their job they needed motor vehicles, telephones, and freedom from normal hardships that would hinder their work. Then there were morale boosters such as good food and alcohol, and rejuvenating trips to holiday resorts. On top of that were performance bonuses that widened the gap even further.

It is not hard to see how, under this class structure, the revolution was prone to getting stuck and then diverted along the wrong track. Members of the elite had a vested interest in entrenching their privileged position. They were unlikely to encourage an invasion of their domain as workers became more skilled and educated and industry more mechanized, nor were they likely to willingly take upon themselves a share of the more routine forms of labor.

Once career, wealth and position are the primary impulse, economic results take a second place to empire building, undermining rivals, promoting loyal followers, scamming the system and concealing one's poor performance from superiors. The opportunity for workers to resist these developments was limited by the lack of a democratic culture, a condition inherited from backward pre-revolutionary society. Then there is the culture of subordination which drains away confidence and initiative. This can be very strong even in the absence of political tyranny as we can see in any liberal capitalist society. At the same time, one can imagine that any rank and file worker with special abilities or talents would tend to be more interested in escaping the workers' lot by becoming one of the privileged rather than struggling against them.

Mao Zedong referred to this process, once heavily entrenched and endorsed at the top, as capitalist restoration, and those encouraging it as revisionists and capitalist roaders. The Chinese Cultural Revolution that he led in the late 1960s was the only attempt to beat back this trend. However, the capitalist roaders were able to seize power in China after his death in 1976.

Notwithstanding their real nature, the Soviet regime and its satellites in Eastern Europe had no trouble characterizing themselves as socialist. Socialism was equated with state ownership, and its present task was simply to achieve economic development and provide a certain level of economic security, while such matters as eliminating the division between routine and elite forms of labor were relegated to a far distant, pie-in-the-sky future. In China, where such a regime still survives, keeping up the pretense must be more difficult given that they have scrapped the communes, reintroduced the profit motive in state enterprises and allowed capitalists to set up businesses. They call it 'socialism with Chinese characteristics'.

Economic Calculation without Capitalism

It is generally accepted that efficient resource allocation in a complex modern economy requires decentralized decisions based on present and expected future costs or prices. There is also a general misconception that such a decentralized system of allocation requires market exchanges, and that as a result socialism could not make use of it and would instead have to rely on some cumbersome and grossly inefficient system of centralized allocation.

The first point to make is that socialism will still make use of some markets. In the initial phase, there will be a strong connection between the individual's entitlement to consumer goods and the amount and quality of work performed. In other words society will exchange these goods for work. As the connection loosens it is less of an exchange and more a simple allocation or entitlement. The latter would be denominated in what is better referred to as tokens rather than money because it is no longer facilitating a market exchange but simply putting a limit on the individual's overall entitlement. The movement of goods across political borders will also generally involve a market exchange of exports for imports because collective ownership for most if not all things will end at political borders. Exceptions would be goods owned at an international level, disaster relief and aid from richer to poorer regions. Over time as the level of economic and social development converges and economic integration increases the number of political borders will diminish.

Where you will not find markets under socialism is in the transfer of intermediate inputs between different production units within political borders. These inputs are the raw materials, energy, services, buildings and machinery that go into making the final goods and services purchased by consumers. This transfer does not involve a market exchange because there is no transfer of ownership. Both the supplier and the user of the input have the same owner, the society of producers. However, this does not rule out decentralized pricing and costing, and the use of this information to make decentralized decisions as to what to produce and what inputs to use.

Just as under capitalism, those given the responsibility for making production and supply ordering decisions will act on whatever information or expectations they have about the demand for the products under their charge and the costs of alternative inputs. The process can be described in the following way. Estimates are continually being made of the quantity demanded, at present and into the future, at different prices for all final consumer goods and services. These would be based among other things on past consumer behavior, consumer surveys and demographic predictions. The level of demand then determines the maximum that producers of the consumer goods and services can bid for the intermediate inputs they require in production. Within that constraint they would choose inputs that minimize costs. In turn their input suppliers are estimating the demand for their own output. This will be affected not only by the demand for the final consumer product but also by expectations about changing technologies and substitute inputs. A few examples should make this clear: producers supplying fuel, power or raw materials may find demand is sensitive to changes in the price of alternatives; production units using obsolete and costly technology will only be needed for demand that newer plants cannot meet; and those providing spare parts for a technology that is being phased in will see demand increase accordingly. With demand determined, these suppliers also choose production methods to minimize costs. These then have suppliers who in turn have suppliers and so on down the line. Each has to make similar economic decisions.
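
To make the logic concrete, here is a minimal toy sketch in Python of one link in such a chain. Everything in it - the product, the input names, the demand schedule and the costs - is invented for illustration; it is not a proposal for how the calculations would actually be organized, only a picture of a unit choosing its cheapest input and then planning output against an estimated demand schedule.

# One production unit in the chain described above; all figures are invented.

# Estimated quantity of a consumer good demanded at various notional prices
demand_schedule = {10.0: 120_000, 12.0: 100_000, 14.0: 80_000}

# Alternative intermediate inputs and their current cost per unit of output
input_costs = {"recycled_steel": 6.5, "virgin_steel": 7.2, "aluminium": 8.0}
other_costs_per_unit = 3.0   # assumed labor, energy and overhead per unit

def plan_output(demand_schedule, unit_cost):
    """Pick the lowest price that still covers cost, and the quantity demanded at it."""
    viable = {price: qty for price, qty in demand_schedule.items() if price >= unit_cost}
    if not viable:
        return None, 0
    price = min(viable)
    return price, viable[price]

# Choose the cost-minimizing input, then plan output against estimated demand
cheapest_input = min(input_costs, key=input_costs.get)
unit_cost = input_costs[cheapest_input] + other_costs_per_unit

price, output = plan_output(demand_schedule, unit_cost)
print(cheapest_input, unit_cost, price, output)

# The planned output then becomes a demand signal to the supplier of the chosen
# input, which goes through the same exercise with its own suppliers.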

Because the transfer of intermediate goods and services from producers to users does not represent a market exchange, there is no transfer of money to the producer unit. The revenue (and profit if demand exceeds supply) is simply a book-keeping entry and does not belong to the unit. It is not a fund from which anyone can draw an income. Nor is it "retained earnings" entitling the unit to obtain fresh production inputs. The funds available to the unit for future inputs will be continually adjusted in line with changes in demand and supply conditions. This process is no more complex or mysterious than what happens under capitalism, when new funds are provided, either from elsewhere within the firm or from the capital market, for operations that need to expand, and when management reduces funds or does not seek fresh loans for operations that need to contract.

The price/cost mechanism we have described does not provide perfect outcomes. People's preferences can change at short notice and even if they did not there is a limit to the accuracy of demand predictions. Also what people are willing to pay for a given quantity of a good will depend on the prices of other goods which substitute for or complement it or compete for the consumer's limited budget. These are all uncertain to varying degrees. And the further into the future you are planning production capacity, the more room for error. At the same time costs to producers can be affected in unexpected ways. For example, there can be: unanticipated technological breakthroughs or failures; new production capacity coming on stream earlier or later than expected; and abnormally severe weather conditions. So, output is not what it would be with perfect knowledge and foresight. One consequence of this is that some goods will face insufficient and others excess demand at their cost of production. The former will have to be allocated at a price below cost and the latter, above cost.

Collective Ownership will be More Efficient

Identifying why collective ownership will be more efficient than capitalism is a fertile, if presently abandoned, field of research. So the following taxonomy of reasons is incomplete, and each of the suggested categories needs to be explored in far more detail. However, it is still a reasonable start.

The five categories are: (1) the greater productivity of more motivated workers, (2) the greater accountability of individuals and organizations, (3) the elimination of unemployment, (4) the better flow of information and (5) the elimination of various resource wasting activities associated with wheeling and dealing, and with the activities of government.

Being Motivated

Workers who collectively own the means of production are going to be far keener about what they are doing than employees who "just work here". It will make a difference knowing that their efforts meet the needs of their equals rather than make a few people rich or richer, and will not be frittered away through avoidable inefficiencies. Even the more routine work will seem less irksome under these conditions, especially when it is shared more equitably. Attitudes to innovation will also change. Knowing that surplus labor will be reassigned rather than thrown out onto the street will remove a current disincentive for workers to come up with and support labor saving improvements to production processes.

Supervision, Inspection and Accountability

Even with far greater self-motivation there will still need to be a high level of supervision and accountability. Some people will still be inclined to shirk and there will be the temptation to misuse resources for one's own personal benefit. Some people will attempt to protect or promote a particular project or technology in which they have a lot invested personally and emotionally, even though it has turned out to be uneconomical or otherwise lacking in sufficient merit. Workers will have to be assessed to determine whether they should be appointed to or retained in a particular position. Individuals and organizations will require feedback on how they are doing their job so that they can improve; and the way tasks are performed and organizations function needs to be kept under scrutiny so that they can be continually redesigned.

While capitalism is limited to top-down supervision, socialism is able to also employ the horizontal and bottom-up modes. The horizontal mode refers to workers at the same level mutually assessing each other's work. This category includes individuals or groups redesigning their own jobs to increase efficiency. Under capitalism workers generally have no desire to perform this kind of supervision and would invite hostility if they did, given the antagonistic nature of production relations under capitalism, including the threat to people's livelihood. The bottom-up mode refers to workers assessing those at a higher level. This scarcely happens under capitalism because of the tyrannical powers those at higher levels hold over their subordinates, and because workers have little reason to care.

Top-down supervision will also be more effective than under capitalism. Those in leading positions can expect greater cooperation and less of the passive resistance often found in the present relationship between leaders and the led. Top-down supervision from outside an organization will be more effective as well. This includes better supervision by users, be they other industries or final consumers. Organizations will have no ownership walls to hide behind. There will be no such things as commercial secrecy or confidentiality.

Socialism will also retain the positive features of competition. Different production facilities will have to match each other's price and quality. New entrants with cheaper production methods or a better product should have no trouble receiving approval from a funding agency. And, ultimately, if nobody wants to use a product because there are better or cheaper alternatives, production will cease and resources will be reassigned. In the case of large one-off products such as a production process or construction project, tenders might be invited if there are alternative providers. There would also be design competitions for major projects. Funding earmarked for research challenges, such as a cure for a new disease threat or a better way to turn water into hydrogen, might be allocated to a number of separate organizations. This would help keep people on their toes; and having different approaches to a problem increases the likelihood of success. Even where organizations are not competing, their performance can be compared - e.g., providers of services in different locations will be expected to learn from the industry leaders.

Eliminating Unemployment

Socialism will not have the unemployment that is endemic to capitalism. Its main causes under capitalism are: lack of market demand for output, mismatch between the supply of and demand for different types of labor, welfare disincentives and the battle over income share between labor and capital.

Under socialism unified ownership can always ensure sufficient effective demand for the output of a fully employed economy, while a capitalist government is mainly confined to the very blunt instruments of fiscal and monetary policy; and with these it has to rely on economic information that is skimpy, inaccurate and out of date.[646] This limited power of economic policy is sometimes described by economists as pushing on a string. At the same time a socialist government does not have separately owned companies acting at cross purposes to each other. In a capitalist economy this appears to be a major problem in times of crisis or economic downturn when companies cut back on purchases and call in their debts as they try to shore up their financial position.[647]

Where a change in output or technology leaves some workers with skills that are in low demand, there would be far greater commitment to the various adjustment measures required. Leaving people to rot is acceptable under capitalism but not under collective ownership. Measures include retraining allowances where necessary and wage subsidies or lower wages while learning on the job. At the same time there will be greater ability and willingness to learn new skills, helped along by the elimination of the narrow division of labor which relegates many workers to restricted functions defined by others. The dole as we know it in most developed capitalist countries will be abolished and individuals will not be left to sink into an unemployable torpor.

People who for whatever reason cannot keep up with expected education and skill levels can have their wages subsidized so that more primitive technologies that require their less skilled labor can be retained or re-adopted.[648] Where the subsidy is less than 100 per cent of the wage, there would still be a net gain to society from not leaving their labor idle. Beyond that point it is pretend work, which still may be appropriate in extreme cases. This should be a declining problem as learning ability improves with each generation.

The struggle between labor and capital over wage share can affect unemployment in a number of ways. On the one hand, governments will adopt economic policies aimed at slowing economic growth to prevent threats to profits from wage increases that generally occur in a tight labor market. On the other hand, organized labor may manage to push wages above market clearing levels through industrial action and political support for minimum wage laws, trading off some unemployment for increased income for workers as a whole. Once there are no capitalists, there will be no struggle over "share". Wages will equal the full social product; only then are taxes and levies deducted for investment and social spending such as pensions. And these deductions will be the result of a political process that has general support.

Better Information Flow

Under socialism the information available to economic decision-makers will be far better than it is at the moment. This includes information about investment and production decisions, scientific and technical knowledge, and cost and price data.

Under capitalism vast amounts of knowledge and information are subject to secrecy or deception. Firms try to conceal whatever they can about their investment plans from competitors and this can contribute to under or over investment by the industry as a whole. Product designs are commercial secrets, and experience gained in production, such as overcoming difficulties or improving methods, is not openly shared. At the same time customers will often be denied information relevant to choosing the product that best meets their requirements.

Price and cost information can only help producers make efficient decisions to the extent that it is accurate and available. For example, choosing the lowest cost method of production requires accurate cost information about alternatives. However, under capitalism prices are distorted and cost information obscured in various ways. Below is a list of the more important ones.

Monopoly pricing Firms are always keen to overcharge if their market position allows them to. This includes both long term market power resulting from having a dominant position in an industry and also the temporary market power that comes with being first in with a new product or process. They will do whatever they can to create and maintain these conditions.

External costs Capitalist firms, unless required to by regulation, fail to consider 'external' costs that are not covered in market exchanges. The primary example is the cost of pollution and other forms of environmental degradation. Present attempts by governments from outside the market to rectify this problem are costly and give results that are far from optimal. This is particularly the case where government policy is captured by the 'environment industry' which is more than happy to place obstacles in the way of economic progress. Socialism could make use of far cheaper and more effective self-regulation based on a determination to do the right thing from the point of view of society and the absence of any benefits to anyone from doing otherwise. There would also be greater transparency to outside supervision.

Overheads or fixed costs A major problem is that posed by the unwillingness of economic players to reveal the value they place on a good in the presence of overheads or fixed costs. A high proportion of costs come into this category. These are incurred regardless of the actual level of output, and include designing the product and production process, computer software development, factory lighting and heating, and security.

If firms try to cover these in a uniform unit price, output will be less than optimal. This is because there are customers who will not purchase at this price but still value the product sufficiently to pay at least the extra (i.e., marginal or incremental) costs that would be incurred in providing them with it. In other words the benefits of provision match or outweigh the cost. These costs include raw materials and in some cases wear and tear on machinery.

To avoid the problem there needs to be a system of variable pricing, so that those who place the highest value on the good pay a higher contribution towards fixed costs than those with a lower valuation. (This is referred to in the economic literature as 'price discrimination'.) Producers under capitalism can usually obtain some idea of the difference in valuations from whatever they know about the intensity of demand for their customers' own products and the extent to which they can substitute some other input. However, this information will be of much poorer quality than what they would receive if customers were forthcoming with their own assessments, and a capitalist enterprise is not going to volunteer such information if that would lead to it being charged more. In a socialist economy, by contrast, those responsible for putting in a bid or valuation would be guided by overall economic efficiency rather than the profit of the particular enterprise or plant using the product. Also under socialism, a price discrimination regime would not be undermined by either competitors or low-price recipients offering the product at a lower price to those users being charged the higher price.
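
To make the arithmetic concrete, here is a minimal sketch in Python using made-up figures; the fixed cost, marginal cost and customer valuations are all hypothetical.

fixed_cost = 60.0      # hypothetical design, software, lighting and security costs
marginal_cost = 2.0    # hypothetical raw materials and wear and tear per extra unit
valuations = [50.0, 30.0, 5.0]   # what each of three customers would pay for one unit

# Total benefit of serving everyone who values the good above marginal cost.
total_surplus = sum(v - marginal_cost for v in valuations if v > marginal_cost)
print(total_surplus)   # 79.0, which exceeds the fixed cost of 60, so producing for all three is worthwhile

# The best any single uniform price can contribute towards the fixed cost
# (only prices equal to one of the valuations need be considered).
best_uniform = max(
    sum(p - marginal_cost for v in valuations if v >= p) for p in valuations
)
print(best_uniform)    # 56.0, which falls short of 60, so no uniform price covers the fixed cost

In this sketch no single price both covers the fixed cost and serves every customer who values the good above marginal cost, even though serving all of them is worthwhile overall; valuation-based contributions avoid that dilemma.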

So-called public goods are an extreme form of the overhead or fixed cost problem. With these the marginal (incremental) cost is zero, or extremely low compared with average cost. In the case of intermediate goods, the most important of these are information goods, particularly computer software and the results of research and development. With these only one unit of the good has to be produced and this can be consumed by an infinite number of users. Use of the information in one production process does not prevent its use in others. There is no limit to the number of copies which can be made on the appropriate dissemination medium. The only marginal cost is associated with the production of these copies, and this is very low. A journal article or research paper downloaded from the internet costs virtually nothing. Software is disseminated in the same way or by CDs costing less than a dollar.

The more basic research is normally funded by government or philanthropy and is generally made freely available except where military secrecy is a factor. However, funds are not always well directed because of the excessive influence of entrenched academics and research institutions and the tendency of politicians to placate noisy lobbyists and support vote catching fads.

Commercial or applied research and the vast bulk of computer software are subject to intellectual property rights and made available under license. Licensing would be fine if it underpinned an efficient charging system which did not exclude any firm by charging it more than the value it places on the product's use. However, for the reasons discussed above, this is not the case.

Surprisingly, if you believe all the rhetoric about the dynamism of entrepreneurial capitalism, government has played a critical role not only in basic research but also in the development of most of the major product innovations over the last half century or so. This has been mainly through military and space programs, which rely on war or the threat of it. These technological developments include jet aircraft, rockets, satellite communications, the Internet, computers, transistors, micro-chips, integrated circuits, bar codes, nuclear power and a vast array of new materials initially developed for military or space use. Even the mass production of consumer goods after World War II owes much to the diffusion of mass-production methods during the period of war production. Then we must not forget the role in the development of computer software of individual enthusiasts who work for the fun of it rather than profit.

Undersupply of cost information Firms are just as keen to limit competitors' knowledge of their current and expected future costs as they are to conceal matters relating to production activity and technology. This deprives both customers and competitors of information which could otherwise help them make better decisions about future technologies and choices of alternative inputs.

Even if secretiveness were not a problem, cost data would still be undersupplied because it is a public good. Firms develop estimates of their future costs relying on whatever they can glean about the future costs of inputs and on their in-house knowledge of their own operations. On the basis of this information and projections of demand for their products, they can decide whether to expand or shrink their operations. The more detailed and precise the information, the better. However, the better it is, the greater the cost involved, so beyond a certain point the increasing costs outweigh the benefits to the firm. If the value to other players were brought into account, it would be worth spending more for better information. But like any information, the data has a marginal cost of zero once produced, and this implies the same undersupply problem we have just referred to.

Removing Burdens on the Economy

Socialism will be free from a range of burdens that consume a great deal of resources under the present system. These include wheeling and dealing, government policy driven by vested interests, bloated law and order and inefficient tax collection.

Wheeling and dealing In a capitalist economy a lot of resources are devoted to activities that can for want of a better term be referred to as wheeling and dealing. These include, among others, the securities "industry" and advertising and marketing.

The purchase and sale of financial assets such as stocks and debt in the securities market seems like an excessively resource consuming and roundabout way of simply allocating funds to where they are needed. The effectiveness of the mechanism is also marred by speculation and various forms of chicanery.

Advertising and marketing is a source of considerable waste. Under socialism the production of goods and services will be accompanied by a flow of information that will make potential customers aware of their existence and suitability for various purposes. However, we will not need to be constantly reminded of a beverage that has been on the market for the last 50 years.

Bloated legal system and the cost of crime Capitalism requires an expensive judicial system to settle contractual and other disputes between businesses, to deal with criminals and to encourage the law-abiding to remain that way.

In the US there are two and a quarter million law enforcement officers, including police, private security and prison guards. There are also another million judges, lawyers and other workers in the judicial system.[649] Combined, this is over 2.5 per cent of the employed workforce.
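
The percentage follows from the employment figures just cited; in the short check below, the only number not taken from the text is the size of the employed US workforce, assumed here to be roughly 130 million.

law_enforcement = 2.25e6    # police, private security and prison guards
judicial = 1.0e6            # judges, lawyers and other judicial workers
employed_workforce = 130e6  # assumed round figure, not given in the text

share = (law_enforcement + judicial) / employed_workforce
print(round(share * 100, 1))   # 2.5 per cent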

Then there is the horrendous impact on victims. A 1996 US report on the cost to victims of crime estimated these to be $450 billion annually, or five to six per cent of GDP.[650] Violent crime (including drunk driving and arson) accounted for 95 per cent of total costs. The tangible costs, which comprise medical care, lost earnings, and public programs related to victim assistance, came to $105 billion. The pain, suffering, and reduced quality of life were given a monetary value of $345 billion. Rape and sexual assault accounted for 35 per cent of these intangible costs. Some high impact crimes were not included in the study, notably many forms of white-collar crime (including personal fraud) and drug trafficking.

Socialism would provide less fertile ground for the breeding of a criminal class. With nobody denied a job, crime is not required as a means of earning a living. And the fact that work would be virtually mandatory would discourage the establishment of a criminal sub-culture of idlers and provide less opportunity to engage in crime. At the same time, transparent collective ownership should make it harder to set up front organizations for the movement of illicit goods or the laundering of money. A socialist regime would also be in a better position to crack down on the criminal element. It can mobilize people in problem neighborhoods to combat their influence. It can also claim a mandate in its early days to implement emergency measures to facilitate convictions, if necessary. For example, being a hoodlum could be an offence, obviating the need for watertight evidence of a particular crime. Such a mandate could be claimed because it is a one-off measure and effective - those convicted are not simply replaced by a fresh crop - and it is not excessively punitive. Conviction, except for diehards and those known to have committed heinous crimes, would lead to retraining and guaranteed work.

Then there is the on-going epidemic of domestic violence perpetrated by outwardly normal people. Capitalism certainly does not help matters given the brutalizing and esteem lowering effects of workplace stresses and the general dog-eat-dog nature of capitalist society. It seems plausible that changing these conditions would also reduce the prevalence of sexual assault, although making the case for that is beyond the scope of the book and the author's competence.

Another serious problem is robbery and theft by drug users funding a habit made very expensive by its illegality. It is hard to imagine a revolutionary government doing a worse job than present ones of choosing the degree of legality or illegality that minimizes harm. Socialism would also be far less conducive to drug use. With better life options people will be less drawn to self-destructive behavior, and those who do turn to drugs will be more likely to get the support they need to control their habit and to live a functional life.

Vested interests Government policies can be seriously affected by the demands of politicians, officials, workers and capitalists pushing their own individual or sectional interests. Common forms of government action corrupted in this way include restrictions on competition such as trade protection and vote-buying expenditures by politicians. These can lead to considerable misallocation of resources.

To some extent sectional interests can be fought off under capitalism and this has been one of the aims of so-called microeconomic reform and deregulation. However, the pressure for special treatment will always be a problem, given that benefits from such measures are concentrated while the losses are dispersed throughout the population at large.

By eliminating capitalist ownership, socialism eliminates the vested interests stemming from that quarter. The interests of workers will become far more in line with those of society, given that the need to eliminate specific jobs does not lead to people being thrown on the scrap heap, although there will still be some divergence. There will still be the problem, referred to earlier in the discussion on accountability, where people have a lot invested in a particular project in terms of skills, personal prestige and sense of worth. They may be tempted to use corrupt influence and disingenuous argument to press for more resources than are economically justified, and to resist a project's closure if it proves to be a mistake or obsolete. This, as already suggested, would require a very transparent decision-making environment.

A similar problem requiring a similar solution may arise in the location of industries or facilities. People will tend to want their preferred work to be near where they currently live. Or they might oppose a development because of negative local amenity effects despite the fact that from society's point of view it is the best location.

Inefficient taxation system Capitalist countries all have very inefficient taxation systems. The US Internal Revenue Service has a budget of just under $11 billion, which is equal to about 0.1 per cent of GDP or $36 per person. Estimates of compliance costs incurred by taxpayers are unreliable but they are generally believed to be at least a few per cent of GDP. It is certainly possible that capitalist countries could come up with better tax systems; however, they could never be as efficient as the tax system under socialism.

Under socialism workers would receive the total income of society. Out of this income they would then pay a uniform head tax and also a land rent on their places of residence. These imposts do not distort prices and have low collection and compliance costs. A head tax could also conceivably be introduced under capitalism, but as Margaret Thatcher's attempt in the UK shows, there is bound to be a major backlash. Under capitalist conditions, where incomes are very unequal and precarious, a large number of people would be either unable to pay or seriously burdened. Under socialism, where income is secure and more equal (see below), these problems do not arise. As well as being non-distorting, a head tax is the only tax that imposes an equal burden on everyone. Unlike an income tax, it would not favor those who work less. There could be some flexibility in the timing of tax payments, particularly to cater for people who take an extended period off work, such as delayed payment with an interest penalty or perhaps even discounted prepayment.

Another source of revenue would be rent on land. All land would be collectively owned and residents would pay a rental based on its value. So a house with a river view would incur a higher land rent than one next to a cement works. This arrangement could be in place regardless of the form of tenure over the dwelling which could range from short term rental to something similar to the existing form of private ownership.

The revenue from tax and land-rent would then be spent on (1) net investment to increase the productive capacity of the economy, (2) maintenance and expansion of the supply of collective goods such as medical, sporting, education and transport facilities, (3) pensions and wage subsidies, (4) funding goods and services that society may choose to subsidize or provide free, such as healthcare and education and (5) the provision of administration.

Efficiency Accompanied by Greater Equality

As well as being more efficient than capitalism, collective ownership is more equitable in the distribution of what is produced. To start with, of course, there will no longer be rich capitalists or overpaid executives nor people on reduced incomes because of unemployment or underemployment.

Over time there will be less and less reliance on financial incentives and more on intrinsic motivation. So the more productive people won't resent the less productive receiving a similar hourly wage rate, or respond by reducing their own productivity. This includes people who have a particular brilliance or flair and make an especially large contribution to human productivity through innovations. As we discussed earlier, technological progress causes a continual decline in low skilled jobs. This means the bulk of the workforce becomes less spread out in terms of training and skills. So to the extent that this affects wages, most workers will not be far apart.

Excess supply or demand for certain kinds of labor may still be a source of pay differentials. However, whenever excess demand emerges it would usually be a short term problem solved by increased training and by giving priority to automating less popular tasks. As for excess supply due to some particular activities being very popular, the problem would diminish as an increasing proportion of work takes on a rewarding character. Also when choosing between more or less equally congenial activities, workers should need little inducement to choose the one on which society places the highest value.

Leaping into the Unknown

When it comes to predicting when and how collective ownership will supersede capitalism, we are confronted with a very murky crystal ball. There is certainly little support at the moment for such a project and nothing resembling a credible political trend espousing it. Once such a trend does emerge it will have quite a lot on its plate. It will have to: (1) gain a popular following; (2) ensure that its opponents are divided and isolated; (3) be able to form a government; and (4) possess a clear-headed understanding of its mission once in power. It is important to appreciate that such a movement will have nothing in common with the present pseudo left which is clearly part of the problem rather than the solution. This trend is hideously reactionary. It flirts with the greenies in their opposition to modernity, e.g., their nature worship and hostility to modern science and industry. It enthusiastically embraces the anti-globalization movement which opposes the industrialization of developing countries. It supports the fascist "resistance" in Iraq and Afghanistan. And it has a vision of socialism that is all about government hobbling growth and innovation, and supervising what people can consume.

The main thing working for change is the fact that capitalism's obsolescence becomes harder to ignore. With rising education levels and new technology continuing to take more of the irksomeness out of work, capitalism is a mounting hindrance to people getting on with their working lives. Furthermore, the new work tasks are harder to supervise and so require a level of self-activated commitment which this system has difficulty engendering. Then, as discussed in detail above, there is the increasing inefficiency of pricing under capitalism as fixed costs such as research and development and overheads grow in importance.

Establishing a new government, and new laws and institutions is only the first step. It will be followed by a protracted process of transformation and consolidation. Part of this will involve people getting over the habit of being subordinates and having others take the initiative. This includes resisting those in authority who want to turn social ownership into state capitalism. Hopefully, this process of transition will not be as protracted and painful as the one from backward agricultural societies to capitalism. It took centuries in Europe and is still going on in the developing countries today.

The prospects can best be summed up as: the future is bright but the road is tortuous.


ABBREVIATIONS

AIDS                          acquired immune deficiency syndrome

BST                            bovine somatotropin

Bt                              Bacillus thuringiensis

CaCO3                       calcium carbonate

CGIAR                        Consultative Group on International Agricultural Research

CIA                            Central Intelligence Agency

CIMMYT                     Centro Internacional de Mejoramiento de Maiz y Trigo (International Maize and Wheat Improvement Center)

CO                             carbon monoxide

CO2                           carbon dioxide

CSIRO                        Commonwealth Scientific and Industrial Research Organisation (Australia)

DOE                           US Department of Energy

ED                             electrodialysis

EIA                             (US) Energy Information Administration

EJ                              exajoule

EMI                            electromagnetic induction

EPA                            Environmental Protection Agency

EU                             European Union

FAO                           Food and Agriculture Organization

FDA                           Food and Drug Administration

GDP                           gross domestic product

GIS                            geographical information system

GM                            genetically modified

GPS                           global positioning system

GtC                            gigatonnes of carbon

H                               hydrogen

HIV                            human immunodeficiency virus

HYV                           high yield variety

IEA                             International Energy Agency

IPCC                          Intergovernmental Panel on Climate Change

kWh                           kilowatt hour

LNTH                          linear no threshold hypothesis

LWR                           light water reactor

MAS                           marker assisted selection

millirem                      one thousandth of a rem

Mtoe                          megatonnes oil equivalent

MW                            megawatt

NAS                           National Academy of Sciences

NASA                         National Aeronautics and Space Administration

NOx                           nitrogen oxides

OECD                         Organization for Economic Cooperation and Development

OPEC                         Organization of Petroleum-Exporting Countries

PM                             particulate matter

ppm                           parts per million

PST                            porcine somatotropin

PV                              photovoltaic

RO                             reverse osmosis

rem                           (Roentgen Equivalent Man) the dosage of ionizing radiation that will cause the same amount of injury to human tissue as 1 roentgen of X-rays

SO2                           sulfur dioxide

TB                              tuberculosis

tcm                            trillion cubic meters

TMI                            Three Mile Island

TWh                           terawatt hour

UN                             United Nations

UNSCEAR                   United Nations Scientific Committee on the Effects of Atomic Radiation

USDA                         US Department of Agriculture

USGS                         United States Geological Survey

VOC                           volatile organic compound

WEC                          World Energy Council

WHO                          World Health Organization

WWF                          World Wildlife Fund


REFERENCES

Abebe, Tilahun et al. 2003. Plant Physiology, April 11.

Abraham, Spence. 2002. Remarks Prepared for Delivery, Secretary of Energy Spence Abraham Global Nuclear Energy Summit The Cosmos Club Washington, D.C. February 14.

AgBioWorld.org. 31 Critical Questions in Agricultural Biotechnology. http://www.agbioworld.org/

Alberta Chamber of Resources. 2004. Oil Sands Technology Roadmap, Unlocking the Potential. January 30.

Alexandratos, N. (ed.). 1988. World Agriculture: Towards 2010. An FAO Study. Food and Agriculture Organization and John Wiley & Sons.

American Nuclear Society. 2001. Health Effects Of Low-Level Radiation. Position Statement 41. June.

Armstead H. C. H. and Tester J.W. 1987. Heat Mining. London: E & FN Spon.

Australian Coal Association. 2004. Coal 21 National Action Plan. March.

Avery, Dennis T. 1995. Saving the Planet with Pesticides and Plastic: the Environmental Triumph of High-Yield Farming. Indianapolis: Hudson Institute.

Avery, Dennis. T. 1999. We Are All Environmentalists Now. American Outlook. Summer: pp. 35-37.

Avlonitis, S.A., K. Kouroumbas and N. Vlachakis. 2003. Energy Consumption and Membrane Replacement Cost For Seawater RO Desalination Plants. Desalination 157 pp. 151-153

Ayittey, George B. N. 1992. Africa Betrayed. New York: St. Martin's Press.

Ayittey, George B. N. 1998. Africa in Chaos. New York: St. Martin's Press.

Ayittey, George B. N. 2002. Why is Africa Poor? in Julian Morris (ed.) 2002. Sustainable Development: Promoting Progress or Perpetuating Poverty? London: Profile Books.

Ayittey, George B. N. 2004. Corruption, the African Development Bank and Africa's Development. Testimony before the Senate Foreign Relations Committee. September 28.

Ayodele, Thompson et al. 2005. African Perspectives on Aid: Foreign Assistance Will Not Pull Africa Out of Poverty. Cato Institute Economic Development Bulletin. No. 2. September 14. http://www.cato.org/pubs/edb/edb2.pdf

Bailey, Ronald (ed.). 1999. Earth Report 2000: Revisiting the True State of the Planet. McGraw-Hill.

Beck, R. W. 2002. Demineralization Treatment Technologies for the Seawater Demineralization Feasibility Investigation. St. Johns River Water Management District. http://sjr.state.fl.us/programs/acq_restoration/res_devel/demin_treatment_tech.pdf

Borlaug, Norman E. 1997. Feeding a World of 10 Billion People: the Miracle Ahead. Lecture delivered at The Norman Borlaug Institute For Plant Science Research of De Montfort University, 31 May.

BP. 2006. BP Statistical Review of World Energy. June.

Bruinsma Jelle (ed.). 2003. World Agriculture: Towards 2015/2030: an FAO Perspective. London: Earthscan.

Bundesministerium fur Wirtschaft und Arbeit. 2002. Reserves, Resources and Availability of Energy Resources 2002. short version.

Bunger, James W. et al. 2004. Is Oil Shale America's Answer to Peak-Oil Challenge? Oil & Gas Journal. August 9.

Buros, O.K. 2000. The ABCs of Desalting. Second Edition. International Desalination Association.

California Energy Commission. undated. Membrane Filtration System Cuts Water Use and Eliminates Wastewater Discharge. www.energy.ca.gov/process/pubs/oberticasestudy_2.pdf

Cato Institute. 2005. Cato Handbook on Policy. Washington DC.

CGIAR and Global Environment Facility. 2002. Agriculture and the Environment: Partnership for a Sustainable Future.

Chen, W.L. et al. 2004. Is Chronic Radiation an Effective Prophylaxis Against Cancer? Journal of American Physicians and Surgeons. Volume 9. Number 1. Spring.

Chernobyl Forum. 2003-2005. Chernobyl's Legacy: Health, Environmental and Socio-Economic Impacts and Recommendations to the Governments of Belarus, the Russian Federation and Ukraine. Second revised edition. Vienna: International Atomic Energy Agency.

Chylek P.; J.E. Box and G. Lesins. 2004. Global Warming and the Greenland Ice Sheet. Climatic Change. March. vol. 63. no. 1-2. pp. 201-221.

CIMMYT. 2000a. 1998/99 World Wheat Facts and Trends: Global Wheat Research in a Changing World: Challenges and Achievements. Mexico, D.F.

CIMMYT. 2000b. Wheat in the Developing World. October. http://www.cimmyt.org/Research/Wheat/map/developing_world/wheat_developing_world.htm

CIMMYT. 2001a. Research Highlights of the CIMMYT Wheat Program, 1999-2000. Mexico, D.F.

CIMMYT. 2001b. World Maize Facts and Trends 1999/2000: Meeting World Maize Needs: Technological Opportunities and Priorities for the Public Sector. Mexico, D.F.

CIMMYT. 2002. People and Partnerships to Build Sustainable Livelihoods: Medium-Term Plan of the International Maize and Wheat Improvement Center (CIMMYT) 2003-2005+. Draft plan. September. Mexico, D.F.

Cohen, Bernard L. 1982. Exaggerating the Risk. In Kaku and Trainer (eds) 1982.

Cohen, Bernard L. 1990. The Nuclear Energy Option: An Alternative for the 90s. New York And London: Plenum Press.

Cohen, Bernard L. 1998. The Cancer Risk from Low Level Radiation. Radiation Research 149.

Colborn, T. et al. 1996. Our Stolen Future: Are We Threatening Our Own Fertility, Intelligence, and Survival?-A Scientific Detective Story. Dutton Books.

Congressional Budget Office. 2003. The Economics of Climate Change: A Primer. Congress of the United States.

Conway, G. 1997. The Doubly Green Revolution: Food for All in the Twenty‑First Century. Ithaca: Comstock Publishing Associates a division of Cornell University Press.

Cosgrove, William J. and Frank R. Rijsberman. 2000. Making Water Everybody's Business. London: Earthscan for the World Water Council.

Council for Biotechnology Information. 2004. Conservation Tillage: Biotech Crops Help Promote Soil and Fuel Conservation. http://www.whybiotech.com/index.asp?id=1813

Crosson Pierre and Jock R. Anderson. 2002. Technologies for Meeting Future Global Demands for Food. Discussion Paper 02-02 Resources for the Future.

Crosson, Pierre and Jock R. Anderson. 1992. Resources and Global Food Prospects, Supply and Demand for Cereals to 2030. World Bank Technical Paper No. WTP184 Oct. 1992.

Daley, Michael J. 1997 Nuclear Power: Progress or Peril? Lerner Publications Company Minneapolis.

De Freitas C. R. 2002. Are Observed Changes in the Concentration of Carbon Dioxide in the Atmosphere Really Dangerous? Bulletin Of Canadian Petroleum Geology. Vol. 50, No. 2. JUNE. pp. 297-327.

Desalination and Water Purification Technology Roadmap. 2003. A Report of the Executive Committee: Discussion Facilitated by Sandia National Laboratories and the US Department of Interior, Bureau of Reclamation. Desalination & Water Purification Research & Development Program Report #95.

Doran, Peter et al. 2002. Antarctic Climate Cooling and Terrestrial Ecosystem Response. Nature. 13 January.

Duchane, D. V. 1996, Geothermal energy from hot dry rock: a renewable energy technology moving towards practical implementation. Earth and Environmental Sciences Division, Los Alamos National Laboratory.

Dyson, T. 1996. Population and Food: Global Trends and Future Prospects. London and New York: Routledge.

Easterbrook, Gregg. 1995. A Moment on the Earth, the Coming Age of Environmental Optimism. Penguin Books.

Edwards, Brenda K. et al. 2005. Annual Report to the Nation on the Status of Cancer, 1975 - 2002, Featuring Population-Based Trends in Cancer Treatment. Journal of the National Cancer Institute, Vol. 97. No. 19. October 5 Special Article. pp. 1407-1427. http://jncicancerspectrum.oxfordjournals.org/cgi/reprint/jnci;97/19/1407.pdf

EIA. 2005. International Energy Outlook 2004. Energy Information Administration. US Department of Energy. July.

EIA. 2006. International Energy Outlook 2006. Energy Information Administration. US Department of Energy. June.

Ellis, Curtis. 2001. Fresh Water on Tap. Popular Science. September. p. 25.

Energy and Geoscience Institute University of Utah. 2001 or later. Geothermal Energy: Clean Sustainable Energy for the Benefit of Humanity and the Environment. University of Utah.

Enting, I.G., T.M.L. Wigley and M. Heimann. 1995. Future Emissions and Concentrations of Carbon Dioxide: Key Ocean/Atmosphere/Land Analyses. CSIRO Division of Atmospheric Research Technical Paper no. 31.

Essex, Christopher and Ross McKitrick. 2002. Taken By Storm: The Troubled Science, Policy and Politics of Global Warming. Key Porter Books.

European Wind Energy Association and Greenpeace. undated. Wind Force 12: A Blueprint to Achieve 12 Per Cent of the World's Energy from Wind Power by 2020. http://www.proventi.cz/files/windforce12.pdf

FAO. 1997. State of the World's Forests 1997. Rome: Food And Agriculture Organization of the United Nations.

FAO. 2002. World agriculture: towards 2015/2030, Summary Report. Rome: Food And Agriculture Organization of the United Nations.

FAO. 2003. Unlocking the Water Potential of Agriculture. Rome: Food And Agriculture Organization of the United Nations.

FAO. 2004. Hybrid Rice for Food Security. http://www.fao.org/rice2004/en/f-sheet/factsheet6.pdf.

FAO. 2005. State of the World's Forests 2005. Rome: Food And Agriculture Organization of the United Nations.

FAO. 2006. Global Forest Resources Assessment 2005: Progress towards Sustainable Forest Management. Rome: Food And Agriculture Organization of the United Nations.

Fawcett, Richard and Dan Towery. 2002. Conservation Tillage and Plant Biotechnology: How New Technologies Can Improve the Environment by Reducing the Need to Plow. Conservation Technology Information Center http://www.ctic.purdue.edu/CTIC/BiotechPaper.pdf

Fox, Jeffrey L. 2003. Resistance to Bt Toxin Surprisingly Absent From Pests. Nature Biotechnology. September. Vol. 21. No. 9. pp. 958 – 959.

Fredriksson, Gina. 2003. Power from Ocean Waves. Umea University, Applied Physics and Electronics.

Fumento Michael. 2001. Fear Not the Farms and the Fertilizer. May 11. http://www.fumento.com/fertilizer.html

Fumento, Michael. 2003. BioEvolution: How Biotechnology is Changing our World. San Francisco: Encounter Books.

GAO. 2000. Report to the Honorable Pete Domenici, US Senate Radiation Standards, Scientific Basis Inconclusive, and EPA and NRC Disagreement Continues. United States General Accounting Office.

Garwin, Richard L. & Georges Charpak. 2001. Megawatts and Megatons: a Turning Point in the Nuclear Age? New York. Alfred A. Knopf.

Gatehouse, A. M. R., N. Ferry and R. J. M. Raemaekers. 2002. Trends in Genetics. Vol.18. No.5. May. pp. 249-51.

Gleick, Peter H. et al. 2002. The World's Water 2002-2003: The Biennial Report on Freshwater Resources. Island Press.

Glueckstern, Pinhas. undated. Desalination: Current Situation and Future Prospects. http://www.biu.ac.il/Besa/waterarticle1.html

Goklany, Indur M. 2000. Richer is More Resilient: Dealing with Climate Change and More Urgent Environmental Problems. In Bailey (ed.) 1999.

Goldemberg, Jose (ed.). 2000. World Energy Assessment: Energy and the Challenge of Sustainability. UNDP, UN-DESA, World Energy Council. http://www.undp.org/energy/activities/wea/drafts-frame.html

Grimston, Malcolm C. and Peter Beck. 2000. Civil nuclear energy : fuel of the future or relic of the past? London: Royal Institute of International Affairs, Energy and Environment Programme.

Grubb, M. J. and Meyer, N. I. 1994. Renewable Energy Sources for Fuels and Electricity (Chapter 4, Wind Energy: Resources, Systems, and Regional Strategies) Washington DC: Island Press.

Gruhn, Peter, Francesco Goletti, and Montague Yudelman. 2000. Integrated Nutrient Management, Soil Fertility, and Sustainable Agriculture: Current Issues and Future Challenges. 2020 Brief 67. September. International Food Policy Research Institute.

Guin, Karen A. undated. Biotechnology at UC Davis. Division of Biological Sciences http://www.biotech.ucdavis.edu/Documents/biotech.pdf

Hagerman, George. 2001. Southern New England Wave Energy Resource Potential. Center for Energy and the Global Environment, Virginia Tech Alexandria Research Institute.

Hanna, E. and J. Cappelen. 2003. Recent Cooling in Coastal Southern Greenland and Relation with the North Atlantic Oscillation. Geophysical Research Letters. 30: 32‑1 - 32‑3.

Haupt, Arthur and Thomas T. Kane. 2004. Population Reference Bureau's Population Handbook. 5th Edition. Washington DC: Population Reference Bureau.

Henao, Julio and Carlos Baanante. 1999. Nutrient Depletion in the Agricultural Soils of Africa. 2020 Brief 62. International Food Policy Research Institute.

Herzog, Howard J., Ken Caldeira and Eric Adams. undated. Carbon Sequestration Via Direct Injection. http://sequestration.mit.edu/pdf/direct_injection.pdf

Hodgson, Peter E. 1999. Nuclear Power, Energy and the Environment. University of Oxford Imperial College Press.

Hoffman Jean M. 2001. Nuclear's New Age: New Nuclear Reactor Technology. Machine Design. September 27.

Hollander, Jack M. 2003. The Real Environmental Crisis: Why Poverty, Not Affluence, is the Environment's Number One Enemy. University of California Press.

Holt, Mark. 2003. Civilian Nuclear Waste Disposal. Issue Brief for Congress. Congressional Research Service The Library of Congress. Resources, Science, and Industry Division. Updated January 29 2003. Order Code IB92059.

Hope, Kempe Ronald and Bornwell C. Chikulo (eds). 1999. Corruption and Development in Africa: Lessons from Country Case-Studies. New York : St. Martin's Press.

Hore-Lacy, Ian. 1999. Nuclear Electricity. 5th Edition.

Hore-Lacy, Ian. 2000. The Future of Nuclear Energy. Paper presented at the Royal College of Physicians Conference. Adelaide. 4th May. http://www.uic.com.au/opinion6.html

IEA. 2001a. World Energy Outlook 2001. International Energy Agency.

IEA. 2001b. Nuclear Power in the OECD. International Energy Agency.

IEA. 2003. Coal Information 2003. International Energy Agency.

IEA. 2006a. Key World Energy Statistics 2005. International Energy Agency.

IEA. 2006b. IEA Wind Energy Annual Report 2005. International Energy Agency.

International Commission for the Protection of the Rhine. 2004. Rhine Salmon 2020. http://www.iksr.org.

IPCC. 2001. Climate Change 2001: The Scientific Basis. Intergovernmental Panel on Climate Change.

Jalonick, Mary Clare. 2005. Bill Contains Incentives for New Coal-Conversion Plant. Associated Press. 30 September.

James, Clive 2003. Global Review of Commercialized Transgenic Crops: 2002. ISAAA Briefs No. 29. International Service for the Acquisition of Agri-Biotech Applications.

Joughin, Ian and Slawek Tulaczyk. 2000. Positive Mass Balance of the Ross Ice Streams, West Antarctica. Science. Vol. 295. Issue 5554. pp. 476-480. 18 January.

Kaku, Michio and Jennifer Trainer (eds). 1982. Nuclear Power, Both Sides: The Best Arguments For and Against the Most Controversial Technology. New York and London: W. W. Norton & Company.

Kasper, J.N. and Allard, M. 2001. Late-Holocene Climatic Changes as Detected by the Growth and Decay of Ice Wedges on the Southern Shore of Hudson Strait, Northern Quebec, Canada. The Holocene. 11: 563-577.

Kleckner, Dean. 2003. Drought-Tolerant Crops: Preparing to Survive a Heat Wave. AgWeb.com September 4. http://archives.foodsafetynetwork.ca/
agnet/2003/9-2003/agnet_september_4.htm

KPMG, Bureau voor Economische Argumentatie. 1999. Solar Energy: from perennial promise to competitive alternative. Final report, project number 2562, written on the commission of Greenpeace Nederland. August.

Krabill, W. et al. 2000. Greenland Ice Sheet: High Elevation Balance and Peripheral Thinning. Science. 289: 428-30.

Kursunoglu, Behram N., Stephan L. Mintz and Amold Perlmutter. 1998. Environment and Nuclear Energy. New York and London: University of Miami Coral Gables. Florida. Plenum Press.

Kuuskraa, Vello A and Gregory C. Bank. 2003. Gas from Tight Sands, Shale's Growing Share of US Supply. Oil & Gas Journal. Dec 8. Vol.101. Issue 47. p. 34.

Lackner, K. S., P. Grimes and H. J. Ziock. undated. Capturing Carbon Dioxide from Air. paper.

Lackner, K. S., P. Grimes, and H. J.Ziock. 1999. The Case for Carbon Dioxide Extraction from Air. Sourcebook. Sept. Vol. 57. No. 9. pp. 6-10.

Lackner, Klaus S. 2003. A Guide to CO2 Sequestration. Science. June 13.Vol. 300. Issue 5626. pg. 1677. 2 pgs.

Laxon, Seymour et al. 2002. Recent Variations in Arctic Sea-Ice Thickness, Report to the Arctic Ocean Science Board. Tromsø Norway http://www.aosb.org/PDF/OPP_final_report_to_AOSB.pdf

Lewis Jr, Marlo. 2004. Launching the Counter Offensive: A Sensible Sense of Congress Resolution on Climate Change. Competitive Enterprise Institute.

Lin, W. et al. 1995. Genetic Engineering of Rice for Resistance to Sheath Blight. Nature Biotechnology 13, 686 - 691.

Loftus, Peter. 2003. Energy Firms Bury Carbon Emissions. Wall Street Journal. (Eastern edition). New York, N.Y.: Jan 8. pg. B.5.A,

Lomborg, Bjorn. 2001. The Skeptical Environmentalist: Measuring the Real State of the World. Cambridge University Press.

McGowan, Jon G. and Stephen R. Connors. 2000. Windpower: A Turn of the Century Review. Annual Review of Energy and the Environment. 25:147-97.

Mahlman, J. D. 1998. Science and Nonscience Concerning Human-Caused Climate Warming. Annual Review of Energy and the Environment. 23: 83-105.

Mahlman, J.D. 2001. The Long Time Scales of Human - Caused Climate Warming: Further Challenges for the Global Policy Process. Pew Center Workshop on the Timing of Climate Change Policies. October 10-12. Pew Center on Global Climate Change. Arlington. VA.

Marland, Gregg and Tom Boden. 2001 or later. The Increasing Concentration of Atmospheric CO2: How Much, When, and Why? Environmental Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee.

Martin, Rowan B. 1999. Biological Diversity: Divergent Views on its Status and Diverging Approaches to its Conservation. In Bailey (ed.) 1999.

Michaels, Patrick J. 2004. Meltdown: The Predictable Distortion of Global Warming by Scientists, Politicians, and the Media. Washington. D.C.: Cato Institute.

Miller, T., M.A. Cohen and B. Wiersema. 1996. Victim Costs and Consequences: A New Look. National Institute of Justice Research Report. Washington D.C.: NIJ. http://www.ncjrs.org/pdffiles/victcost.pdf

Mitchell, Donald O. et al. 1997. The World Food Outlook. Cambridge: Cambridge University Press.

Mock, John E. et al. 1997. Geothermal Energy from the Earth: Its Potential Impact as an Environmentally Sustainable Resource. Annual Review of Energy and the Environment, November. Vol. 22. pp. 305-356.

National Center for Education Statistics 2005. The Condition of Education 2005 US Department of Education. Institute of Education Sciences. NCES 2005-094.

National Energy Board. 2004. Canada's Oil Sands Opportunities and Challenges to 2015. May.

NEA. 2001. Trends in the Nuclear Fuel Cycle: Economic, Environmental and Social Aspects. Nuclear Energy Agency. OECD.

NEA. 2004. Uranium 2003: Resources, Production and Demand. Nuclear Energy Agency. OECD.

Nero Jr, Anthony V. 1982. Safe Enough. In Kaku and Trainer (eds). 1982.

Oliver, Mike. 2001. Alternative Fuels? American Enterprise. September.

Pacific Institute for Studies in Development, Environment and Security. 1999. Sustainable Use of Water, California Success Stories, Executive Summary/Introduction. http://www.pacinst.org/reports/sustainable_california/ca_water_success_stories.pdf

Pew Initiative on Food and Biotechnology. 2001. Harvest on The Horizon: Future Uses of Agricultural Biotechnology. September.

Pingali Prabhu L. and Mark W. Rosegrant. 2001. Intensive Food Systems in Asia: Can the Degradation Problems be Reversed? in D.R. Lee and C.B. Barrett (eds) Tradeoffs or Synergies? CAB International.

Pittock, Barrie (ed.). 2003. Climate Change: An Australian Guide to the Science and Potential Impacts. Australian Greenhouse Office.

Reid, Walter V. and Kenton R. Miller. 1989. Keeping Options Alive: The Scientific Basis for Conserving Biodiversity. World Resource Institute. October.

Renewable Energy Policy Project. 2003. Wind Energy for Electric Power. REPP Issue Brief. July. (updated November).

Reuther, Christopher G. 2000. Saline Solutions: The Quest for Fresh Water. Environmental Health Perspectives. Vol. 108. No. 2. February. http://www.ehponline.org/docs/2000/108-2/innovations.html

Reynolds, M.P., J.I. Ortiz-Monasterio and A. McNab (eds.). 2001. Application of Physiology in Wheat Breeding. Mexico, D.F.: CIMMYT.

Risbud, Aditi. 2006. Cheap Drinking Water from the Ocean - Carbon Nanotube-Based Membranes Will Dramatically Cut the Cost of Desalination. Technology Review. June 12. http://www.technologyreview.com

Romanovsky, V. et al. 2002. Permafrost Temperature Records: Indicators of Climate Change. EOS Transactions. American Geophysical Union. 83: 589, 593-594.

Rosegrant, Mark W. and Peter B. R. Hazell. 2000. Transforming the Rural Asian Economy : The Unfinished Revolution. (A Study of Rural Asia, Volume 1). An Asian Development Bank Book.

Rushing, R.W., 2001. Greener than Thou: Renewable Energy for the Mass Market. Whole Earth. Summer.

Rutherford, Phil. 2002a. Radiation Risk: A Critical Look at Real and Perceived Risks from Radiation Exposure. Slide presentation. August 12, http://www.philrutherford.com/Radiation_Risk.pdf

Rutherford, Phil. 2002b. Radiation Risk. http://www.philrutherford.com/Radiation_Risk_LNT.pdf

Safe, Stephen H. 1999. Endocrine Disruptors, New Toxic Menace? In Bailey (ed.) 1999.

Sanmuganathan K. 2000 WCD Thematic Review Options Assessment IV.2: Assessment of Irrigation Options. Final Version. World Commission on Dams.

Scherr, Sara J. 1999. Soil Degradation: A Threat to Developing-Country Food Security by 2020? Food, Agriculture, and the Environment Discussion Paper 27. Washington, DC: International Food Policy Research Institute.

Semiat, Raphael. 2000. Desalination: Present and Future. International Water Resources Association Water International. Vol. 25. No. 1. pp. 54-65. March.

Sherwood, Keith and Craig Idso. 2004. Another Arctic Avian Aria. CO2Science.org. Vol. 7. No. 11. 17 March.

Sims, Gordon. 1990. The Anti-Nuclear Game. University of Ottawa Press.

Singh R.B. 2001 Investing in Land and Water: The Fight against Hunger and Poverty in the Developing Asia. Proceedings of the Regional Consultation Bangkok, Thailand 3-5 October. Food and Agriculture Organization of the United Nations. WCD Thematic Review.

Smale, M et al. 2001. Dimensions of Diversity. CIMMYT Bread Wheat from 1965 to 2000. Mexico, D.F.: CIMMYT. http://www.cimmyt.org/Research/Wheat/map/research_results/DimDiversity/pdfs/DDiversity.pdf

Smil, V. 2000. Feeding the World: A Challenge for the Twenty-First Century. Cambridge: MIT Press.

Soon, Willie et al. 2001. Global Warming A Guide to the Science. Risk Controversy Series 1. The Fraser Institute. Centre for Studies in Risk and Regulation. Vancouver British Columbia Canada.

SOT. 2002. The Safety of Genetically Modified Foods Produced Through Biotechnology. Society of Toxicology Position Paper. Adopted September 25th. http://www.toxicology.org/ai/gm/GM_Food.asp

Stern, Nicholas. 2006. The Economics of Climate Change: The Stern Review. Cambridge University Press. http://www.hm-treasury.gov.uk/independent_reviews/stern_review_economics_climate_change/stern_review_report.cfm

The Economist. 2004. Sub-Saharan Africa Survey. Jan 15.

Thomas, R. et al. 2000. Mass Balance of the Greenland Ice Sheet at High Elevations. Science 289: 428-30.

Thompson P. A., D. Pridden and J. W. Griffiths. 2001 or later. Offshore Renewables: Collaborating for a Windy and Wet Future?. http://www.owen.eru.rl.ac.uk/documents/BWEA23/BWEA23_Thompson_Wind&Wave_paper.pdf

Thorpe, T. W. 1999. A Brief Review of Wave Energy: A Report Produced for the UK Department of Trade and Industry. May.

Tobin, Mary. 2003. Columbia Research Dispels 150 Years of Thinking - Mild Winter Conditions in Europe Are Not Due to the Gulf Stream. Columbia News. February 5. http://www.columbia.edu/cu/news/

UIC. 2003. Thorium. Nuclear Issues Briefing Paper # 67. Uranium Information Centre. http://www.uic.com.au/nip67.htm

UIC. 2006a. Transport and the Hydrogen Economy. Nuclear Issues Briefing Paper # 73. Uranium Information Centre. June. http://www.uic.com.au/nip73.htm

UIC. 2006b. Safety of Nuclear Power Reactors. Nuclear Issues Briefing # 14. Uranium Information Centre. May. http://www.uic.com.au/nip14.htm

UNDP. 2000. Energy and the Challenge of Sustainability, the World Energy Assessment. New York: United Nations Development Programme.

UNESCO. 1999. Summary of the Monograph. World Water Resources at the Beginning of the 21st Century. Prepared in the Framework of IHP UNESCO. http://webworld.unesco.org/water/ihp/db/shiklomanov/summary/html/summary.html

UNPD. 2002. World Population Prospects: The 2002 Revision Highlights. United Nations Population Division.

USGS. 2006. Mineral Commodity Summaries. United States Geological Survey.

van der Zwaan, B. C. C. et al. 1999. Nuclear Energy Promise or Peril? River Edge. NJ: World Scientific.

Vannuccini, Stefania. 2003. Overview of Fish Production, Utilization, Consumption and Trade, Based on 2001 Data. Rome: Food and Agriculture Organization of the United Nations.

Vesterby, Marlow and Kenneth S. Krupa. 1997. Major Uses of Land in the United States. Resource Economics Division, Economic Research Service, US Department of Agriculture. Statistical Bulletin No. 973. http://www.ers.usda.gov/publications/sb973/sb973.pdf.

Walker J. Samuel. 2000. Permissible Dose: A History of Radiation Protection in the Twentieth Century. University of California Press Berkeley / Los Angeles / London

Wardell, Charles. 2001. The Politically Correct Nuke: MIT Students Help Design a Nuclear Power Plant that They Hope will Revive the Industry. Whole Earth. Winter.

Water Science and Technology Board. 2004. Review of The Desalination and Water Purification Technology Roadmap. Washington. D.C. The National Academies Press.

WHO 2002. The World Health Report 2002: Reducing Risks, Promoting Healthy Life. Geneva: World Health Organization.

Williams, Bob. 2003. Heavy Hydrocarbons Playing Key Role in Peak-Oil Debate, Future Energy Supply. Oil & Gas Journal. Tulsa: July 28. Vol. 101. Issue 29. p. 20.

Winters, Jeffrey. 2003. Carbon Underground. Mechanical Engineering. February. Vol. 125. Issue 2. p. 46. 3 pgs.

Wood, Stanley, Kate Sebastian and Sara J. Scherr. 2000. Pilot Analysis of Global Ecosystems: Agroecosystems. Washington D.C.: World Resource Institute.

World Bank. 2003. Water Resources Sector Strategy: Strategic Directions for World Bank Engagement. February. Washington DC.

World Bank. 2006. Global Economic Prospects, Economic Implications of Remittances and Migration. Washington DC.

World Coal Institute. 2004. Clean Coal Building a Future through Technology. London.

World Commission on Dams. 2000. Final Report Fact Sheet. Dams and Development: A New Framework for Decision-Making. The Report of the World Commission on Dams. http://www.dams.org/report/report_factsheet.htm

World Energy Council. 1994. New Renewable Energy Resources: A Guide to the Future. London: Kogan Page Limited.

World Energy Council. 2001a. Wave Energy. Survey of Energy Resources.

World Energy Council. 2001b. Geothermal Energy. Survey of Energy Resources.

World Water Vision Commission (undated) A Water Secure World Vision for Water, Life and the Environment. World Water Council.

WRI. 1994. World Resources 1994-95: A Guide to the Global Environment. Washington, DC: World Resources Institute.

Yegulalp, T. M. and Lackner, K. S. 2004. Coal-based Clean Energy Systems and CO2 Sequestration. Mining Engineering. Oct. Vol. 56, Iss. 10: pg. 29, 6 pgs.


NOTES

[1] The rich countries are here defined as OECD members except for the Czech Republic, Hungary, Mexico, Poland, the Slovak Republic and Turkey. In 2004 their per capita consumption was 5.49 toe. For the poorer countries per capita consumption was 1.13 toe. See IEA 2006a: 48-57.

[2] The term "feudalism" is used here loosely to mean a society where most people engage in small-scale agriculture and are ruled by lords who live off them.

[3] FAO 2002: 15

[4] WHO 2002: 53

[5] WHO 2002: 54

[6] WHO 2002: 54-55

[7] WHO 2002: 86

[8] Smil 2000: xix.

[9] Haupt and Kane 2004: 50.

[10] United Nations figures. www.un.org/esa/population/publications/longrange2/LR_EXEC_SUM_TABLES_FIGS.xls

[11] It should not be too off the mark to assume that the 15 per cent who live in developed countries are consuming 30 per cent of grain (i.e., a per capita share twice that of developing countries). This would mean that of the 1.8 billion tonnes of grain produced every year, 1.26 billion tonnes goes to developing countries. If the population in these countries increases by 65 per cent, doubling their per capita consumption would mean increasing their total consumption 3.3 fold. Multiplying 1.26 billion by 3.3 gives 4.158 billion. Adding the 540 million tonnes consumed by developed countries gives a total of around 4.7 billion which is 2.6 times the original figure of 1.8 billion. If the population in developing countries increases by 50 per cent, grain production would need to increase 2.4 fold.
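A rough check of this arithmetic (a minimal sketch in Python, using only the figures assumed in the note above):

# All figures are the assumptions stated in the note above.
total = 1.8                      # billion tonnes of grain produced per year
developed = 0.30 * total         # 0.54 billion tonnes consumed by developed countries
developing = total - developed   # 1.26 billion tonnes consumed by developing countries

# 65 per cent population growth combined with a doubling of per capita consumption
future = developing * 1.65 * 2 + developed
print(round(future, 1), round(future / total, 1))            # roughly 4.7 and 2.6

# With 50 per cent population growth instead of 65 per cent
print(round((developing * 1.5 * 2 + developed) / total, 1))  # roughly 2.4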

[12] http://www.earth-policy.org/Updates/Update31_data_WorldProdCons.htm. Grain production is currently just over 1.8 billion tons.

[13] Bruinsma (ed.) 2003: 315 - 316.

[14] Bruinsma (ed.) 2003: 315

[15] Alexandratos 1988: 383-4, FAO 2002: 50.

[16] Reynolds et al. (eds.) 2001: 3, 25, 160.

[17] FAO. 2004.

[18] People's Daily Online 2005. "Hybrid wheat breeds new hope" April 4.

[19] People's Daily 2003. "China Breeds World's First Hybrid Soybean," 17 January 2003

[20] For wheat see CIMMYT 2000a. For rice see: http://www.futureharvest.org/growth/generalrice.bkgnd.shtml; Mitchell 1997: 60; Conway 1997: 142.

[21] http://www.fumento.com/fertilizer.html

[22] Pew Initiative on Food and Biotechnology 2001: 32.

[23] Guin undated, p. 9, and Avery 1995: 217.

[24] Lin 1995

[25] Pew Initiative on Food and Biotechnology 2001: 24.

[26] Pew Initiative on Food and Biotechnology 2001: 22.

[27] http://news.uns.purdue.edu/html4ever/030820.Goodwin.resist.html

[28] The discovery of the gene that protects potatoes against late blight and its cloning by scientists at the University of Wisconsin-Madison was reported July 14 2003 in online editions of the Proceedings of the National Academy of Sciences (PNAS).

[29] Avery 1995: 217

[30] The Economist, August 21, 2003. Rich Pickings: A New Research Centre in Uganda will Study the Banana; and Saving the World's Bananas http://www.whybiotech.com/index.asp?id=4054

[31] Saving the World's Bananas http://www.whybiotech.com/index.asp?id=4054

[32] "Hairpin RNA' beats plant viruses" CSIRO Media Release - Ref 2001/150 - Jun 20 , 2001

http://www.csiro.au/files/mediaRelease/mr2001/prhairpinrna.htm

[33] James 2003, p. 153.

[34] This research by CSIRO was terminated because of allergy concerns.

[35] Avery 1995: 218.

[36] CIMMYT 2000b.

[37] CIMMYT 2001b: 10.

[38] Pew Initiative 2001: 24, 26

[39] Pew Initiative 2001: 27

[40] Pew Initiative 2001: 27

[41] http://www.cimmyt.org/english/wps/news/2006/feb/
farmers_striga.htm

[42] Fox 2003.

[43] Fighting Against Resistance - Christof Fellmann, Checkbiotech.org, January 24, 2003.

[44] Phadnis, Chitra. "Monsanto begins gene pyramiding in Bt cotton seeds." Financial Times May 10, 2002

[45] CIMMYT 2000b

[46] CIMMYT 2002: 9.

[47] CIMMYT 2002: 9.

[48] Scientists work on drought-proof rice. Australian Associated Press April 29, 2003.

[49] CIMMYT 2001b: 14.

[50] Genetically Modified Plants Can Survive Drought, Find Scientists - Cordis News Service, August 12, 2003.

[51] Kleckner 2003

[52] Abebe et al. 2003.

[53] Reynolds et al. 2001: 3.

[54] FAO 2003: 27.

[55] CIMMYT 2001b: 13.

[56] FAO 2003: 27-28.

[57] Reynolds et al. 2001: 142

[58] CIMMYT 2000b.

[59] CIMMYT 2001a: 11.

[60] Reynolds et al. 2001: 124.

[61] Reynolds et al. 2001: 26.

[62] CIMMYT 2001b: 7.

[63] Science Daily 2002 "Fried Green Tomatoes: Transgenic Tomatoes Reveal Critical Component Of Thermotolerance" 20 June. http://www.sciencedaily.com/releases/2002/06/020618073350.htm

[64] Fumento 2003: 234.

[65] Fumento 2003: 234

[66] Fumento 2003: 235.

[67] Reynolds et al. 2001: 108.

[68] Bruinsma 2003: 318.

[69] Rana Munns, CSIRO Plant Industry and Ray Hare at Enterprise Grains Australia.

[70] "Firm Creating Salt-Resistant Crops", The Arizona Republic by Kerry Fehr-Snyder July 15, 2002; Plant Scientist Eduardo Blumwald to Receive Humboldt Award. University of California, UC Davis News and Information July 7, 2003

[71] "China cultivates salt-resistant cloned tomatoes, rice, soya, poplars." Asia Intelligence Wire September 15, 2002.

[72] Fumento 2003: 237.

[73] Reynolds et al. 2001: 219.

[74] Reynolds et al. 2001: 221.

[75] Reynolds et al. 2001: 219.

[76] Reynolds et al. 2001: 219.

[77] Reynolds et al. 2001: 235.

[78] Reynolds et al. 2001: 227.

[79] Fumento 2003: 240.

[80] CIMMYT 2000b: 16.

[81] Reynolds et al. 2001: 176

[82] Reynolds et al. 2001: 176

[83] CIMMYT 2001b: 16.

[84] CIMMYT 2001b: 16.

[85] Avery 1999.

[86] Bruinsma 2003: 318.

[87] Pew Initiative 2001: 31.

[88] The Politics of Biotech Foods AEI Newsletter American Enterprise Institute July 25, 2003.

[89] Fumento 2003: 245.

[90] http://www.cimmyt.org/Research/Maize/about/
01011997brochure.htm

[91] Wood et al. 2000: 21.

[92] Dyson 1996: 193.

[93] Smil 2000: 14.

[94] FAO 2002: 58.

[95] FAO 2002: 60.

[96] Smil 2000: 164-169.

[97] Smil 2001: 306.

[98] Conway 1997: 156.

[99] Smil 2000: 164-169.

[100] Avery 1995: 217.

[101] Smil 2000: 164-169.

[102] Smil 2000: 164-169.

[103] "Blocking Burping Beasts" http://www.abc.net.au/science/news/stories/s310139.htm

[104] Smil 2000: 164-169.

[105] Pew Initiative 2001: 36.

[106] "Sheep thriving in GMO feeding trial" CSIRO Media Release, Wednesday, 22 November 2000 Ref 2000/310

[107] Conway 1997: 153-158.

[108] Alexandratos 1988: 203.

[109] Guin undated

[110] FAO 2002: 61.

[111] FAO 2002: 62.

[112] Conway 1997: 151.

[113] Conway 1997: 153-158.

[114] Pew Initiative 2001: 74.

[115] Pew Initiative 2001: 74.

[116] Smil 2000: xx.

[117] Smil 2000: 162.

[118] www.toxicology.org/ai/gm/GM_Food.doc

[119] "Are Foods Developed from Recombinant DNA Safe to Eat?" http://ccr.ucdavis.edu/biot/html/safe/index.shtml

[120] Rowett Research Institute: http://www.rri.sari.ac.uk

[121] Fumento 2003: 281-282.

[122] http://www.agbioworld.org/biotech-info/articles/agbio-articles/critical.html

[123] Pew Initiative 2001: 34.

[124] http://www.csiro.au/csiro/content/standard/ps3u,,.html

[125] Pew Initiative 2001: 40.

[126] GM Potato 'Could Improve Child Health' BBC News, January 1, 2003

[127] GM Potato 'Could Improve Child Health' BBC News, January 1, 2003; Pew Initiative p. 34

[128] Pew Initiative 2001: 34.

[129] "Scientists Create Tomato to Reduce Cancer Risk" Yorkshire Post June 24, 2002.

[130] Fumento 2003: 251.

[131] Fumento 2003: 251.

[132] Health Food, Biotech style- Robert Wager, The Globe and Mail, Oct. 17, 2003

[133] Health Food, Biotech style- Robert Wager, The Globe and Mail, Oct. 17, 2003

[134] Fumento 2003: 194-195.

[135] Fumento 2003: 194-195.

[136] Fumento 2003: 197.

[137] Fumento 2003: 197.

[138] Fumento 2003: 45.

[139] "GM Plants to Fight Allergies" Sydney Morning Herald, August 28, 2003.

[140] Fumento 2003: 36-37.

[141] "GE Grass Good News for Hayfever Sufferers" Royal Society News, June 19, 2003

[142] http://pewagbiotech.org/research/harvest/

[143] Fumento 2003: 15-16.

[144] http://pewagbiotech.org/research/harvest/

[145] Land-mine Detecting Plants Created 10 October 2005. Gizmag Emerging Technology Magazine.

[146] Biotechnology and Genetic Diversity http://www.whybiotech.com/index.asp?id=4009

[147] Biotechnology and Genetic Diversity http://www.whybiotech.com/index.asp?id=4009; "Study Says Transgene Unlikely to Spread among Wild Sunflowers." Life Science Weekly June 23, 2003

[148] AgBioview August 9 2005: http://www.agbioworld.org/newsletter_wm/index.php?caseid=archive&newsid=2398

[149] For a discussion of the various studies done on the issue see Gatehouse et al. 2002.

[150] http://www.whybiotech.com/index.asp?id=1819

[151] Martina McGloughlin, "Ten Reasons Why Biotechnology Will Be Important To The Developing World," AgBioForum Magazine, Volume 2, Number 3 & 4, Article 4. http://www.agbioforum.org/v2n34/v2n34a04-mcgloughlin.htm

[152] http://www.whybiotech.com/index.asp?id=1819

[153] Fumento 2003: 212-213.

[154] Fumento 2003: 266.

[155] "GM Cotton Crops Halve Pesticide Use" Sydney Morning Herald August 1, 2003

[156] Fumento 2003: 212-213.

[157] Fumento 2003: 221-223.

[158] Conservation tillage refers to any planting system that leaves more than 30 per cent of the soil covered with crop residue to prevent erosion, compared to less than 15 per cent with conventional tillage. Biotech crops help farmers control weeds without tillage, thus making conservation tillage systems practical. Globally, 'no till' conservation practices have increased by 35 per cent since biotech crops came on the market in 1996. In fact, almost all no-till acreage occurred where biotech crops have been employed. By using conservation tillage, farmers have been able to cut their production costs by 10 per cent or more. Importantly, the crop mulch in conservation tillage systems shades the ground and slows evaporation. The improved soil structure resulting from less ploughing actually increases the movement of water into the soil following rain or irrigation and holds it there, which means less irrigation is necessary. Also, less ploughing means less money spent on fuel.

[159] http://www.agbioworld.org/newsletter_wm/index.php?caseid=archive
&newsid=1751

[160] Above note and http://www.whybiotech.com/index.asp?id=1813

[161] Fumento 2003: 313-314.

[162] Poison-Craving Plant, Germ Designed to Suck up Pollutants 'Bio-Remediation' Shows Promise. USA Today. October 8, 2002

[163] "Superbug to the Rescue!" - Katharine Mieszkowski, Salon.com, August 28, 2003.

[164] Fumento 2003: 306-307.

[165] Fumento 2003: 316.

[166] Bruinsma 2003: 127.

[167] Cited in Table 3.2 of Crosson and Anderson 1992.

[168] Borlaug 1997.

[169] UNPD 2002: 1.

[170] FAO 2002: 41.

[171] Alexandratos 1988: 16-17.

[172] Cited in Rosegrant and Hazell 2000: 292.

[173] Scherr 1999: 16.

[174] Scherr 1999: 16.

[175] "Will Sprawl Gobble Up America's Land? Federal Data Reveal Development's Trivial Impact" Ronald D. Utt. Backgrounder #1556 http://www.heritage.org/Research/SmartGrowth/BG1556.cfm.

[176] Vesterby and Krupa 1997.

[177] http://www.ers.usda.gov/Briefing/LandUse/urbanchapter.htm

[178] http://www.ers.usda.gov/Briefing/LandUse/urbanchapter.htm

[179] Crosson and Anderson 2002: 7.

[180] FAO 2002: 41.

[181] Scherr 1999: 16.

[182] Crosson and Anderson 2002: 7.

[183] Because of their structure and composition, tropical soils tend to have lower inherent fertility and to be more susceptible to degradation. They also face greater degradation pressure from the climate: the higher temperatures, wider extremes of rainfall and greater rainfall intensity typical of the tropics lead to leaching of nutrients and faster decomposition of organic matter.

[184] Dyson 1996: 145.

[185] FAO 2002: 43.

[186] Crosson and Anderson 1992: 34.

[187] Pingali and Rosegrant 2001: 385, 386.

[188] Sanmuganathan 2000: iv.

[189] Smil 2000: 76-77.

[190] Conway 1997: 253.

[191] Gruin et al. 2000

[192] Gruin et al. 2000

[193] World Bank 2003: 12.

[194] Singh 2001.

[195] Cosgrove and Rijsberman 2000: xx. We can assume that all pumped ground water is for human use.

[196] FAO 2002: 4.

[197] World Commission on Dams 2000.

[198] UNESCO 1999 table 8 and World Bank 2003: 12.

[199] Sanmuganathan 2000: 43, 70.

[200] World Water Vision Commission (undated): 13.

[201] World Water Vision Commission (undated): 13.

[202] Cosgrove and Rijsberman 2000: table 1, xxii.

[203] Cosgrove and Rijsberman 2000: 40.

[204] Smil 2000: 127-130.

[205] Gleick et al. 2002: 20.

[206] Smil 2000: 126-127, 132.

[207] Gleick et al. 2002: 20.

[208] FAO 2003: 49.

[209] Smil 2000: 132.

[210] Smil 2000: 132.

[211] WRI 1994.

[212] Cosgrove and Rijsberman 2000: 9.

[213] California Energy Commission, undated.

[214] Gleick et al. 2002: 4.

[215] Fumento 2003: 302; See http://www.qinetiq.com/

[216] 'Advanced Water Treatment Technologies May Bring Purified Water to San Diego', http://www.gewater.com/library/tp/922_Advanced_Water.jsp

[217] Sanmuganathan 2000: 91.

[218] Pacific Institute for Studies in Development, Environment and Security 1999: 10.

[219] Cosgrove and Rijsberman 2000: 41.

[220] Water Science and Technology Board 2004:12.

[221] Semiat 2000: 54.

[222] Desalination and Water Purification Technology Roadmap 2003: 52.

[223] http://www.membranes.com/; http://www.water-technology.net/projects/tampa/; Beck 2002: 1.

[224] San Diego County Water Authority http://www.sdcwa.org/manage/sources-desalination.phtml

[225] Buros 2000: 5.

[226] Beck 2002: 6f.

[227] Beck 2002: 10.

[228] Glueckstern (undated)

[229] Beck 2002: 10.

[230] Semiat 2000: 63.

[231] http://www.aquasonics.com/index.html

[232] PRRC Biannual Newsletter Volume 16, No. 1/Winter 2000-2001, Petroleum Recovery Research Center http://baervan.nmt.edu/

[233] Desalination and Water Purification Technology Roadmap 2003: 56.

[234] Risbud 2006.

[235] Cosgrove and Rijsberman 2000: xxii.

[236] 2500 km3 of water weighs about 2.5 trillion tonnes.
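This assumes a density of about one tonne per cubic metre: 2,500 cubic kilometres is 2.5 trillion cubic metres, and hence about 2.5 trillion tonnes.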

[237] "North American Company Patents Ice Berg Towing" The World Today - 22-11-01.

[238] Rosegrant and Hazell 2000: 312.

[239] Wood et al. 2000: 69.

[240] Smale, M. et al. 2001: 6.

[241] CGIAR and Global Environment Facility 2002: 6.

[242] Rosegrant and Hazell 2000: 312.

[243] Smil 2000: 171-2.

[244] Conway 1997: 276.

[245] Vannuccini 2003.

[246] Alexandratos 1988: 247.

[247] Smil 2000: 12.

[248] Projection of World Fishery Production in 2010 http://www.fao.org/fi/highligh/2010.asp

[249] http://usinfo.state.gov/journals/ites/0103/ijee/trends.htm

[250] Wood et al. 2000: Introduction/3.

[251] Smil 2000: 175.

[252] "Fish Farming the Promise of a Blue Revolution" The Economist Aug 7th 2003.

[253] Genetic Modification of Aquatic Organisms for Aquaculture, SeaWeb Aquaculture Resources http://www.seaweb.org/resources/aquaculturecenter/documents/Aquaculture.GMOD.pdf

[254] "Fish Farming the Promise of a Blue Revolution" The Economist Aug 7th 2003.

[255] Smil 2000: 161.

[256] "The Blue Revolution a New Way to Feed the World." The Economist Aug 7th 2003

[257] "Fish Farming the Promise of a Blue Revolution" The Economist Aug 7th 2003.

[258] Smil 2000: 50.

[259] USGS 2006: 125.

[260] USGS 2006: 125.

[261] Lomborg 2001: 144.

[262] USGS 2006: 125.

[263] USGS 2006: 165.

[264] USGS 2006: 165.

[265] USGS 2006: 129.

[266] USGS 2006: 129.

[267] Smil 2000: 50.

[268] Avery 1995: 230.

[269] Smil 2000: 50.

[270] Avery 1995: 170.

[271] Avery 1995: 171-2.

[272] Avery 1995: 171-2.

[273] Avery 1995: 30.

[274] CIA Fact Book, 2004 estimate.

[275] Average income here refers to GDP per capita not income per worker. The figures used are for 2004 and come from the on-line CIA Fact Book.

[276] "The Great Divide," The Economist, Mar 3rd 2005

[277] World Bank, 2006: 8.

[278] World Bank 2006: 8.

[279] World Bank 2006: 8.

[280] World Bank 2006: 8; The Groningen Growth and Development Centre: http://www.ggdc.net/dseries/Data/TED05II .xls

[281] The Groningen Growth and Development Centre: http://www.ggdc.net/dseries/Data/TED05II .xls

[282] IEA. 2006a: 48. Primary energy is the energy content of a resource at the point of extraction. In the case of coal, geothermal and uranium it is their thermal energy prior to their being converted into electricity, and oil prior to refining. For wind, solar panels and hydropower it is the electricity produced.

[283] The rich countries are here defined as OECD members except for the Czech Republic, Hungary, Mexico, Poland, the Slovak Republic and Turkey. In 2004 their per capita consumption was 5.49 toe. For the poorer countries per capita consumption was 1.13 toe. See IEA 2006a: 48‑57.

[284] EIA 2006: 1.

[285] EIA 2005: 1.

[286] The US per capita level is 7.91 toe. See IEA 2006a: 57.

[287] EIA 2006: 83.

[288] IEA 2006a: 6.

[289] IEA. 2006a: 6.

[290] US Geological Survey World Petroleum Assessment 2000 - Description and Results, US Geological Survey Digital Data Series 60.

[291] In 2005 we consumed 29.6 billion barrels. See BP 2006: 8.

[292] Key advocates of this view are Colin J. Campbell, Kenneth S. Deffeyes and Jean Laherrere.

[293] One of the more renowned optimists is Michael Lynch of Strategic Energy & Economic Research Inc.

[294] The term 'oil' has, to date, been synonymous with conventional crude oil, a liquid mixture of hydrocarbons that percolates through porous strata and flows readily up drilled boreholes.

[295] Alberta Chamber of Resources 2004: 41.

[296] Alberta Chamber of Resources 2004: chapter 4; Athabasca Oil Sands Web Page: http://collections.ic.gc.ca/oil/index1.htm

[297] National Energy Board 2004: 4.

[298] Natural Resources Canada: http://www2.nrcan.gc.ca/es/ener2000/online/html/chap3a_e.cfm

[299] US Geological Survey, "Heavy Oil and Natural Bitumen - Strategic Petroleum Resources", USGS Fact Sheet FS-070-03, August 2003.

[300] Email communications with Alberta oil sands executive.

[301] Bunger et al. 2004.

[302] A figure of 2.6 trillion bbls. is given by Bunger et al. 2004. A figure of 3.5 trillion bbls. is given by Williams 2003.

[303] Williams 2003: 20.

[304] Bunger et al. 2004.

[305] IEA 2001a: 52.

[306] Jalonick, Mary Clare, "Bill contains incentives for new coal-conversion plant", Associated Press, 30 September 2005.

[307] Iran Daily, "Indians Look to Make Oil From Coal", July 30 2005. http://www.iran-daily.com/1384/2336/html/energy.htm

[308] IEA 2006a: 6, 24.

[309] IEA 2003: table 6.8.

[310] Goldemberg 2000: 148.

[311] IEA 2003.

[312] Goldemberg 2000: 148, citing BGR (Bundesanstalt fur Geowissenschaften und Rohstoffe [Federal Institute for Geosciences and Natural Resources]). 1998. Reserven, Ressourcen und Verfugbarkeit von Energierohstoffen 1998 [Availability of Energy Reserves and Resources 1998]. Hanover, Germany.

[313] IEA 2006a: 6

[314] EIA: http://www.eia.doe.gov/pub/international/iea2003/table81.xls

[315] US Geological Survey World Petroleum Assessment 2000 - Description and Results, US Geological Survey Digital Data Series 60; International Energy Agency, World Energy Outlook 2001, p. 142.

[316] USGS, Coal-Bed Methane: Potential and Concerns, USGS Fact Sheet FS-123-00 October 2000.

[317] USGS, Coal-Bed Methane: Potential and Concerns, USGS Fact Sheet FS-123-00 October 2000.

[318] US Geological Survey Energy Resource Surveys Program, Coalbed Methane - An Untapped Energy Resource and an Environmental Concern, USGS Fact Sheet FS-019-97.

[319] Kuuskraa and Bank 2003: 34.

[320] Kuuskraa and Bank 2003: 34.

[321] Bundesministerium fur Wirtschaft und Arbeit 2002. Reserves, Resources and Availability of Energy Resources 2002, short version, page 9.

[322] World Energy Council, New Technology for Tight Gas Sands, http://www.worldenergy.org/wec-geis/publications/default/tech_papers/17th_congress/2_1_16.asp

[323] Goldemberg 2000: 147.

[324] United States Geological Survey, Natural Gas Hydrates - vast resource, uncertain future, USGS Fact Sheet FS–021–01, March 2001

[325] United States Geological Survey, Natural Gas Hydrates - vast resource, uncertain future, USGS Fact Sheet FS-021-01, March 2001.

[326] IEA 2001a: 397.

[327] United States Geological Survey, Natural Gas Hydrates - vast resource, uncertain future, USGS Fact Sheet FS-021-01, March 2001.

[328] US Congress, Report of the Methane Hydrate Advisory Committee on Methane Hydrate Issues and Opportunities Including Assessment of Uncertainty of the Impact of Methane Hydrate on Global Climate Change, December 2002, p.8.

[329] Coal contains about 80 percent more carbon per unit of energy than gas does, and oil contains about 40 percent more. Congressional Budget Office 2003: 11.

[330] Mahlman 2001: 10.

[331] Mahlman 1998: 88; Mahlman 2001: 8.

[332] Lomborg 2001: 269.

[333] Lewis Jr 2004: 6.

[334] De Freitas 2002: 304-6

[335] De Freitas 2002: 305; Essex and McKitrick 2002: 139.

[336] Lewis Jr 2004: 7.

[337] De Freitas 2002: 306.

[338] http://www.worldclimatereport.com/index.php/2005/03/03/hockey-stick-1998-2005-rip/

[339] Soon et al. 2001: 11.

[340] Michaels 2004: 232.

[341] Michaels 2004: 232

[342] Reuters October 26, 2002

[343] Doran et al. 2002.

[344] Virtual Climate Alert May 21, 2002 Vol. 3, No. 16

[345] Joughin and Tulaczyk, 2000.

[346] Pittock 2003: 35; Mahlman 2001: 17.

[347] Cited by Michaels 2004: 55.

[348] Cited by Michaels 2004: 60.

[349] Michaels 2004: 60.

[350] Cited by Michaels 2004: 58-59.

[351] Chylek 2004: 201.

[352] Michaels 2004: 6.

[353] Lewis Jr 2004: 22.

[354] Michaels 2004: 37-8.

[355] Michaels 2004: 38-9.

[356] http://www.the-south-asian.com/Aug2004/Gangotri_glacier.htm

[357] "The Great Himalayan Snow Job" March 17, 2005 http://www.worldclimatereport.com/index.php/2005/03/17/the-great-himalayan-snow-job/

[358] Michaels 2004: 94.

[359] Sherwood and Idso 2004: 44, 47.

[360] http://www.aosb.org/PDF/OPP_final_report_to_AOSB.pdf

[361] Michaels 2004: 42-43.

[362] Michaels 2004: 33.

[363] IPCC. 2001: 33.

[364] Lewis Jr 2004: 9.

[365] Michaels 2004: 118.

[366] Michaels 2004: 172-3.

[367] Romanovsky et al. 2002.

[368] Kasper and Allard 2001.

[369] http://www.co2science.org/scripts/CO2ScienceB2C/subject/c/
carbongrasslands.jsp

[370] Correspondence Nature 428, 601 (April 8, 2004)

[371] Richard Seager of Columbia's Lamont-Doherty Earth Observatory and David Battisti of the University of Washington. See Tobin, Mary. Columbia Research Dispels 150 Years of Thinking - Mild Winter Conditions in Europe Are Not Due to the Gulf Stream, Columbia News, Feb 05, 2003. http://www.columbia.edu/cu/news/03/02/richardSeager_research.html

[372] Stern 2006: 170.

[373] At the time of writing, methane levels in the atmosphere appear to have stabilized.

[374] Stern 2006: 170. Stern gives a figure of 42 gigatonnes of CO2, which equals 11.45 GtC. 57 per cent of that figure gives 6.53 GtC.
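The conversion uses the carbon-to-CO2 mass ratio of 12/44: 42 x 12/44 is approximately 11.45 GtC, and 0.57 x 11.45 is approximately 6.53 GtC.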

[375] Lackner et al. undated: 4.

[376] Loftus 2003: B.5.A.

[377] Australian Coal Association 2004: 29.

[378] Lackner et al. undated: 6; American Energy Independence Web Site, "CO2 Recycling Capturing Carbon Dioxide Directly from the Air" http://www.americanenergyindependence.com/recycleco2.html

[379] Lackner et al. 1999; American Energy Independence Web Site, "CO2 Recycling Capturing Carbon Dioxide Directly from the Air" http://www.americanenergyindependence.com/recycleco2.html

[380] Project Facts, US Department of Energy, National Energy Technology Laboratory, "Recovery & sequestration of co2 from stationary combustion systems by photosynthesis of microalgae", 11/2003.

[381] Goldemberg 2000: 289.

[382] Goldemberg 2000: 289.

[383] Goldemberg 2000: 289.

[384] Yegulalp and Lackner 2004.

[385] Lackner 2003; Winters 2003.

[386] Herzog et al. undated: 4.

[387] Goldemberg 2000: 276.

[388] World Energy Council 1994: 77.

[389] Rushing 2001.

[390] It is called the Australian Solar Tower Project and at the time of writing was at the final feasibility stage.

[391] The technology is being developed and tested by Oak Ridge National Laboratory in the US http://www1.eere.energy.gov/solar/solar_lighting.html

[392] In 2004 world electricity production was 15,985 TWh and per capita consumption in rich countries averaged 9,710 kWh. See IEA 2006a: 48. The rich countries are here defined as OECD members except for the Czech Republic, Hungary, Mexico, Poland, the Slovak Republic and Turkey.

[393] http://energy.saving.nu/solarenergy/energy.shtml.

[394] KPMG, Bureau voor Economische Argumentatie, 1999.

[395] IEA 2006a: 55. There are a billion kWh to a TWh.

[396] http://www.iea.org/Textbase/stats/

[397] http://www.iea.org/Textbase/stats/

[398] The Dutch figure is 6823 kWh. See IEA 2006a: 55.

[399] 10 per cent of 1,505, multiplied by 20, gives 3,010, which is 31 per cent of 9,710.

[400] IEA 2006b: 11.

[401] Renewable Energy Policy Project, 2003: 5.

[402] Grubb and Meyer 1994 and World Energy Council 1994.

[403] "Study of Offshore Wind Energy in the EC", Garrad Hassan & Germanischer Lloyd, 1995, cited in European Wind Energy Association and Greenpeace undated.

[404] Hagerman 2001: 1

[405] World Energy Council 2001a; Fredriksson 2003: 4.

[406] World Energy Council. 1994.

[407] Thorpe 1999: 13.

[408] Thompson et al. 2001 or later: 6.

[409] Fredriksson 2003: 4.

[410] Ocean Power Delivery Ltd: http://www.oceanpd.com/default.html

[411] IEA 2006a: 18.

[412] Goldemberg 2000: 155.

[413] This section draws for its information entirely on Goldemberg 2000: chapter 5.

[414] IEA 2006a: 24 and 6. The primary energy equivalent of nuclear electricity is calculated by assuming a 33.3 per cent conversion efficiency from heat to electricity. See IEA 2006a: 59.

[415] Italy imports from France electricity produced by nuclear power.

[416] http://www.uic.com.au/reactors.htm

[417] Hore-Lacy 2000.

[418] http://www.uic.com.au/reactors.htm. As well as those used for commercial energy generation, there are about 280 small reactors used for research and for producing isotopes for medicine and industry, and over 400 small reactors powering ships, mostly submarines. See Hore-Lacy 1999: v.

[419] NEA 2001: 15.

[420] http://www.uic.com.au/reactors.htm

[421] NEA 2001: 138.

[422] IEA 2001b: 130.

[423] Cohen 1990: 163-164.

[424] http://www.world-nuclear.org/info/inf08.htm; Hoffman 2001.

[425] Hoffman 2001.

[426] Wardell 2001.

[427] Abraham 2002: 5; Wardell 2001.

[428] Grimston and Beck 2000: 29.

[429] UIC 2006a.

[430] 441 reactors currently produce 28.78 EJ (687 Mtoe) of primary energy (electricity divided by 0.333).
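As a rough check, using the standard conversion of 1 Mtoe to about 0.0419 EJ: 687 x 0.0419 is approximately 28.8 EJ of primary energy, which corresponds to roughly 28.8 x 0.333, or about 9.6 EJ, of electricity.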

[431] NEA 2004: 13-22.

[432] http://www.uic.com.au/reactors.htm

[433] NEA 2004: 20.

[434] This was only $95 million in 2002, NEA 2004: 9.

[435] NEA 2004: 20.

[436] Garwin and Charpak 2001: 166.

[437] NEA 2001: 30.

[438] UIC 2003.

[439] NEA 2004: 22.

[440] NEA 2004: 22.

[441] NEA 2004: 22.

[442] NEA 2004: 22.

[443] NEA 2004: 22.

[444] http://users.rcn.com/jkimball.ma.ultranet/BiologyPages/E/
Elements.html

[445] Hore-Lacy 1999: 39.

[446] This follows from the fact that the current 6.5 per cent share requires 65,000 tonnes of uranium.

[447] Daley 1997: 60.

[448] Cohen 1990: 183.

[449] American Nuclear Society 2001: 7.

[450] Sims 1990: 40-41.

[451] Garwin and Charpak 2001: 85.

[452] Walker 2000: 48.

[453] IEA 2001b: 171.

[454] Cohen 1990: 114.

[455] van der Zwaan et al. 1999: 20.

[456] Hore-Lacy 1999: 59.

[457] Cohen 1982: 73; Walker 2000: 141; Hodgson 1999: 64.

[458] Sims 1990: 85.

[459] Sims 1990: 85.

[460] Cohen 1990: 69.

[461] Rutherford 2002b.

[462] Rutherford 2002a: 19.

[463] GAO 2000: 10.

[464] Kaku and Trainer 1982: 29.

[465] Cohen 1998: 525.

[466] Chernobyl Forum 2003-2005: 7.

[467] Hodgson 1999: 68.

[468] Hodgson 1999: 68.

[469] Oliver 2001.

[470] Kursunoglu 1998: 39.

[471] Hodgson 1999: 63-65.

[472] American Nuclear Society 2001: 7.

[473] Hodgson 1999: 65.

[474] Chen 2004.

[475] 300 millirems in the US.

[476] van der Zwaan et al. 1999: 259; Cohen 1990: 205.

[477] Walker 2000: 52; Sims 1990: 40.

[478] Walker 2000: 31; Sims 1990: 40.

[479] Sims 1990: 243. Walker 2000: 117.

[480] Cohen 1990: 54.

[481] Walker 2000: 139.

[482] Walker 2000: 139.

[483] Sims 1990: 108.

[484] Garwin and Charpak 2001: 171-172.

[485] Hodgson 1999: 79.

[486] Kaku and Trainer 1982: 82-83.

[487] Nero 1982: 87-88.

[488] Cohen 1990: 86-89.

[489] Cohen 1990: 80.

[490] Cohen 1990: 86-89.

[491] Cohen 1990: 77.

[492] Cohen 1990: 86-89.

[493] UIC 2006b.

[494] http://www.uic.com.au/nip20.htm

[495] UIC 2006b.

[496] Hoffman 2001.

[497] Wardell 2001.

[498] IEA 2001b: 175.

[499] IEA 2001b: 197.

[500] Sims 1990: 175.

[501] Hore-Lacy 1999: 32.

[502] Sims 1990: 164.

[503] IEA 2001b: 197.

[504] Hore-Lacy 1999: 56.

[505] Holt 2003: CRS-5.

[506] IEA 2001b: 195.

[507] Hodgson 1999: 70-2.

[508] Holt 2003: CRS-6.

[509] Holt 2003: CRS-6.

[510] Hore-Lacy 1999: 46.

[511] IEA 2001b: 196.

[512] http://www.nrc.gov/waste/low-level-waste.html

[513] Cohen 1990: 206.

[514] IEA 2001b: 196.

[515] IEA 2001b: 51.

[516] Hodgson 1999: 73.

[517] http://www-ns.iaea.org/appraisals/west-kara.htm

[518] Cohen 1990: 179.

[519] Cohen 1990: 179.

[520] Cohen 1990: 179.

[521] Cohen 1990: 184.

[522] Cohen 1990: 184.

[523] Cohen 1990: 184.

[524] Hore-Lacy 1999: 48.

[525] Cohen 1990: 184.

[526] "Nuclear Energy Industry Salutes Senate for Approving Yucca Mountain." PR Newswire July 9, 2002.

[527] Cohen 1990: 221.

[528] World Energy Council 2001b

[529] US Geological Survey, Geothermal Energy: Clean Power from the earth's heat, Circular 1249, 2003, p. 17.

[530] World Energy Council 2001b.

[531] World Energy Council 2001b.

[532] Energy and Geoscience Institute University of Utah 2001 or later: 4.

[533] Energy and Geoscience Institute University of Utah 2001 or later: 6.

[534] US Geological Survey, Geothermal Energy: Clean Power from the earth's heat, Circular 1249, 2003 p. 21.

[535] http://www1.eere.energy.gov/ba/pdfs/geo_hotdry_rock.pdf, page 3-40.

[536] Mock et al. 1997.

[537] Mock et al. 1997.

[538] Armstead and Tester 1987: 51-52.

[539] Armstead and Tester 1987: 56.

[540] The estimate takes into account the fact that hotter countries would have no use for low grade heat for space heating and would only be interested in rocks hot enough for electricity.

[541] Mock et al.1997 Table 1.

[542] Armstead and Tester 1987: 44.

[543] The 10 kilometers beneath each square kilometer provides 0.0215 quads for a 1°C drop in temperature and so 0.001075 quads for a 0.05°C drop. Dividing 445 by 0.001075 gives 413,953.

[544] Duchane 1996: 3.

[545] Duchane 1996: 4.

[546] This is using the USGS's concept of reserve base which includes those resources that are currently economic (reserves), marginally economic, and some of those that are currently uneconomic. The uneconomic part would require some rise in prices or the adoption of some technology improvements. See USGS 2006: 195.

[547] Present iron ore and bauxite reserves would last into the 2080s or 90s with an average annual growth rate of 2 per cent. This would provide a 6.5-fold increase in annual consumption, which would be more than enough to ensure abundance in a world of 10 billion people. Potash and phosphate, which are used in fertilizer, have present reserves that would last into the next century at that growth rate. However, it is likely that food abundance will not require such a large increase. See USGS 2006.

[548] Lomborg 2001: 140.

[549] Lomborg 2001: 141.

[550] Lomborg 2001: 142 and 145.

[551] USGS 2006: 129.

[552] USGS 2006: 125.

[553] USGS 2006: 183.

[554] http://www.epa.gov/airtrends/2005/econ-emissions.htm

[555] http://www.epa.gov/airtrends/sixpoll.html

[556] http://www.epa.gov/airtrends/pmreport03/pmlooktrends_2405.pdf.

[557] http://www.epa.gov/airtrends/2005/econ-emissions.html

[558] http://reports.eea.eu.int/topic_report_2003_4/en/
Topic_4_2003_web.pdf

[559] http://europa.eu/scadplus/leg/en/lvb/l28159.htm

[560] Depletion of oxygen in a nutrient-rich body of water by growth of too much plant life, leading to death of animal life.

[561] http://www.the-river-thames.co.uk/environ.htm

[562] International Commission for the Protection of the Rhine 2004.

[563] http://www.nyc.gov/html/dep/html/news/hwqs.html

[564] http://www.epa.gov/glnpo/glindicators/fishtoxics/topfisha.html

[565] http://www.epa.gov/glnpo/collaboration/taskforce/factsheet.html

[566] Lomborg 2001: 194.

[567] Lomborg 2001: 195.

[568] International Tanker Owners Pollution Federation, http://www.itopf.com/stats.html

[569] Beijing, March 12 (Xinhuanet).

[570] Xinhua News Agency, October 19, 2004

[571] http://www.thewaterpage.com/ganges.htm#Pollution

[572] Xinhua News Agency, March 23, 2005

[573] http://www.chinaenvironment.net/sino/sino5/page12.html

[574] Xinhua News Agency, March 23, 2005

[575] http://siteresources.worldbank.org/INTRES/Resources/
AirPollutionConcentrationData2.xls

[576] http://www.who.int/indoorair/en/

[577] This paragraph relies on Lomborg 2001: 191-192.

[578] http://response.restoration.noaa.gov/bat2/recovery.html

[579] Lomborg 2001: 180.

[580] According to FAO 1997: 21, "the widespread death of European forests due to air pollution which was predicted by many in the 1980's did not occur." Cited in Lomborg 2001:180.

[581] Lomborg 2001: 217.

[582] Lomborg 2001: 217.

[583] Edwards et al. 2005.

[584] American Cancer Society, Breast Cancer Facts and Figures 2005-2006 http://www.cancer.org/downloads/STT/CAFF2005BrF.pdf

[585] Lomborg 2001: 221.

[586] American Cancer Society, Breast Cancer Facts and Figures 2005-2006 http://www.cancer.org/downloads/STT/CAFF2005BrF.pdf

[587] Lomborg 2001: 225

[588] Lomborg 2001: 225, citing NCI statistics.

[589] Colborn, T. et al. 1996.

[590] Safe 1999: 193-194.

[591] Safe 1999: 193.

[592] Safe 1999: 191-192.

[593] Lomborg 2001: 238-241.

[594] Lomborg 2001: 240.

[595] Safe 2001: 198.

[596] Lomborg 2001: 241.

[597] FAO 2005: 137.

[598] FAO 2005: table 3.

[599] Depletion figures derived from FAO 2006: table 4.

[600] Hollander 2003: 128.

[601] FAO 2006: table 4.

[602] http://www.fao.org/DOCREP/003/X6953E/X6953E05.htm

[603] Martin 1999: 207 suggests a range of 5-10 million current species, with a greater likelihood of being closer to 5 than 10. Here is another view: "For the more conspicuous birds and mammals, the number of species is known quite accurately, both for tropical species as well as temperate ones. It is estimated that at least 98 per cent of birds have been discovered. For birds there are 2-3 times as many tropical species as temperate ones. For other organisms most of the named species (1.4 million) are from temperate countries. If we assume that the same factor applies to other organisms as to birds, then there are 2-3 times this many tropical species (2.8-4.2 million), giving an estimated total species of 4.2-5.6 million." http://darwin.bio.uci.edu/~sustain/bio65/lec10/b65lec10.htm#_Number_of_Species

[604] Lomborg 2001: 250.

[605] Lomborg 2001: 252.

[606] Conservation International, Biodiversity Hotspots, Atlantic Forest: http://www.biodiversityhotspots.org/xp/Hotspots/atlantic_forest/conservation.xml

[607] Lomborg 2001: 254.

[608] Easterbrook 1995: 559.

[609] Martin 1999: 208-210.

[610] Simons 1996: 446 citing Reid and Miller 1989.

[611] Lomborg 2001: 250 gives a figure of 4,500 mammals and 9,500 birds; 1/14,000 multiplied by 10 million equals 714.

[612] Reporting on the work of UC Berkeley geologist James Kirchner, http://www.rainforests.net/diversification.htm

[613] Lomborg 2001: 249.

[614] Ayittey 1992: 105.

[615] Ayittey 1992: 112

[616] Ayittey 1992 (Africa Betrayed): 252.

[617] Ayittey 1998: 144-145

[618] Ayittey 1992: 342

[619] Ayodele 2005: 2.

[620] Ayodele 2005: 1.

[621] Ayittey 1992: 255

[622] Ayittey 1992: 245

[623] Ayodele 2005: 3.

[624] Ayittey 1992: 239

[625] Ayittey 2002: 9.

[626] Hope and Chikulo 1999: 122.

[627] Ayittey 1992: 19.

[628] Ayittey 1998: 177.

[629] Ayittey 1992: 15.

[630] Ayittey 1992: 24.

[631] "Sub-Saharan Africa Survey" The Economist Jan 15th 2004.

[632] Ayittey 1992: 31.

[633] "Sub-Saharan Africa Survey" The Economist Jan 15th 2004.

[634] "Sub-Saharan Africa Survey" The Economist Jan 15th 2004.

[635] Ayittey 2002: 10.

[636] The Economist, 20 Oct 2005

[637] The total foreign debt of SSA governments today stands at $350 billion (Ayittey 2004: 2), and of this about half is owed by the 34 Sub-Saharan countries described as Heavily Indebted Poor Countries, the real basket cases (Cato Institute 2005: 698). The vast bulk of the debt is owed to Western governments and multilateral financial and development institutions such as the World Bank, the IMF and the UNDP. Currently debt service obligations absorb a large proportion of export revenue.

[638] These include Bono and Bob Geldof.

[639] http://nces.ed.gov/programs/digest/d04/tables/dt04_008.asp

[640] http://nces.ed.gov/programs/digest/d04/tables/dt04_185.asp

[641] National Center for Education Statistics 2005: 161.

[642] http://nces.ed.gov/programs/digest/d04/tables/dt04_362.asp

[643] http://www.bls.gov/news.release/ocwage.t01.htm

[644] www.oecd.org/dataoecd/32/26/33710913.xls

[645] This is called the Flynn Effect. See http://www.wired.com/wired/archive/13.05/flynn_pr.html

[646] It is, of course, problematic to talk about socialism having employment, full or otherwise, given that workers are now owners rather than employees. The term is used here as a matter of convenience, given the difficulty of coming up with a more suitable word.

[647] In the mathematics of game theory, this is an example of the so-called prisoner's dilemma problem where the 'rules of the game' are such that each individual player is forced to adopt a non-cooperative strategy that delivers to them an outcome that is inferior to the one they would receive in a 'game' that enforced a cooperative strategy.
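A minimal illustration of this structure, with hypothetical payoffs (in Python):

# Payoffs are (row player, column player); larger numbers are better.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Whatever the other player does, defecting pays the individual more...
for other in ("cooperate", "defect"):
    assert payoffs[("defect", other)][0] > payoffs[("cooperate", other)][0]

# ...yet mutual defection (1, 1) leaves both players worse off than mutual cooperation (3, 3).
assert payoffs[("defect", "defect")] < payoffs[("cooperate", "cooperate")]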

[648] At the other end of the spectrum, people with abilities in high demand will get what is effectively a negative subsidy. Organizations will bid a "shadow" wage for the labor based on the value they place on it. However, the worker should only receive what is required to induce him or her into that position. Economists refer to the difference as a rent, which can be taxed without affecting economic decisions.
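A hypothetical illustration: if an organization's shadow bid for a position is 80 and 50 would suffice to attract a suitable worker, the rent is 80 - 50 = 30; taxing that 30 away changes neither the organization's decision to fill the position nor the worker's decision to take it.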

[649] http://www.bls.gov/news.release/ocwage.t01.htm

[650] Miller et al. 1996.