BRIGHT FUTURE: Abundance and Progress in the 21st Century

David McMullen

 

 

First published in 2007

Copyright © David McMullen 2007

http://brightfuture21c.wordpress.com/

 

ISBN 978-0-646-46832-7.

303.49

 


CONTENTS

INTRODUCTION AND SUMMARY

CREATING FOOD ABUNDANCE

Better Plants and Livestock

Better Plants

Improved Livestock and Poultry Production

Defending GM Food

Food Safety

Environmental Scares

Environmental Benefits

Crop Lands, a Declining Resource?

Extent of the Resource

Soil Degradation

Summing up on Land

Water

Harnessing More of the Resource

Depletion and Degradation

More Efficient Use

Competition from Non-Agricultural Uses

Non-Conventional Water Resources

Genetic Base

Fisheries

Non-Renewable Resources

Nitrogen Fertilizer

Phosphate

Potassium

Fuel for Farm Machinery

"Alternative" Agriculture is No Such Thing

PLENTY OF RESOURCES

Aiming for Global Affluence

Our Energy Needs

Fossil Fuels

Oil

Coal

Natural Gas

Fossil Fuel as a Whole

CO2 Emissions and Global Warming

Uncertainties

Alarmism

What about Eco-Catastrophes?

Business as Usual for Now

Adapting to any Climate Change

CO2 Capture and Storage

Solar Energy in its Various Forms

Direct Solar

Wind

Waves

Hydroelectric Power

Biomass

Other Possible Resources

Summing up on Solar

Nuclear Power without the Phobia

Resources

The Safety of Nuclear Power

Concluding Comments on Nuclear

Geothermal Energy

Energy Overall

Minerals and Other Raw Materials

Tidying the Nest - Our Effect on the Environment

Air Pollution

Water Pollution

Pollution and Development

Pollution Scares

Loss of Forests

Mass Extinctions

CAPITALISM, THE TEMPORARY TOOL OF PROGRESS

Introduction

Soviet Hangover

Africa: More Capitalism Please

Capitalism Outgrows Itself

What about the "Communist" Countries?

Economic Calculation without Capitalism

Collective Ownership will be More Efficient

Leaping into the Unknown

ABBREVIATIONS

REFERENCES

NOTES

 


1

INTRODUCTION AND SUMMARY

Gloom reigns supreme. Any thought of progress is scoffed at. According to the received wisdom, the earth's "carrying capacity" will not permit global prosperity and "human nature" guarantees that any attempt to advance beyond capitalism will end in tears. Challenging these grim prognoses requires a "technofix" approach, and that is what the reader will find in the following pages.

The planet's capacity to comfortably accommodate us is limited only by the application of human ingenuity, something we are never going to run out of. Food production can be increased by making better use of land and water resources, modernizing backward agriculture, and developing higher yielding and more resilient varieties of crops and livestock. Our increasing energy needs can be met from an array of old and new resources. The fossil fuels - coal, oil and gas - on which we presently rely so very heavily, are ample enough, with the application of better methods of extraction and processing, to continue playing a major role for quite some time, and they can do so while keeping CO2 emissions within reasonable limits. In the longer run other energy resources will take on a greater importance, as their technologies develop and their costs decline. The options in view include sun, wind and wave, as well as uranium and thorium for nuclear power, and the geothermal energy beneath our feet. Then there are others we can only dimly foresee, if at all. At the same time, we will find all the raw materials we need to produce ever increasing quantities of goods and services. Most of these materials are in great abundance and are bound to become cheaper with new methods and new opportunities to substitute less costly for more costly ones.

We can get what we want without threatening the biosphere's "life support systems". While our impact on the natural environment is extensive, it is nothing compared with the battering that the earth withstands on a regular basis from super volcanoes, meteors and ice ages. Furthermore, progress leads to cleaner technologies and better knowledge of how to conserve and manage ecosystems.

We will definitely be making increasing use of our large and expanding carrying capacity as the economies of developing countries continue to grow, albeit patchily. By mid century the number of countries and proportion of the world's population in the affluent category will have increased significantly. Others will follow later in the century with some stragglers such as Sub-Saharan Africa taking until early in the 22nd century.

As the world's population increases from its present 6.5 billion to 9 or 10 billion in the second half of the century (at which point it is expected to plateau, at least temporarily), a 2.5 to 3 fold increase in grain production will provide everyone with all the food they need, including produce from grain-fed livestock. This can be achieved by mid century with an average annual production growth rate of 2 per cent. A slower rate would only mean a delay of several decades.
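
As a rough check on this arithmetic, compound growth at 2 per cent a year multiplies output roughly 2.4 fold over 45 years. A minimal sketch (the growth rate is from the text; the time horizons are illustrative):

    # Fold increase in grain output from 2 per cent compound annual growth.
    # Time horizons are illustrative.
    for years in (30, 45, 50):
        fold = 1.02 ** years
        print(f"{years} years at 2% a year: {fold:.2f}-fold")
    # 45 to 50 years of 2 per cent growth gives roughly a 2.4 to 2.7 fold
    # increase, consistent with reaching 2.5 to 3 fold around mid century.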

As the century progresses an increasing proportion of the developing world will reach the per capita energy consumption levels presently achieved in the rich countries.[1] Bringing a world of 9 billion people up to the current rich country per capita average requires a 4.5 fold increase in total energy production. For a world of 10 billion, a 5 fold increase is required. These increases could easily be achieved this century if we maintain the growth rates seen in recent times and those expected in the next few decades. We can expect raw material needs to grow at a similar pace given that raw materials are used to build the industries, infrastructure, motor vehicles and homes that use the energy.
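
The required multiples follow from a simple ratio of target to current production. A back-of-envelope sketch, using illustrative round numbers rather than sourced figures:

    # Fold increase in total energy production needed for everyone to reach
    # the current rich country per capita average. Both figures below are
    # illustrative assumptions, not sourced data.
    rich_per_capita = 5.5     # tonnes of oil equivalent per person per year (assumed)
    world_total_now = 11e9    # tonnes of oil equivalent per year (assumed)
    for population in (9e9, 10e9):
        multiple = population * rich_per_capita / world_total_now
        print(f"{population / 1e9:.0f} billion people: {multiple:.1f}-fold increase")
    # Prints roughly 4.5 and 5.0, matching the figures in the text.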

We can expect the demand on resources from the countries that are already rich to decline in relative importance. Their food consumption will stabilize given that their population is not expected to grow much beyond its current level of around one billion and satiation levels have generally been achieved. Being at the technology frontier, their economies will grow more slowly than those of catch-up countries. Also, their stage of development and static population mean less expansion of energy intensive production such as heating, cooling, transport and infrastructure.

Being permanently stuck with capitalism is certainly a gloomy thought. Affluence on average conceals gross inequality, and whatever affluence is achieved is for most people accompanied by alienating employment and limited personal development. If human nature has made capitalism necessary, it was because we needed profit seeking capitalists to make us work. However, in the developed economies this is becoming less and less the case as technological progress transforms work generally into something which we want to do primarily for its own sake. On average it is becoming more interesting, complex and challenging as evidenced by the fact that over half the present workforce requires post-secondary training. Most of the really dreadful, dangerous and exhausting jobs have already disappeared and with increasing automation most of the dreary and menial ones will decline over the course of coming decades. Furthermore, under these new conditions, collective ownership by willing producers provides a more efficient economic motive force than ownership by a master class. It can more effectively tap into the creative powers of the vast majority and is not hidebound by sectional interest.

Any case for collective ownership, of course, has to pay heed to the prevailing view that the economically inefficient police states in the "communist" countries have shown that socialism is inherently flawed. As argued in the final chapter, socialism's lack of success in those countries was mainly due to the fact that they were only beginning to emerge from feudalism.[2] Just getting capitalism to develop in such backward conditions is a mighty achievement, let alone socialism. A socialist revolution in North America or western Europe, while having its own challenges, would be on far firmer ground. In particular, there is the transformation of work just referred to plus the fact that it is carried out by a working class that is in the majority, is educated, is worldly wise, understands what the revolution is about and is not easily browbeaten.

A somewhat more obscure argument against socialism which economists raise is also addressed. They argue that we cannot do without capitalism because we require markets for intermediate goods. These are the inputs that firms obtain from other firms for use in their production processes, and include raw materials, components, factory buildings and machinery. According to this view, if you do not have market relations you are stuck with top-down direction of what is produced and by whom, a method which becomes increasingly ineffective as the economy becomes more complex. As argued in the final chapter, they are right to identify markets for intermediate goods with capitalism but mistaken in their belief that decentralized price setting and resource allocation require market exchange.

Of course, radical change does not occur just because conditions for it are favorable. We have to understand what has happened and then act. With anything new and daunting, it takes a while to catch on and then leap into the unknown. And when we finally make a move we are bound to confront a steep learning curve and considerable resistance from remaining supporters of the existing order. So while the future is a bright one, the road ahead may still be long and bumpy.


2

CREATING FOOD ABUNDANCE

Food production will have to increase considerably over the next half century to ensure that everybody has all the food they want. The 9 billion or so individuals expected by 2050 will have to be much better fed than most of the present 6.5 billion.

At the moment, almost a billion people are under-nourished, receiving far from adequate levels of calories and other nutrients. The region with the largest number in this category is South Asia, where just under a quarter of the population are in this wretched condition. The region with the highest proportion in this category is Sub-Saharan Africa with over a third.[3] Worldwide, some 170 million children under five years of age are underweight due to malnutrition.[4] This makes them vulnerable to a range of diseases and it is estimated that around 3.7 million died in 2000 as a result.[5] Two billion people or more have iron, iodine and zinc deficiencies[6] and one fifth of the global disease burden is due to undernourishment.[7]

Then there is the majority of people who receive a more or less adequate diet but who, with rising incomes, aspire to more 'luxury' foods, such as fruit, vegetables, meat and dairy products, which for a given level of calorie consumption require a lot more resources to produce. The calories from a hectare of most varieties of fruit or vegetable are far less than the calories from a staple grain, such as corn, rice or wheat, grown on the same area. Likewise, grain-fed livestock and poultry consume far more calories than humans obtain from the final product. About 50 per cent of current world grain production goes to feeding animals rather than humans.[8] Obviously if the grain were consumed directly, it would feed a lot more people. It would be more 'calorie efficient'. Then we have increasing demand for products such as tea, coffee, alcoholic beverages, chocolate, herbs and spices that are not consumed for nourishment but which draw on resources that would otherwise be available for the production of staples.
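
The calorie arithmetic behind this point can be sketched with rough feed conversion figures. The ratios below are illustrative assumptions; actual values vary widely with breed and husbandry:

    # Share of feed grain calories that reach the consumer as meat.
    # All figures are rough illustrative assumptions.
    kcal_per_kg_grain = 3400
    kg_grain_per_kg_meat = {"chicken": 2.5, "pork": 4.0, "beef": 7.0}  # assumed
    kcal_per_kg_meat = {"chicken": 1700, "pork": 2500, "beef": 2500}   # assumed
    for meat, fcr in kg_grain_per_kg_meat.items():
        efficiency = kcal_per_kg_meat[meat] / (fcr * kcal_per_kg_grain)
        print(f"{meat}: {efficiency:.0%} of feed calories reach the consumer")
    # On these assumptions well under a third of the feed calories end up
    # as meat, which is why grain eaten directly feeds far more people.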

This increasing pressure on resources as most people move up the "food ladder" will be alleviated to some extent by a number of factors that increase calorie efficiency. These include: an increased preference for chicken rather than red meat; the development of a greater range of palatable meat substitutes; and the development of improved livestock and feed.

A major upsurge in vegetarianism might help; however, there are no signs of this happening. Besides, the vegetarianism of the affluent requires a wide range of fruit, vegetables, herbs and spices and possibly various exotic grains that are low in yield and resource efficiency. Even in India where vegetarianism is imposed by the tyranny of religion, a growing share of grain is going to support the burgeoning dairy industry. Furthermore, total vegetarianism would actually be unhelpful, given that some resources are best used for meat production, e.g., grain by-products and pasture land that is unsuited to crops.

So, what are the prospects for improving average consumption levels and eventually arriving at a stage where all countries have reached the satiation level of the developed countries? They are good as long as we can increase grain production at rates that exceed population growth. As for how long it will take, that will depend on the gap between the two growth rates.

Over time the task will be made easier as the rate of population growth declines. It has been falling since the late 1960s when it peaked at around 2.1 per cent.[9] It is now around 1.13 per cent and, according to the UN's medium growth scenario, it is expected to fall further to 1.05 per cent in the period 2010 to 2015, to 0.7 per cent during 2025 to 2030 and 0.33 per cent during 2045 to 2050.[10] So by mid century even a very modest increase in output would lead to an increase in the per capita average.
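
To see what this implies for per capita consumption, compare the two compounding rates directly. The population growth rates below are the UN figures just quoted; the 1 per cent output growth rate is purely illustrative:

    # Per capita gains when output growth outpaces population growth.
    output_growth = 0.01   # illustrative
    un_medium = [("2010-2015", 0.0105), ("2025-2030", 0.007), ("2045-2050", 0.0033)]
    for period, pop_growth in un_medium:
        per_capita = (1 + output_growth) / (1 + pop_growth) - 1
        print(f"{period}: per capita consumption grows {per_capita:.2%} a year")
    # By 2045-2050 even 1 per cent output growth lifts the per capita
    # average by about 0.7 per cent a year.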

Doubling per capita consumption in developing countries can probably be achieved with a 2.4 to 2.6 fold increase in output.[11] This is on the assumption that their population increases by 50 to 65 per cent (i.e., a world total between 9 and 10 billion) and that all the increase in output goes to developing countries. The latter assumption is realistic given that people in developed countries already have plenty to eat and their population is not expected to increase.
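
One set of assumptions that reproduces this range: if developing countries currently account for roughly 70 per cent of world consumption (an assumption made here purely for illustration), then doubling their per capita intake while their population grows 50 to 65 per cent implies:

    # World output multiple needed to double per capita consumption in
    # developing countries. The 70 per cent consumption share is an
    # illustrative assumption; the other figures follow the text.
    developing_share = 0.70
    for pop_increase in (0.50, 0.65):
        developing_multiple = 2 * (1 + pop_increase)  # per capita x2, population up 50-65%
        world_multiple = (1 - developing_share) + developing_share * developing_multiple
        print(f"population up {pop_increase:.0%}: {world_multiple:.1f}-fold world output")
    # Prints roughly 2.4 and 2.6, matching the range in the text.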

Grain production has been increasing in a more or less linear fashion over the last 45 years or more. While varying considerably from year to year, the annual increase has generally gravitated around 30 million tonnes.[12] If we continue with a similar annual increase we will double production by around 2060 and provide more than a 50 per cent increase in per capita consumption in developing countries. A 2.5 fold increase would take until the final decade of the century.

If we can push up the pace and achieve the 2 to 4 per cent annual growth rates of the 1960s and 1970s, we will reach the desired levels far more quickly. For example, a 2 per cent annual growth rate would provide a 2.5 fold increase by 2050.
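
The gap between the linear and compound paths is easy to see numerically. A minimal sketch, assuming production of about 1,850 million tonnes around the year 2000 (an illustrative round figure):

    # Linear (30 Mt a year) versus 2 per cent compound grain production paths.
    # The base year figure is an illustrative assumption.
    base = 1850.0   # million tonnes around 2000 (assumed)
    linear, compound = base, base
    for year in range(2000, 2063):
        if year in (2050, 2062):
            print(f"{year}: linear {linear / base:.2f}x, compound {compound / base:.2f}x")
        linear += 30
        compound *= 1.02
    # On these assumptions the linear path doubles output only around 2060,
    # while 2 per cent compound growth reaches 2.5-fold by the late 2040s.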

So, can we match or even exceed past achievements? Can science keep coming up with higher yielding crops and livestock? Can we ensure that there will be sufficient resources such as good quality land and water? Can we maintain or even increase the fish harvest? These and related questions will be addressed in the rest of the chapter.

Better Plants and Livestock

The main reason for our success in increasing food production during the last 50 years has been the ability of science to increase the yield potential of our plants and livestock, and to improve their ability to cope with a range of hostile conditions. Can we maintain this performance or are we running out of steam?

The prospects look promising when you consider that we are at the beginning of a biotechnology revolution that is bound to bring major advances in plant and animal breeding. Biotechnology provides a tool kit that includes genetic engineering, genomics, marker assisted selection, cell and tissue culture, and increasing knowledge of how physiological characteristics can act as indicators of performance. This tool kit is already starting to bring results.

Both genetic engineering and marker assisted selection rely on the growing knowledge of genes provided by genomics, which aims to describe and decipher the location and function of all the genes of an organism, and the interactions between them.[13]

Genetic engineering is opening up a totally new area of plant and livestock improvement. It allows us to directly manipulate the genes responsible for various attributes. This includes controlling the level of activity of genes by turning them on or off, or up or down, and also transferring genes from other plants or animals. Scientists will produce an increasing flow of results as they learn more about what genes confer what characteristics and improve their ability to manipulate genes, particularly multi-gene manipulation which is necessary in many cases.

Marker assisted selection (MAS) uses genetic markers to assist in selecting plant varieties with particular traits for inclusion in breeding programs. Markers are easily identified DNA sequences located near a specific gene associated with the trait. This technology is revolutionizing breeding methods because a large number of varieties can be screened for a desirable trait without having to grow them to maturity to determine its presence. Samples can be taken from seedlings and tested for the presence of the molecular marker. With traditional methods, a similar procedure would be far more time consuming and expensive, and often impractical. Checking for the continued presence of the markers can also determine whether the desired trait has been successfully transferred through the various stages of breeding and cross-breeding. This technology has already achieved considerable results, and will achieve a lot more as we learn which genes bestow which traits and identify their corresponding markers.
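
In software terms, the screening step is just a filter over seedling DNA test results. A toy sketch, with the marker name and genotypes invented for illustration:

    # Toy marker assisted selection: keep only the seedlings whose DNA test
    # shows the marker linked to the desired trait. The marker name and
    # the genotypes are invented for illustration.
    DROUGHT_MARKER = "RM-223"   # hypothetical marker near a drought tolerance gene
    seedlings = {
        "line-01": {"RM-223", "RM-511"},
        "line-02": {"RM-511"},
        "line-03": {"RM-223"},
    }
    selected = [name for name, markers in seedlings.items()
                if DROUGHT_MARKER in markers]
    print(selected)   # ['line-01', 'line-03'] - only these go on to field trials

The point is that the test is cheap and runs on a seedling; rejected lines never take up a season's growing space.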

Tissue culture refers to a process where new plants are grown from individual cells or clusters of cells, often bypassing traditional cross-fertilization and seed production.[14] This technique enables breeders to attempt wide crosses between varieties that could not be hybridized before and allows faster stabilization of breeding lines. Such methods are also used to produce pathogen-free plants for distribution to farmers and for germ plasm storage.[15]

Better knowledge of what identifiable physiological features are associated with tolerance to certain conditions will assist the selection of varieties for inclusion in breeding programs. Features that are relevant to performance include: root structure that allows higher nutrient uptake, early ground cover to reduce evaporation of soil moisture and large seeds to assist early crop establishment.[16]

There is also much that can be done without cutting edge science or major breakthroughs. In large parts of the developing world high yield plant varieties still need to be adapted to local conditions. And there are neglected crops such as cassava and banana that would benefit from the kind of attention that rice and wheat have received in the past. Likewise, livestock suited to the tropics can benefit from more research. Breeding the water buffalo for meat and milk is an example often cited.

There are also many farmers in the developing world who have yet to take advantage of what has already been achieved, because doing so requires a more advanced form of agriculture and economic infrastructure. This includes distribution of high-yielding hybrid seeds that cannot be home grown, support from extension services and access to necessary ancillary inputs such as fertilizer and reliable water needed to achieve the promised higher yields.

What is achieved in coming years will depend very much on the level of research funding. Research has been hampered by the cutback over the last decade in funding to the major research institutions, including those associated with the Consultative Group on International Agricultural Research (CGIAR), the main umbrella organization for research in developing countries. Funding began to decline once research efforts had dealt with the urgent problems of the 60s and 70s by providing a new generation of high yielding rice and wheat. Notwithstanding this problem there is still a whole range of important improvements at various stages in the pipeline. We'll look first at the higher yielding plants currently being developed and then at the advances in livestock and feed.

The examples provided are meant to give a general indication. They are not exhaustive and some of those in the pipeline may never see the light of day. Not included is speculation about what may be on offer two or three generations down the track when our knowledge is far more advanced. Perhaps one day we will produce food directly without having plants or animals as intermediaries. If a plant can produce grain surely we can too, and with a multitude of customized features. Likewise for producing muscle tissue (i.e., meat) without the rest of the animal. The output from any given input of land or water with such technology would increase dramatically.

Better Plants

Plants can be improved in a number of ways. Firstly we can increase their yield potential. This is the yield that can be achieved under the best conditions in terms of weather, water and soil. Secondly we can increase their capacity to narrow the gap between actual and potential yields under less than favorable conditions, i.e., conditions of stress. Thirdly we can increase the ability of the crop to survive in an edible state after it is harvested.

Increasing Yield Potential

Yield potential of plants can be increased by various means. Hybridization is one approach. Thanks to an imperfectly understood effect called heterosis, the hybrid from a cross of two different plant varieties grows more vigorously and produces more grain than either parent. In some crops including maize this process is simple. However, with other major crops it is a difficult and ongoing process.

Hybrid rice was first developed in China in the early 1970s and is now planted on about half that country's rice growing area. First generation hybrids have increased yields by 15 to 20 per cent and second generation hybrids by a further 5 to 10 per cent over their predecessors.[17] A third generation is now being developed which will increase yields even further. Other Asian countries are now beginning to follow suit, developing their own varieties in line with their own palates and environmental conditions. Chinese scientists have also recently managed to develop and test a hybrid wheat breed that could at least double their country's present per-hectare yields.[18] They have also recently created the first hybrid soybean, which is expected to increase yields by 20 per cent.[19]

Plants can also be bred to put more energy into grain production and less into the rest of the plant. Researchers are working on a new rice and wheat 'architecture' that will significantly increase their harvest index - in other words, more grain and less plant. This will be achieved by developing varieties with larger grain heads on thicker but fewer stems.[20]

Other fascinating approaches include: crop plants with an algae gene that boosts yields by almost a third because the new strain converts nitrogen fertilizer far more efficiently;[21] and rice with an antisense gene that inhibits the formation of certain proteins and thus prolongs the grain-filling period of the plant. This rice, in its first field test, increased productivity by 40 per cent.[22]

A long-term hope among some scientists is to create plants that are far more efficient at photosynthesis, the process that converts sunlight into energy. If this could be improved, plants could reach maturity more quickly, allowing more crops each year. Nitrogen fertilizer consumption could also be reduced because photosynthesis is the main consumer of this input. Apparently photosynthesis is very inefficient and worked a lot better back in the early days of plants when the atmosphere had little or no oxygen. A number of projects are doing very preliminary research on this problem.

Helping Plants Cope Better with Harsh Conditions

Crops are generally grown under less than ideal conditions which subject them to stresses that can reduce yields quite significantly. These stresses are usually classified into biotic and abiotic. Biotic stresses include the ravages of diseases and insects, and competition from weeds, while abiotic (or non-biological) stress includes the effect of too much or too little water, excessive heat or cold and soil problems such as salinity, acidity and erosion.

Because the amount of damage is quite high, there is much to be gained by improving plant tolerance. And the prospects for progress in this area look very good, with the level of success depending mainly on the extent of funding and the supply of trained workers.

Where crops have a particular strain or wild relative that copes well with a particular stress, this feature can be incorporated into existing commercial varieties through cross breeding. This can be assisted by the use of molecular marker technology to screen a large number of varieties where it is known which gene expresses a particular feature. Where reproduction is through cuttings, tissue culture can provide farmers with disease free plants.

Using genetic engineering, changes can be made directly at the gene level to bestow tolerance. This may involve transferring a gene from a totally different life form that is known to cope well with the particular stress or tweaking the plant's existing genes to achieve a similar effect. The more we know about genes and how to manipulate them, the more that can be achieved.

The examples provided below should give a fair indication of the kind of work being done. While some are already showing up in farmers' fields, most are still at the research and development stage. It is not an exhaustive list and there are no doubt very important omissions, and possibly some inclusions that will not live up to expectations. We will start with biotic stresses.

Biotic Stresses

Rice has been genetically engineered to resist devastating diseases such as sheath blight, bacterial blight and tungro virus.[23] Sheath blight resistance has been achieved through the transfer of genes from an insect and from the soil bacterium Bacillus thuringiensis, more commonly called Bt.[24] Scientists have also successfully transferred a bacterial resistance gene from wild rice to cultivated rice.[25] In the case of barley, resistance to a particular disease has been conferred by a wine grape gene.[26]

Scientists are working on a bread wheat that is resistant to the devastating leaf blotch. They have discovered a gene that provides resistance to this disease and are using marker technology to find wheat varieties with this gene. As soon as a seedling sprouts, a small piece of the young leaf can be ground and then a DNA test can be run. This shows whether the markers for the gene are present.[27]

A transgenic potato is being developed that is resistant to the late blight that caused the Irish famine in the 1840s and still causes major havoc. Fungicide has limited effect, is expensive and when used in large amounts can be an environmental problem. The protective gene comes from a wild potato that scientists believe co-evolved in Mexico alongside the blight pathogen.[28] Potatoes could also become a major food source in tropical countries with the development of a variety incorporating a gene from chicken that resists bacterial rot.[29]

Reducing disease in the banana would make a major difference in poor countries where it is a staple food. Because the domesticated banana is usually a seedless clone which grows from cuttings, tissue culture is being introduced to propagate offspring that do not carry over disease from the parent plant. Disease-free cells are selected, removed under sterile conditions and placed in a growth medium. The resulting plants are then distributed to farmers.[30]

Researchers are working to map the entire genetic code of a wild banana from East Asia in the hope it will reveal the genes that provide resistance to the two worst enemies of the banana crop - black Sigatoka and Panama fungal diseases. Once identified, researchers hope to insert these genes into edible varieties.[31]

In the battle against viruses, techniques have been developed to insert harmless parts of the virus into the plant to set off an immune reaction, much like an inoculation. And having become part of the plant's genetic make-up, the protection is passed on to the next generation. The most dramatic example of such a process was papaya in Hawaii, which had been devastated by papaya ring spot virus. A genetically engineered plant virtually brought the industry back from extinction. A biotech papaya is now being brought to farmers in Southeast Asia, the Caribbean and several other developing areas where papaya is a staple food. In Australia, scientists have developed a similar 'vaccination' technique that has already been used to create potatoes resistant to Potato Leaf Roll Virus and which they hope to apply to a range of plants that are vulnerable to viruses which up till now have proven virtually unbeatable.[32]

Providing plants with their own defenses against insects and pests can often be far more effective than other measures such as pesticides, or, at the very least, an important adjunct. In recent years the most dramatic advance in this regard has been the development of so-called Bt crops. This has also been one of the major genetic engineering success stories to date. The plant expresses insecticidal proteins derived from genes cloned from the soil bacterium Bacillus thuringiensis (Bt). These proteins bind to receptor proteins in the insect gut, destroying cells and killing the insect in several days.

Quite large areas are now being sown with Bt corn, canola and cotton. A Bt potato has also been available for use; however, major buyers such as fast food chains have put the stopper on it for fear of being picketed by bio-fearmongers. The more widespread use of these Bt varieties in coming years and the introduction of the gene into other crops including rice and wheat will bring continuing benefits. Yield gains from using Bt corn are estimated to average 5 per cent in temperate regions and 10 per cent in tropical regions.[33]

This is just the first of many toxins that will be provided to plants through genetic manipulation. Included among them will be genes transferred from other plants that have shown themselves to be more resistant to given insects. Australian researchers have added a gene from green beans to field peas, creating a crop with a built-in insecticide that is almost 100 per cent effective against pea weevils, the most damaging pest in that country's pea crop.[34] Resistance to the white-backed plant hopper is also being transferred from barley to rice.[35]

Plant 'architecture' can also provide protection. This includes: developing wheat with more solid stems that will be less susceptible to attack by Hessian fly and sawfly;[36] maize with thicker epidermal cell walls that prevent armyworm larvae establishing in the whorl (the funnel of young leaves) of the plant;[37] and plants with an enhanced natural ability to produce leaf wax, which makes them more difficult for insects to consume.[38]

One of the most destructive pests is the nematode, a microscopic worm that feeds on plant roots and comes in about 15,000 varieties. Scientists are using the genes for defense proteins that occur naturally in rice and sunflowers to fortify potatoes and bananas against this pest.[39] And the breeding of a nematode resistant soybean has been made possible with the help of molecular marker technology.[40]

Another stress that reduces crop yields is competition from weeds. The biggest news in this area is the introduction of herbicide resistant genetically modified corn and soybeans. The crop contains a gene that makes it tolerant of the herbicide Roundup, so that when a farmer sprays the field, weeds are killed but not the crops. This proves far more effective than spraying that can only be carried out prior to planting. A weed called striga can devastate grain and legume harvests in Sub-Saharan Africa. Researchers are countering this problem by developing maize with herbicide resistance derived from a naturally occurring maize gene.[41]

There is a concern that pests can evolve resistance to the pesticide being incorporated in plants by genetic engineering, just as they evolve resistance to pesticide sprays. So far resistance has not become a problem with Bt crops despite their having been in use since 1996.[42] Scientists are looking at a number of strategies to delay or eliminate this danger. One approach is to use a Bt gene that is more widely expressed in the plant, giving a better knock-out punch that leaves less room for the build-up of immunity that can happen when the insect experiences lower levels of exposure.[43] Another is the use of "pyramiding", where two Bt genes which are lethal in totally different ways are inserted into the plant. It is highly improbable that an insect would develop resistance to both.[44]
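
The logic of pyramiding is simple probability: if resistance to each toxin arises independently, resistance to both requires two individually rare events, and the chances multiply. An illustrative calculation (the frequencies are invented for the example):

    # Why pyramided Bt genes are hard to beat: resistance to both toxins
    # requires two independent, individually rare mutations. The
    # frequencies below are invented for illustration.
    p_resist_a = 1e-3
    p_resist_b = 1e-3
    p_resist_both = p_resist_a * p_resist_b
    print(f"chance of resistance to both toxins: {p_resist_both:.0e}")   # 1e-06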

Abiotic Stresses

As well as dealing with a host of weeds and pests, crops also have to contend with the elements. The weather can be too dry, wet, hot or cold, and the soil can be poor.

Droughts and Flooding Rains

Lack of water is a major constraint on crop yields in many areas[45] and can destroy a crop in severe cases.

Crossing with drought resistant wild relatives is one approach. For example, CIMMYT (Centro Internacional de Mejoramiento de Maíz y Trigo or International Maize and Wheat Improvement Center) is currently developing drought-resistant wheat varieties descended from crosses that included goat grass, one of wheat's wild relatives.[46]

Progress is being made by CIMMYT in mapping genes for drought tolerance in wheat[47] and researchers at the University of Queensland are endeavoring to do the same for rice.[48]

Transferring drought tolerance genes to crops from other plants is another promising approach. It is believed that once the genes responsible for superior drought tolerance in sorghum are identified, these genes could be activated in maize because the two plants are likely to share the same basic drought tolerance pathways.[49] Scientists at the University of Bonn have identified a gene in the resurrection plant from South Africa which helps it to survive droughts. The plant can lose up to 95 per cent of its moisture without being harmed, slowing its metabolism to almost zero during a dry period. It then springs back to life in a few hours once it receives water.[50] In Texas, US Department of Agriculture (USDA) researchers have identified the genes that help a type of grass from South Africa and a type of moss native to the High Plains of the United States to survive extended dryness.[51]

Plant varieties can be bred with physiological traits conferring drought tolerance. Molecular biologists in Oklahoma are developing a drought resistant wheat by adding genes to synthesize a naturally occurring sugar alcohol called mannitol, which accumulates in leaf tissues.[52] Other helpful physiological features include larger seed size that improves crop establishment, early ground cover and pre-anthesis biomass that reduce evaporation of soil moisture, and roots that are able to extract water deep in the soil.[53]

Another strategy which might be more aptly called drought avoidance rather than drought tolerance involves breeding plants that match their development cycle with the availability of water. This has been used in the past and still offers much promise. In some cases this could mean that the periods of maximum water requirement match the periods of maximum availability.[54] In other cases it could mean ensuring maturity prior to the arrival of a dry period at the end of the growing season. This could be achieved through faster maturity[55] or through allowing earlier planting by developing varieties that can cope with shorter daylight hours.[56]

Where plants are not being deprived of water there is a good chance they are being drowned in it. A widespread problem in irrigated and high rainfall wheat-growing regions is waterlogging due to poor drainage. The prospects for developing waterlogging tolerant wheat are considered good because of genetic variability for this trait.[57] Breeders have found that "synthetic" wheat, bred from grass species that are the wild relatives of wheat, is an exceptionally good source of tolerance.[58]

Heavy rain on maturing wheat crops can cause the grain to start germinating before it is harvested. This degrades end-use quality due to the undesirable proteins produced during germination. CIMMYT has identified high rainfall wheat lines with high levels of sprouting tolerance which could be employed in breeding programs to rectify this problem.[59]

Hot and Cold

Just as it can be too wet or too dry, it can also be too hot or too cold. Agricultural research bodies in developing countries consider heat stress as one of their top research priorities.[60] CIMMYT has already had some success in identifying wheat varieties in their seed banks that have various traits generally associated with tolerance to heat stress. This includes leaf traits such as evapotranspiration, rolling, greater thickness or uprightness.[61] They expect genetic markers to facilitate the process. Similar efforts are also being made in the case of tropical maize.[62] And on the genomic front, researchers have identified a protein that acts as a master regulator of the tomato heat stress response.[63]

Cold tolerance can bestow a range of benefits. It can ensure that the crop won't be destroyed by a cold snap at the beginning of growing season. It can allow crops to be grown in climates too cold to support them currently or permit an extra crop by earlier planting and/or later harvesting.

The Chinese claim to have inserted cold tolerance genes from fish into beets, while British researchers are achieving similar results by incorporating a gene from carrots into various crops.[64] Researchers in Canada have isolated a powerful gene from larvae of the yellow mealworm beetle that keeps the worms from freezing to death during the winter. They believe it is far more powerful than the 'antifreeze genes' found in flounder, a fish which is no slouch when it comes to protecting itself in cold waters.[65]

Where the cold cannot be dealt with head on, it can be avoided by making plants grow faster. Researchers at Cambridge University's Institute of Biotechnology in England put a gene from a flowering weed into tobacco plants, making the tobacco grow much more quickly. The gene produces a protein that causes the plant cells to divide much faster at the tips of roots and shoots.[66]

Unfriendly Soil

Plants can find the soil far from friendly for a range of reasons ‑ in particular, salinity, poor structure and acidity.

Some crops have genetic variation for salt tolerance which can be exploited in breeding programs, particularly with the help of molecular markers.[67] Chinese researchers claim to have developed salt-tolerant varieties of rice[68] and Australian researchers announced that they have successfully bred salt-tolerant durum wheat by crossing an ancient salt-tolerant durum wheat variety with modern commercial ones.[69]

Genetic engineering can take traits from plants and organisms that thrive in a high salt environment. Scientists have genetically modified a tomato plant so that it thrives in salty irrigation water. The tolerance comes from a protein known as a 'sodium/proton antiporter', which uses energy available in the plant cells to move salts into compartments within the cells.[70] Once the salt is stashed inside these compartments - called vacuoles - it is isolated from the rest of the cell and unable to interfere with the plant's normal biochemical activity. Not only does the tomato tolerate salt, it also removes salt from the soil. Work is being done to extend this technology to other crops. Chinese scientists have cultivated salt-resistant tomatoes, soybeans, rice and a fast-growing poplar using a gene cloned from a salt-resistant plant called Suaeda salsa.[71] Another possible approach is to take genes from a bacterium that lives in places like the Dead Sea and splice them into crops.[72] This bacterium can thrive in salt levels ten times higher than ocean water.

Plants often have difficulty accessing the micro-nutrients in the soil because of its structure or composition. Improving the take-up ability of the root system may help, possibly through better root system geometry.[73] So would increasing the nutrient reserves in seeds, which would sustain the plant until the root system is well developed.[74] Another approach might be to reduce the plant's need for certain nutrients; improving their distribution within the plant would be one way of achieving this.[75] Little effort has gone into breeding crops adapted to these kinds of soil conditions despite the genetic potential.[76] Molecular markers will greatly facilitate the selection of micronutrient efficient genotypes.[77] Genetic engineering also offers promise. A gene for copper efficiency has been transferred from rye to wheat. The transferred gene confers on plants a much greater ability to mobilize and absorb copper ions tightly bound to the soil.[78] Major crops such as corn, wheat and rice have a lot of trouble absorbing iron from alkaline soils, which make up a significant proportion of arable land. However, some crops including barley have no trouble. So researchers at the University of Tokyo took two genes from barley and introduced them into rice plants. The result was a four-fold yield increase in the same soil.[79]

Soil acidity is a major constraint on crop production. This is mainly because it releases aluminum ions which are highly toxic to plant roots. Vast areas either have their yields seriously reduced by it or are made unsuitable for cultivation. The problem is particularly serious in tropical regions. While improving acid soils is part of the answer, its role is limited by the expense and by the fact that disturbing the soil can lead to erosion. Developing aluminum tolerant plants is often a feasible solution either on its own or as a complement to soil improvement.

There are good prospects for wheat given that there is considerable genetic diversity in aluminum tolerance. In Brazil local low yield aluminum tolerant wheat varieties have been interbred with high yielding varieties to provide the benefits of both.[80] A Portuguese landrace also has a high aluminum tolerance and has yet to be exploited in breeding programs.[81] Another strategy is to transfer rye's greater aluminum tolerance to wheat. Triticale (a cross between rye and wheat) could serve as a bridging parent to achieve this transfer.[82]

In the case of maize, researchers are confident that molecular markers and genomics will lead to the development of suitable aluminum tolerant maize cultivars.[83] Genetic engineering is also making some progress in this area. There has been some preliminary work on transferring a gene from an aluminum tolerant plant to maize.[84] Another strategy being pursued is to insert into plants a bacterial gene that codes for citric acid secretion. This allows them to emulate aluminum tolerant plants, the roots of which secrete the acid into the surrounding soil in order to 'capture' the toxic aluminum ions which would otherwise attack the plant roots. This approach has been trialed on tobacco, papaya, and rice plants.[85]

Stacked Traits

There are many cases where yields would be increased significantly by the plant having more than one form of improved stress tolerance. For example, a crop may confront dry weather, acid soil and regular insect plagues. Genetic engineering ought to be able to contribute a great deal in this area by "stacking on" genetic changes appropriate for each of the stresses. A genetically modified maize that combines both herbicide tolerance and insect resistance has already been released; and there are already plans to extend this combination to other varieties, notably sugar beet, rice, potatoes and wheat.[86]

Post-Harvest Waste

As well as increasing the size of the crop, plant improvements can also increase the proportion that actually reaches the consumer. Ultimately this is what matters - yield net of post-harvest waste. A lot of food is lost through post-harvest spoilage, so anything that can make the harvested crop more robust will increase the effective food supply. This is particularly so in developing countries with poorer harvesting methods and a lack of refrigeration, storage and transport.

The delayed ripening of fruit and vegetables would improve shelf life and reduce spoilage. Genetic engineering is being used to control the amount and timing of the production of the hormone ethylene that regulates ripening in fruits and vegetables. Research has reached an advanced stage with tomatoes, raspberries, melons, strawberries, cauliflower and broccoli.[87] And in the Philippines, scientists have developed a papaya that instead of rotting in one week, can stay fresh for three months.[88] Researchers in England have found a "freshness gene" in petunias which shows promise.[89]

In the case of grain, greater resistance to storage pests would make a big difference. CIMMYT has discovered a source of such resistance and is incorporating this trait into maize breeding stocks.[90]

Improved Livestock and Poultry Production

Just as crops have to deliver up more from every hectare of land and kiloliter of water, so do livestock and poultry. These resources are used directly in the case of grazing and indirectly where the animals consume feed crops. About 3.3 billion hectares are under permanent pasture - more than twice the area under arable and permanent crops.[91] And, as already mentioned, domestic animals consume about half of world grain production. This ranges from around 60 per cent in Europe[92] and the US down to quite low levels in India and Sub-Saharan Africa.[93]

Improving livestock and poultry productivity becomes even more important as people in the developing countries increase their per capita consumption of meat and dairy products in step with rising income levels. For most of these countries increases in meat consumption have so far been fairly slight if not stagnant, as in the case of Sub-Saharan Africa. However, if middle income countries such as China and Brazil are anything to go by, meat consumption can rise dramatically over a number of decades. In the case of China, meat consumption has quadrupled over the past 20 years.[94]

Increased productivity comes down mainly to improvements in feed and forage, disease control and livestock breeding. As with grain production for human consumption, this is in part a matter of developing countries catching up with the practices of developed ones and in part a matter of pushing out the technology frontier. The former is exemplified by the fact that in 1997-98 beef yield per animal in developing countries was less than 60 per cent, and milk yield per cow less than 20 per cent, of the levels achieved in developed countries.[95]

Improved Feed and Fodder

There are many ways in which we can improve what livestock have to eat. Feed and fodder can be made more nutritious, the diet can be enhanced with additives and supplements and nutrients in food can be made more accessible by improving digestibility.

In developing countries simply catching up with the world's best feed practices can bring large gains. Feed can be harvested at the right time to maximize nutrient recovery, processed to retain more nutrition and improve digestibility, and stored properly to avoid nutrition loss. Animals can also benefit from being fed well balanced mixtures and provided with food supplements.

Smil (2000) points out that the overwhelming majority of China's pigs (which account for 90 per cent of the country's meat output) are still not fed well-balanced mixtures but just about any available edible matter, and hence their feed is commonly lacking in protein.[96] As a result feeding rates are well above the norms prevailing in Western countries, and pigs take at least twice as long to reach slaughter weight as a typical North American animal - and the carcass is still lighter and fattier. He tells a similar story for chickens. The hundreds of millions of chickens roaming China's farmyards take three times as long to reach a lower slaughter weight than North American broilers.[97] And China is far from being the most backward when it comes to animal raising.

Conway mentions a number of possible ways that genetic engineering could transfer greater nutrition to feed and forage.[98] Cereals are low in lysine compared with legumes such as peas and lupins. A gene transfer from legumes to cereals would benefit pigs and chicken. On the other hand legumes are deficient in a number of sulphur amino acids required by cattle and sheep. These would benefit from transferring genes from sunflower seeds and chicken egg protein to forage legumes such as lucerne and clover.

The wider use of growth hormones could considerably improve productivity. Those in cattle (BST) increase milk production efficiency by up to 40 per cent per cow and increase feed to beef conversion by about 9 per cent.[99] According to Avery the potential of pig growth hormone (PST) is even greater than for cattle.[100] He claims that PST will produce hogs with up to 60 percent less fat and 15 percent more lean, using one-third less feed grain.

Reducing methane production in livestock could save up to 10 percent of feed because of the energy loss avoided.[101] Smil refers to an additive, produced from a fungus, which reduces methane production by altering the metabolism of ruminant bacteria,[102] and scientists are developing a vaccine that will discourage the production of these methanogenic micro-organisms.[103]

Much can be done to improve the digestibility and nutrient absorption of feed. Better processing is one approach. For example, straw can be made more digestible by a range of chemical treatments, and lignin and cellulose in crop residue can be broken down using a range of methods including fermentation.[104] Plants can be bred to remove or neutralize substances that interfere with digestibility and nutrient absorption. Soybeans and wheat are being genetically engineered to express the phytase enzyme. This neutralizes phytate, a substance that "is widely distributed in cereals and legumes and reduces the absorption of iron, zinc, phosphorus and other minerals in humans and other animals."[105] Researchers have genetically modified lupin so that sheep absorb more sulphur amino acids required for wool and muscle growth. Presently, a large proportion of the acids are broken down in the rumen before reaching the small intestine where they would otherwise be absorbed. The lupin has been modified to contain a sunflower gene that produces a protein that is both rich in sulfur amino acids and stable in the sheep's rumen.[106] And Conway refers to research aimed at inserting genes from crops like sorghum, maize, millet into forage legumes to reduce lignin content and increase their digestibility by 10-30 per cent.[107]

Better Disease Control and Healthcare

Disease significantly affects livestock productivity. Alexandratos refers to estimates showing that at least 5 per cent of cattle, 10 per cent of sheep and goats and 15 per cent of pigs die annually from disease.[108] And the productivity of animals that survive disease is lower than that of healthy animals.

Farmers in developing countries will benefit from access to better veterinary services and disease control measures as they modernize, and farmers worldwide will benefit from the forward march of medical and veterinary science, which will improve our ability to prevent, diagnose and control disease.

As with human health, biotechnology will play an important part in disease control. Some progress has already been made in the development of genetically engineered vaccines. For example, researchers at the School of Veterinary Medicine at the University of California, Davis have been developing such a vaccine for rinderpest, a devastating viral disease that is responsible for millions of deaths among cattle herds each year throughout Africa and Asia.[109] The vaccine is produced by transferring two genes from the rinderpest virus into the virus used to make the smallpox vaccine. It is particularly suited to the backward areas affected because it requires no refrigeration and is simply scratched onto an animal's neck or abdomen. Furthermore, a cattle herder can produce thousands of doses by scratching the skin of a calf, applying the seed vaccine, and a week later harvesting the scab in saline solution.

In central Africa sleeping sickness (trypanosomiasis) poses an enormous obstacle to human health and cattle production. A range of measures offer the hope of recovering infested areas for agriculture. These include trypanocidal drugs, aerial spraying, adhesive insecticides, impregnated screens and traps and the use of sterile insects.[110]

Breeding Better Farm Animals

Breeding programs can improve livestock and poultry in a range of ways that ensure that they make the most of what they are fed. This can mean more meat as a proportion of body weight, less time to reach slaughter age, greater disease resistance, better ability to process food, lower nutrient needs, more milk or eggs for a given level of feed, improved reproductive efficiency and the ability to consume a wider range of foods.

As with plants, biotechnology will play a major role in future breeding programs. With rapid advances in understanding the genetic make-up of animals, genes that are important for economic performance, such as those for disease resistance or for adaptation to adverse environmental conditions, can be identified and transferred into animals, either through marker-assisted selection or through genetic engineering.[111]

Conway reports on the development of genetically engineered livestock that produce greater quantities of bovine growth hormone.[112] This would enable them to reach optimal slaughter age more quickly, meaning that less of the feed and water consumed would go into just standing around breathing rather than growing.

Another strategy is to reduce the nutrient needs of the animal. Conway suggests the possibility of introducing genes for sulphur amino acid biosynthesis, present in the bacterium E. coli, directly into sheep, bypassing the need for improved fodder.[113]

Given the high losses from disease, the transfer of genes encoding for resistance from other animal species and even from plants could bring significant benefits.[114] For example, a genetically modified cow is being developed with a mouse gene that makes it resistant to mastitis of the udder.[115] Currently, antibiotics are used to treat the disease, and the milk cannot be used while the cows are on the drugs.

Smil sees much to be gained in devoting more resources to breeding animals suited to the tropics, a region that has not received anywhere near the attention of the temperate zones.[116] He gives the example of how the water buffalo might be transformed from a working animal into a valuable meat and milk species. It is particularly suited to tropical and sub-tropical climates. Because of the higher count of cellulose-breaking bacteria and protozoa in their guts, buffalo use low-grade roughages more efficiently than normal cattle and have lower overall feed/gain ratios. In addition, buffalo milk is richer in protein and fat than cow's milk. However, a breeding program would be needed to raise their average milk and meat yields, which are far behind those for temperate-climate cattle.[117]

Defending GM Food

While genetic engineering promises to contribute much to the challenge of increasing food production, its opponents have managed to whip up considerable opposition. We are told that it is 'unnatural', tinkering with nature, playing God; that it poses fearful food safety risks and threatens the environment with 'super weeds' and 'genetic pollution'. We are, the story goes, like the sorcerer's apprentice, tinkering with forces we do not really understand and which can get out of our control.

A host of inquiries have given genetically modified food the nod of approval and firmly rebutted the claims of opponents. They consider that the risks are mostly identical to the risks associated with conventional foods and that those that are different are well covered by the regulatory regimes in place. The only real concern is to ensure that, as the changes made by genetic engineering become more varied and complex, the science and technology needed to assess them keep pace.

A range of regulatory bodies are involved in assessing and regulating transgenic crops. In the US there are the Food and Drug Administration (FDA), the Department of Agriculture (USDA) and the Environmental Protection Agency (EPA). At the international level there are the World Health Organization (WHO) and the Food and Agriculture Organization (FAO). Genetically engineered crops have been grown and tested for 20 years and eaten by millions of people on a daily basis since 1996 without any disastrous consequences.

Food Safety

There is no credible evidence that GM foods are less safe to consume than other food. The risks are not of a different nature from those that are already familiar to toxicologists and can be created by conventional breeding.[118] In fact some transgenic foods currently on the market are identical to the conventional product because the gene change does not appear in the final product. Where there is a change to the final product, it will be easier to evaluate for safety than one developed through traditional breeding because the new method is more precise. Instead of randomly combining all the traits of the two parent organisms, as happens with conventional breeding, genetic engineering permits identification and transfer of only the desirable traits. Scientists know what has been changed and therefore what to look for when evaluating possible risks.[119]

In its short history, transgenic food has faced three main food safety claims: a study which claimed that laboratory rats were being poisoned by GM potatoes; concerns about the use of antibiotic resistance genes in the gene transfer process; and the introduction of allergens. We examine these in turn.

Rats Don't Like Raw Potato

A preliminary study performed at the Rowett Research Institute in Scotland by Dr Arpad Pusztai reported that rats developed intestinal problems after being fed raw transgenic potatoes containing a lectin with known insecticidal properties.[120] The study was heavily criticized by independent experts and by the Rowett Institute itself, which discredited the study entirely after performing an audit of the research.

The British medical journal Lancet made the unwise decision to publish the study. In an editorial disowning it, the editors conceded that they were caving in to pressure from GM opponents who were running the line that failure to publish was suppression. This flies in the face of the normal practice of academic journals of only publishing papers that have successfully run the gauntlet of peer review. With laboratory studies the emphasis is on ensuring that the usual rules of evidence are being applied. This study failed that test totally.

Antibiotic Resistance Marker Genes

There has been some objection to the incorporation of antibiotic resistance marker genes in transgenic crops together with the gene conferring the desired trait. These markers are used to identify successful transfers: the antibiotic kills seedlings to which the genes have not been properly transferred.[121] There has been a worry that if these genes were present in transgenic food or feed, they could confer resistance on disease-causing micro-organisms in the stomachs of any human or farm animal eating it. There is even a concern that antibiotic resistance could be passed on to people who consume livestock products.

There are a number of answers to these fears: (1) the resistance gene protects against an antibiotic which is not used on humans and animals; (2) the antibiotic resistance may not necessarily be transferred to the final plant variety distributed to farmers because it is the product of a cross between the original transgenic plant and a commercial one; (3) if it is transferred, the chance of it being incorporated into the genetic make-up of micro-organisms is essentially zero, even before allowing for the effect of digestion, which tends to totally destroy genes and DNA; and (4) recent attempts to get microbes to pick up the trait confirmed the impossibility.[122] Even if the worry had any grounds, it is becoming a thing of the past as researchers develop other methods to determine whether a gene has "taken."

Introducing Allergens

Another potential risk posed by GM foods is the introduction of genes from organisms that cause allergic reactions in some people. This would pose a problem if people with allergies were unaware of the danger, which is most likely where the GM food is a widely used ingredient. Outside a relatively small number of genes associated with a limited number of foods, allergic reactions are fairly rare. Most food allergies occur in response to specific proteins in only eight foods: peanuts, tree nuts, milk, eggs, soybeans, shellfish, fish and wheat. Furthermore, the small risk is well under control. Any additional components added to a GM crop are clearly defined, easy to detect and can be tested for any allergic reaction or other toxic effect. These would be picked up in mandatory tests that are much the same as those for pesticides and food additives. This is what happened in the case of an experimental soybean with an added Brazil-nut protein. It was abandoned once the problem was recognized.

Safety Endorsements

A long list of relevant bodies have concluded that genetically modified food is as safe as any other food. These include: the WHO, the FAO, the United Nations Food Program, the International Society of Toxicology, the French Academy of Science and Medicine, the American College of Nutrition, the American Medical Association, the General Accounting Office (the investigative arm of the US Congress), the National Academy of Sciences (NAS), the Royal Society and the British Medical Association.

Safer and Healthier

GM foods are not only safe, they have the potential to make food even safer and healthier.

Removing allergens. Eliminating or reducing the allergenic properties of food would be a major service to the significant proportion of people who suffer from allergies. Using gene silencing techniques, which reduce or shut off production of the offending protein, researchers have already grown low-allergy rice, wheat and soybean, while progress is being made with peanuts and prawns.

Healthier oils. In an effort to create healthier fats, researchers have modified the fatty acid composition of soy and canola in several ways. This includes oils with reduced or zero levels of saturates and trans-fatty acids and with high levels of oleic acid.[123] Plans are also afoot to introduce fish-type omega-3 fatty acids into oil-seed crops. This could be achieved by introducing genes from algae and marine micro-organisms.[124]

Better frying potatoes. A transgenic potato has been developed which contains a gene for an enzyme which greatly increases starch synthesis. The increased starch content makes the potatoes take up less fat during frying, resulting in a lower-fat product.[125]

More protein. In India, a gene was added to ordinary potatoes giving them a third more protein than normal, including substantial amounts of the essential amino acids lysine and methionine. The new gene comes from the amaranth plant, which grows in South America.[126] Protein-enriched maize and soybeans have also been produced[127] and researchers are seeking to improve the protein content of vegetable staples such as cassava and plantain.[128]

Antioxidants. Scientists have produced tomatoes with two and a half times the normal level of lycopene. Lycopene is thought to reduce the risk of several types of cancer and some forms of heart disease. However, it is normally difficult to increase the amount in one's diet and taking it as a supplement does not work.[129]

Another antioxidant that genetic engineering can help with is vitamin E. Studies show that vitamin E lowers the risk of cardiovascular disease, cataracts and some cancers, and it may slow the progression of some degenerative diseases such as Alzheimer's.[130] However, to achieve these results it needs to be taken in quantities that are not practical to obtain from our diet, e.g., four pounds of spinach per day or 3,000 calories of soybean oil.[131] Researchers are hoping to increase our intake by tinkering with a gene that converts the less potent gamma form of vitamin E to the more potent alpha form in soybean, corn and canola oils.

Vitamin A. Every year some 500,000 children in the developing world go blind because of vitamin A deficiency. Researchers are hoping to reduce this appalling statistic with the help of a gene from daffodils. This produces elevated levels of beta-carotene, which is then converted to vitamin A in the human body. Rice with this gene (called golden rice because of its color) has been crossed with local varieties of rice which are undergoing field trials and will hopefully be available to subsistence farmers in the near future.[132] Work is also progressing on developing a similarly enhanced mustard, which is grown widely in developing countries for its oil.[133]

Access to iron. Most people get too little iron: almost one third of the world's population is believed to be anemic, and possibly around one fifth of all malnutrition deaths are caused by a lack of iron.[134] Scientists are working on a variety of rice that has a higher level of iron in the grain and also makes it more accessible.[135] The amount of iron is doubled with the aid of a gene from the French bean, while accessibility is aided by two mechanisms. The first involves a gene from a fungus which counteracts phytate, a molecule that locks up about 95 per cent of the iron in the plant. The second involves a gene from basmati rice which makes a protein that aids iron absorption in the human digestive system.

Food tolerance. The vast majority of east Asians and blacks, and many whites, are intolerant of cow's milk. That is because their bodies do not produce enough of the enzyme lactase, which is needed to digest the milk sugar lactose. In France researchers are working to eliminate the problem by giving cows a gene that will cause them to manufacture their own lactase, which will be present in their milk.[136]

Many people cannot eat wheat, oat, rye or barley products because the gluten makes them ill. Consequently, British researchers are working on a process to remove from gluten the part which causes illness while leaving the part that is important for baking.[137]

Medical Uses

As well as the nutritional benefits of genetically modified organisms, there are medicinal ones.

Vaccines in food offer considerable promise for developing countries. Because they would be administered orally, they would avoid the horrendous number of HIV and hepatitis B infections presently caused by unsafe injections. They would also be inexpensive and require no refrigeration. Potatoes, tomatoes, carrots, bananas and rice are being developed containing vaccines for a range of diseases including food-borne E. coli, cholera and hepatitis B.[138]

A plant could possibly provide an edible form of immunotherapy for asthma. Tests with mice are promising: consumption of engineered lupin plants containing an asthma allergen from sunflower seeds protected the mice from a large, otherwise asthma-inducing, dose of the allergen in the air.[139]

Genetically modified plants, bacteria and animals are being turned into little factories churning out cheap ingredients such as proteins, enzymes and hormones for the pharmaceutical industry. The diseases being treated so far include hemophilia, cystic fibrosis and multiple sclerosis.[140]

Menaces can be neutralized. For example, a ryegrass is being developed with fewer hay fever allergens in its pollen[141] and work has been done that may one day lead to a malaria-resistant mosquito.[142] Also, new friends can be made, such as a gene-altered microbe that, when applied to the mouth, elbows out bacteria that cause tooth decay,[143] animals with tissues and organs suitable for human use[144] and plants that change color in the presence of landmines.[145]

Environmental Scares

GM crops are accused of posing a number of environmental risks. There is said to be a danger from gene flows which can create "super weeds" in the wild and cause "genetic contamination" of other crops. US Bt crops have also been accused of endangering the monarch butterfly.

'Super Weeds'

Critics raise the specter of genetically enhanced crops breeding with wild relatives to create a 'super weed' that could overwhelm the natural environment and curtail genetic diversity both among plants in general and among the existing wild varieties that provide the 'gene pool' for breeding better commercial varieties.

The first question to ask is how likely is such inter-breeding? To begin with, there need to be wild relatives in the region. This rules out wheat, corn, soybean, cotton and potato in most places where they are grown. The main possible concerns would be rice in Asia and Africa, corn and potatoes in Mexico and Central America, wheat in the Middle East and soybean in Korea and China.

The proximity of related species does not necessarily mean that they will inter-breed. They need to flower at the same time, share the same insect pollinator (if insect-pollinated) and be close enough for the transfer of viable pollen. The latter can easily be thwarted by creating a buffer zone planted with traditional crop varieties to minimize any possible effects of pollen flow to a neighboring farmer's field or to a wild plant relative.[146]

Would a trait provide a selective advantage? Some traits are obviously not a risk. For example, tolerance to a particular herbicide is not likely to confer an advantage to a plant in the wild because the herbicide is not encountered there. If the weed becomes a problem on farms or areas of human settlement, it can be controlled with some other herbicide. Other traits - such as resistance to pests or disease, or tolerance of hostile growing conditions such as drought or poor soil - could theoretically give a weedy relative an advantage. However, the likelihood is diminished when we keep in mind that wild plants by their nature are already stress tolerant. If they were not they would simply die out. Domesticated plants on the other hand have lost much of the hardiness of their ancestors. Farmers select for edible yield while making up for any drop in stress tolerance by a range of farm practices such as irrigation, soil improvement and pest control. Any reintroduced stress tolerance would have to be quite strong to compete with the wild varieties. One example is sunflower which has been given a gene from wheat to resist white mould. If this genetically modified variety were introduced, gene flow would be inevitable because the crop is grown in the same regions as the wild varieties. However, it would have very little effect because the latter already have resistance to white mould.[147]

It should also be kept in mind that the risk faced is identical to the risk from domestic plants bred conventionally for stress tolerance. Ironically, genetic engineering opens up a number of ways of ruling out gene flow. One approach is to incorporate what has been called a 'terminator' gene into the plant, which renders it sterile; another is to pass on the attribute on the maternal side so that it is not transmitted through the pollen.

Genetic Contamination

Similar to the 'super weed' ruckus was the one made over the claimed appearance of genes from genetically modified maize in Mexican landraces. This was dubbed 'genetic contamination'. Landraces are the varieties developed by small-scale farmers over the centuries and have evolved through selection to thrive under particular environmental conditions and to meet local food preferences.

It was finally determined that the claim was unfounded.[148] However, even if there had been a gene transfer, it would have been no different in nature from those involving conventional modern varieties, which have been occurring for many decades without causing any problems. If plants are superior from the farmers' point of view, their seeds will be retained. If they are inferior they will not be.

Monarch Butterfly

Opponents of GM crops have made a big deal out of a supposed threat to monarch butterflies from Bt crops.[149] The butterfly is something of an enviro icon, and no anti-GM rally is complete without the presence of a number of eco-bubble-brains dressed up as butterflies. The kerfuffle started when the journal Nature in 1999 published a paper by Dr. John Losey of Cornell University showing the toxic effects when monarch butterfly larvae in a laboratory study were fed their favorite food, milkweed, covered with pollen from Bt corn.

A number of objections have been raised to the study. To begin with, only one type of Bt corn pollen was tested from among the many types of Bt corn in use. Recent studies indicate that a few types of Bt corn pollen may kill or slow the growth of monarch caterpillars, while other types of Bt pollen have no harmful effect.

More importantly, the laboratory results were in no way indicative of the real world risks to the butterfly. In the field, the risks to the larvae are minimal for a range of reasons: corn pollen is produced for only a short time during the growing season; farmers control milkweed in and around their fields, just as they control other weeds; corn pollen is heavy and is not blown far from corn fields by the wind; and even if milkweed were within a few meters of cornfields, pollen density on the leaves would not be high enough to pose a danger.

The EPA, a body that is more often than not the greeny's friend, has given Bt crops a clean bill of health. After evaluating the evidence, it concluded that Bt corn does not impact on monarch butterfly populations and that a hazard in the laboratory does not translate into a risk in nature.[150] Finally, if there had been a problem with Bt corn it would be resolved by varieties currently being developed that only express Bt in the stalk. Only insects that actually attack the plant would have any possibility of being affected.[151]

Environmental Benefits

Often ignored are the environmental benefits of GM crops. These include reducing impacts on the environment and providing remedies for past damage.

Less Use of Pesticide

The insect resistance of Bt crops has led to a greatly reduced use of pesticide. US corn growers, for example, have reduced pesticide treatment for the European corn borer by about a third[152] and according to one projection the use of pesticide on the corn crop will drop by 70 per cent once resistance to corn rootworm is also incorporated into seeds.[153] It has been reported that Chinese farmers of Bt cotton have slashed their use of pesticides by about 80 per cent.[154] In Australia, pesticide use on Bt cotton crops was about half that on conventional crops, and with a newly released version, trials suggest a 75 per cent reduction.[155] In India the use of Bt cotton has cut pesticide spraying by two thirds.[156] The recently announced blight resistant potato promises large reductions in the use of fungicide and insecticide, given the high levels currently used to control the disease.

The adoption of herbicide resistant crops is also leading to a more environmentally friendly herbicide regime. Because the crop is resistant to it, a post-emergent herbicide can be sprayed over the crop, killing any weeds that may have sprouted. The herbicide used - glyphosate, sold under the trade name Roundup - is required in lower quantities because it kills such a wide range of weeds, removing the need for a multitude of herbicides. It is also environmentally quite benign. It has extremely low toxicity to people and animals, and it binds well to the soil until it completely breaks down, so there is very little that can run off into water supplies.[157]

Less Tilling of the Land

Because herbicide resistance allows crops to be sprayed after they have been planted, herbicides can control weeds more effectively, reducing the need to till the soil for that purpose.

Reduced tillage has a range of benefits in terms of conserving agricultural resources and the environment generally. It dramatically reduces soil erosion, which saps fertility and clogs up rivers and streams, carrying pesticides and fertilizer with it. The crop mulch shades the ground and slows evaporation, and the improved soil structure resulting from less plowing actually increases the movement of water into the soil following rain or irrigation and holds it there, which means less irrigation is necessary.[158] Low tillage also means fewer tractor passes and less fuel consumption. According to one study, no-till saves on average about 3.9 gallons of fuel per acre.[159] Studies by groups such as the Conservation Technology Information Center and the American Soybean Association attest to the fact that herbicide resistant crops have significantly encouraged the use of low-till methods.[160]

Higher Yields Mean Less Pressure on Resources

A primary objective of GM crops is to increase yields, and to the extent that they succeed they lessen the demand for resources such as land, water and energy and leave more land for wildlife. In the case of the crops that have been used to date, the gains have been through significantly reducing the losses caused by weeds and pests.

Environmental Remediation

Genetic engineering is about to bring a revolution in bioremediation, the use of organisms to remove contaminants from water and soil. University of Georgia researchers have modified a poplar tree which can suck mercury from the soil with the help of a bacterial gene which bestows a tenfold increase in mercury tolerance.[161] Another group of researchers has added a gene from the E. coli bacterium and another from soybeans to make a distant relative of cabbage into a connoisseur of arsenic. The plant pumps arsenic from the soil and stores it in its leaves, where it can be easily harvested and disposed of.[162] Biologists at the University of California at San Diego have modified a relative of the mustard plant so that it sucks up various heavy metals into its stems and leaves, including lead, arsenic, mercury and cadmium.[163] At the University of Washington researchers have inserted a gene for a mammalian liver enzyme into a tobacco plant, enabling it to absorb and degrade a variety of chemicals, including the most widespread groundwater contaminants, the chlorinated solvents.[164] And researchers at Ohio State University have engineered a form of algae to make it extract copper, zinc, lead, nickel, cadmium, mercury and other metals from contaminated water.[165]

Crop Lands, a Declining Resource?

While the prospects are good for more productive plants and livestock, will achievements in these areas simply be compensating for a decline in the land resource base rather than actually increasing output? This will mainly depend on the following factors which we will discuss in turn:

·        the amount of extra land that can be brought into crop growing;

·        the encroachment of the built environment onto cropland; and

·        the extent of soil degradation which either makes land unusable or seriously reduces yields.

Extent of the Resource

How much extra land could be opened up to crop production? It has been estimated that the 1.5 billion hectares currently used for crop land represents about 36 per cent of the land that is to some degree suitable for that purpose.[166] In other words, there is an extra 2.7 billion hectares. This gives a total of 4.2 billion hectares, which is about a third of the non-ice-covered land area. The remaining 9 billion hectares or so is excluded mainly because of unsuitable soil and/or climate.
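
As a rough check, these figures hang together arithmetically. A minimal sketch in Python, using only the numbers just cited (the 13 billion hectare ice-free total is implied by them):

    # Rough arithmetic check of the cropland figures cited above.
    current_cropland = 1.5e9          # hectares now under crops
    share_of_suitable = 0.36          # cropland as a share of all suitable land
    total_suitable = current_cropland / share_of_suitable
    extra = total_suitable - current_cropland
    print(round(total_suitable / 1e9, 1))   # ~4.2 billion hectares suitable
    print(round(extra / 1e9, 1))            # ~2.7 billion hectares extra
    # 4.2 billion out of roughly 13 billion ice-free hectares is about a third,
    # with the remaining ~9 billion hectares excluded by soil or climate.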

Of course, most of this extra 2.7 billion hectares would never become available. Some of it is covered in human settlement or is too inaccessible, while a large part is taken up with forests and other natural areas that are (or should be) mainly committed to uses in their existing state such as conservation, water catchment and timber.

However, it only requires a relatively small proportion of this area to be available and of reasonable quality for it to represent a significant addition to crop area. According to Buringh and Dudal (1987),[167] out of a total forest land of 4.1 billion hectares, 100 million had high crop potential and 300 million had medium potential, while out of 3 billion hectares of grasslands, 200 million have high crop potential and 300 million have medium potential - up to 900 million hectares in all. With current crop land at around 1.5 billion hectares, even a few hundred million would be a significant addition. This includes some of the old cropland in places such as North America, Europe and Argentina which could be returned to use if costs were lower or prices higher. Of course, in the case of land currently used for grazing, one would have to take into account the loss of livestock production.

Then we have large areas which are presently not used because of degradation or natural infertility but which could be brought into use with improved soil management methods and new crops that can tolerate the poor soil. These include the large areas with acid soils, particularly in South America and central Africa[168] and also some of the land that is very saline either naturally or through human mismanagement.

Next we have good land which up till now has been unusable because of the lack of fresh water. This could be brought into use by the desalination of sea water and brackish groundwater. This process, which is discussed in more detail in the section on water resources, is getting cheaper by the day with new innovations and industry maturity.

So overall, there seem to be good reasons to conclude that, while there are not the vast virgin lands of yesteryear, the extra land still available will nevertheless provide a sizeable cushion against the impact of increased human settlement and soil degradation. In fact, those losses would have to be quite large in order to actually reduce the land resource base.

Encroachment on crop land by the built environment is primarily an issue for developing countries and the US. Developing countries will house 90 per cent of the population increase expected over the next half century and at the same time will undergo a great deal of space consuming economic development. The US is the only major developed region expected to undergo a population increase in the foreseeable future, due mainly to high immigration and high fertility rates among immigrants.

Europe is expected to shrink from its present 726 million to 632 million in 2050, opening up the prospect that the area under built environment may actually decline and the availability of cropland increase.[169]

The FAO estimates that on average people in developing countries use about 0.04 hectares of built environment per head.[170] With the population expanding by two billion or so by 2025, an extra 80 million hectares would be required. If this were all cropland it would represent 5 per cent of the total. By the time the population reaches 9-10 billion mid-century, the increase between now and then would be 120-160 million hectares, or 8-10 per cent of total cropland.

Of course not all of this will in fact be suitable for crops, or be premium grade if it is. Nevertheless, it is probably correct to assume that a significant proportion would be, given that urban centers are often sited on fertile agricultural land in coastal plains or river valleys. Alexandratos surmises that about 60 per cent of any increase would be on potentially arable land, which includes both actual cropland and land that could be usable.[171] So this would mean 3 per cent being taken out by 2025 and around 5 or 6 per cent by 2050.
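
The arithmetic behind these percentages is simple enough to set out explicitly. A minimal sketch, using only the per-head figure and the population increments cited above:

    # Built-environment encroachment on cropland (figures as cited above).
    ha_per_head = 0.04        # built environment per person, developing countries
    cropland = 1.5e9          # total cropland in hectares
    arable_share = 0.6        # Alexandratos: share of new settlement on arable land
    for extra_people in (2e9, 3e9, 4e9):   # by 2025, then the 2050 range
        built = extra_people * ha_per_head
        print(built / 1e6, "Mha,",
              round(100 * built / cropland, 1), "% of cropland,",
              round(100 * built * arable_share / cropland, 1), "% on arable land")
    # 2 billion extra people -> 80 Mha, ~5% of cropland, ~3% on arable land
    # 3-4 billion extra -> 120-160 Mha, 8-10% of cropland, ~5-6% on arable land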

With the rate of urbanization increasing during this period, a growing proportion of the expansion in the built environment will take the form of urban expansion. According to one study, for all developing countries, the annual loss of arable land transformed to urban uses due to expanding urban populations is estimated at 476,000 hectares.[172] This is 12 million hectares over 25 years and 24 million over 50 years.

When looking at urban expansion in developing countries, it needs to be kept in mind that a significant proportion of urban land is still used for agriculture by households, for example, 28 per cent of Beijing and 60 per cent of Bangkok.[173] These urban activities can take many forms:

Horticulture takes place in home sites, parks, rights-of-way, rooftops, containers, wetlands, and greenhouses. Livestock are produced in zero-grazing systems, rights-of-way, hillsides, coops, peri-urban areas, and open spaces. Agroforestry is practiced using street trees, home sites, steep slopes, within vineyards, green belts, wetlands, orchards, forest parks, and hedgerows. Aquaculture is practiced in ponds, streams, cages, estuaries, sewerage tanks, lagoons, and wetlands. Food crops are grown in home sites, vacant building lots, rights-of-way for electric lines, schoolyards, churchyards, and the unbuilt land around factories, ports, airports, and hospitals.[174]

Another thing to keep in mind is that in many places there is going to be an absolute drop in the rural population, and the corresponding decline in rural settlement will free up some land for crops.

In the US, the total developed area, including non-urban infrastructure, was estimated at 5.2 per cent of total land in 1997.[175] That is 48.4 million hectares, or 0.17 hectares per head. With the US population expected (on the most likely assumptions) to increase by another 100 million over the next half century, that would mean another 17.2 million hectares, assuming the average area per head remains the same.

US cropland covers 455 million acres (182 million hectares).[176] If all of the increase in developed area were on cropland, it would represent a 9 per cent reduction in the latter. However, that is not likely to be the case given that many fast growing areas such as Florida and Arizona are not areas with high concentrations of prime crop land.[177]
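
Again, the US figures can be checked with a few lines of arithmetic. A sketch, using the estimates cited above:

    # US encroachment arithmetic (inputs as cited above).
    ha_per_head = 0.17       # developed land per head (48.4 Mha in 1997)
    extra_people = 100e6     # expected population increase over 50 years
    extra_developed = extra_people * ha_per_head
    cropland = 182e6         # US cropland in hectares (455 million acres)
    print(round(extra_developed / 1e6, 1), "Mha")    # ~17 Mha of new developed land
    print(round(100 * extra_developed / cropland))   # ~9% of cropland, at worst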

Curiously, the rural population in the US takes up a lot of residential land. In 1997 this was estimated to be 73 million acres (30 million hectares), typically 8 hectares or larger for each household.[178] Presumably a significant proportion of this could be placed under crops if costs were lower or prices higher.

So, to sum up, while the encroachments by human settlement are bound to be significant, they will not be on a scale that will make them a threat to food security. This is particularly so when we keep in mind that the process is gradual and that much of the increase will be a generation or two away, at a time when agriculture should be much more productive than it is now.

Soil Degradation

The next question is whether continuing soil degradation is going to seriously undermine agriculture's resource base. Farming practices can harm the soil in a range of ways. Water and wind erosion cover the biggest areas and are the main problem for rain-fed cropland. For irrigated land, the main concern is increasing soil salinity.[179] Other significant forms of degradation include loss of organic matter and nutrient depletion.

Erosion occurs where soil is dislodged and removed by water or wind. The impact of any level of erosion on productivity will depend on the depth of the topsoil. Salinity is caused by excessive irrigation and poor drainage, leading to a build-up in the soil of salt left behind by evaporating water. Nutrient depletion is due to insufficient application of fertilizer or to applications in the wrong proportions. Low levels of organic matter degrade the physical properties of the soil so that it loses the ability to hold water and to retain and release nutrients.

The extent of degradation is not well understood. There is a serious lack of detailed studies and conflicting interpretations of what is known. For example, in the case of India, estimates by different public authorities vary from 53 million up to 239 million hectares.[180]

Notwithstanding this uncertainty, there is general agreement that while soil degradation is a major problem most land is not seriously affected. Studies reviewed by Scherr suggest that soil quality on three-quarters of the world's agricultural land has been relatively stable since the middle of the twentieth century.[181] Also, at least to this stage soil degradation has not had a serious overall impact on crop productivity.[182]

Of particular importance is the fact that degradation is not a serious constraint on food production in the temperate regions of the world. These include most of North America and Europe. Their soils are the result of glaciation in the last Ice Age, are deep and fertile and are fairly resistant to degradation.[183] And they are better managed by modern agriculture.

Furthermore, a lot of soil degradation is on lands that while extensive in area are not major contributors to total food production because of inherently poor growing conditions. The climate or terrain is unsuitable and the soil is inherently of low fertility.[184]

Soil degradation has also had its share of alarmism. During the 1970s and 1980s, so-called desertification received a great deal of attention. It was believed that deserts such as the Sahara were spreading irreversibly. However, since then remote sensing has established that desert margins ebb and flow with changes in the climate, and studies have revealed the resilience of crop and livestock systems and the adaptability of farmers and herders.[185] In the case of wind erosion in North America, past concerns showed insufficient recognition of the fact that erosion usually involves soil being blown from one field or farm to another and hence no loss to agriculture. According to Crosson and Anderson, US studies have found very small long-term yield effects due to erosion. They indicate that if erosion were to continue at the same rate as in 1982 for 100 years, national average yields in the US would only be reduced by 3-10 per cent.[186]

Arguably the main soil problems are (1) salinity and the excessive use of nitrogen relative to other nutrients in a lot of irrigated farming in Asia[187] and (2) the grossly inadequate use of fertilizer of any kind in Sub-Saharan Africa.

There are a range of countermeasures that can be taken against degradation. In some cases problems can be remedied and in others preventive measures adopted to avert or retard further damage.

To a considerable extent the ability to take effective preventive and remedial action depends on technical capacity, which in turn is a function of the level of modernization and the stage reached by science and technology. Where agriculture is backward, many soil and land management measures are not possible because there is no access to the knowledge, infrastructure, inputs and equipment made possible by modern science and industrial development. Backwardness limits knowledge of the soil and its vulnerabilities and the ability to keep track of and analyze any changes in its condition. Nor are there the resources to carry out measures such as earthworks to prevent erosion and better irrigation systems to reduce salinity.

A change of institutional or political conditions will in many cases also make a major difference. Land management and agriculture generally will benefit if there is a government that is willing and able to progressively increase infrastructure, extension services and research and does not simply see agriculture as something to be taxed for the benefit of the ruling elite and its urban support base. A change in the incentives facing farmers in developing countries would also improve how they respond to the problem. Greater land ownership among farmers would mean a greater willingness to invest in measures to conserve land because their future rights to use it are more secure. And ending the common policy of subsidizing water and nitrogen would also assist in the battle with salinity and nutrient problems. Funding of research is critical in soil management as with other aspects of agriculture.

Below we look at the main forms of degradation and the measures that can be adopted to deal with them.

Erosion

Wind and water erosion can be prevented by a range of measures. The movement of wind and water can be impeded or diverted by planting trees, hedgerows and grass strips and the construction of terraces and storm water drains. And, as we mentioned above, the soil can be protected by conservation tillage which minimizes disruption of the soil surface and maintains a cover of plants or plant litter.

Salinity

Estimates of the rate at which land is being seriously impaired by salinity vary considerably. One claims that 0.5 million hectares per year are being affected while another claims 2 million hectares.[188]

Measures to prevent the problem include additional drainage, better canal lining or use of pipes, and more judicious water applications. Remedial action can also be taken where the problem has emerged. Planting salt tolerant trees and grasses, which "suck up" the salt, is one approach,[189] and plants are being bred that are particularly suited to this job. Another approach, which can be applied in some cases, is to lower the water-table below the root zone and flush the salts away to newly constructed subsurface drainage systems. According to Conway, writing in the mid-1990s, the cost of doing this in India was of the order of $325-$500/ha.[190]

Loss of Organic Matter

For loss of organic matter, the answer often lies in leaving more of the crop residue in the field and making greater use of livestock dung. However, in many parts of the developing world these are used for fuel, so improvement may have to await ready access to modern energy sources such as electricity and fossil fuel.

Nutrient Mining

In some particularly backward regions, especially Sub-Saharan Africa, only further economic development and higher incomes will end what is often referred to as nutrient mining, where nutrients taken from the soil either by plants or leaching are not replaced by adequate applications of fertilizer. In this region fertilizer use per hectare is only about 10 per cent of the global average[191] and will have to increase about four times to meet nutrient needs at the current level of production. Generally more nitrogen is required than potassium, and more potassium than phosphorus.[192] Prices for fertilizer are high because of inefficient local production, high shipping costs for imports and poor transport. Where transport is particularly poor, fertilizer is simply not available. And most farmers could not afford it even if it were delivered to their door at world prices.
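
A quick calculation underlines how modest even that fourfold increase would be. A sketch, using the two figures just cited (the inference itself is mine, not from the sources):

    # Sub-Saharan fertilizer use after the needed increase (figures as above).
    current_vs_global = 0.10   # use per hectare as a share of the global average
    needed_increase = 4        # multiple needed to meet current nutrient removal
    print(current_vs_global * needed_increase)   # 0.4
    # Even a fourfold increase would leave the region at only about 40 per cent
    # of the global average application per hectare.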

In other areas such as China and India nutrient mining occurs even though relatively high amounts of fertilizer are used. This is because the mix is not in the right proportions for the plants' needs. Given that plants use nutrients in a certain proportion to each other, increasing the external supply of one nutrient enables plants to extract more of the natural supply of the other nutrients in the soil. The main problem is the overuse of nitrogen relative to the other macronutrients, phosphorus and potassium, and to micronutrients such as sulfur and zinc.

Farmers need a change in incentives so that they are less drawn to the short-term gains from nitrogen use and more heedful of the longer-term effects of nutrient depletion. This requires reduced poverty so that they are not living hand to mouth, changes in property rights so that they have more of a stake in the future productivity of the land, and an end to the common practice of subsidizing nitrogen. Better knowledge would also assist. This requires a greater general appreciation of the problem by farmers and the means to carry out the necessary soil testing and plant analysis.

Summing up on Land

So to sum up on the state of the land resource, the evidence indicates firstly, that there is still a significant amount of extra new land available; and secondly, that recent degradation has not been enough to significantly slow down average crop yield increases and large areas are not seriously affected. While this does not rule out the presence of a real and increasing problem, it does suggest that the resource as a whole is not in imminent, grave danger. Whether the situation improves or deteriorates in the future will depend on the extent that remedial and preventive measures are applied, and this in turn depends mainly on the pace of economic and social progress.

Water

The other major resource required by agriculture is water. As with land, there are concerns about whether the resource will be sufficient to meet our food needs. This will depend on the following factors:

·        how far we can increase our use of rain, rivers, lakes and groundwater;

·        how well we can stem or reverse the depletion or degradation of presently exploited resources;

·        the extent that we can become more efficient in our use of water, both in food production and in other activities that compete with agriculture for water; and

·        the prospects for tapping into the non-conventional resources, namely salty water and polar ice.

Harnessing More of the Resource

Some regions get all the water they need from the rain that falls on the field (green water). For others rain water is insufficient or at the wrong time. They have to rely on water brought in from elsewhere (usually by rivers) or local rainwater which has been stored in aquifers, dams and lakes (blue water). This is drawn off and distributed by irrigation systems.

Presently around 280 million hectares are under irrigation.[193] Over 70 per cent of this area is in developing countries, which are often in regions that are either arid or have monsoons that bring the rain all at once. China and India have about 20 per cent each.[194]

There is scope to expand this area significantly, although by how much is open to some dispute. Presently about 10 per cent of blue water is diverted or pumped for human use.[195] Much of the remainder is unavailable for a range of reasons. For example, rivers run through regions unsuited to farming or the local farmland has all the water it needs, and some water is required for navigation and environmental flows. The FAO has published what some consider an upbeat estimate of 200 million hectares of extra irrigated land in developing countries.[196] What we cannot possibly expect to achieve is the kind of expansion that occurred over the last 50 years when withdrawals were doubled[197] and the area of irrigation increased two and a half fold.[198]
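
Incidentally, if the two growth figures refer to the same irrigation water, they imply a substantial efficiency gain over the period. A back-of-envelope inference (mine, not a cited figure):

    # Implied change in withdrawals per irrigated hectare over the last 50 years.
    withdrawal_growth = 2.0    # withdrawals doubled
    area_growth = 2.5          # irrigated area grew two-and-a-half-fold
    print(withdrawal_growth / area_growth)   # 0.8
    # i.e., water withdrawn per irrigated hectare fell by about 20 per cent.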

Depletion and Degradation

Part of any expansion will have to make up for some deterioration in existing systems. Each year infrastructure becomes more dilapidated, more silt builds up in reservoirs, and aquifers become more depleted and in some cases mined out.

Turning these problems around will be one of the objectives of political and economic development over coming decades. The level of infrastructure investment in both rejuvenation and expansion will need to increase considerably from what it is at present. Ensuring that schemes are properly maintained will also require a revolution in management which is generally incompetent and corrupt.

Moves are afoot in many countries to reform their systems. This includes greater accountability for performance and participation by farmers in various aspects of management,[199] separation of service delivery from regulatory functions, and contracting out of operations and maintenance tasks to the private sector or non-government organizations.

The depletion of aquifers is a serious problem in some areas including many parts of India, China and the United States.[200] How important are they? It has been claimed that 10 per cent of the world's food production is dependent on aquifers that are being depleted.[201] Over-drawing of groundwater was estimated to have been 200 cubic kilometers in 1995, 8 per cent of withdrawals by agriculture.[202] On the assumption of equal water productivity this over-drawing would provide about 4 per cent of food given that irrigated land as a whole provides 40 per cent. However, because groundwater irrigation is more reliable than surface irrigation its contribution will be higher than that.
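
The share-of-food arithmetic here is worth making explicit. A sketch, using the percentages just cited:

    # Food dependent on aquifer over-drawing (figures as cited above).
    overdraw_share = 0.08          # over-drawing as a share of agricultural withdrawals
    irrigated_food_share = 0.40    # share of food produced on irrigated land
    # Assuming equal water productivity, the food owed to over-drawing is roughly:
    print(round(100 * overdraw_share * irrigated_food_share, 1), "% of food")
    # ~3 per cent, of the same order as the 'about 4 per cent' above - and more
    # once the greater reliability of groundwater irrigation is allowed for.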

The main problem with most aquifers is that there is no regulation of their use. They are a common pool resource and any individual farmer can drill a hole and install a cheap pump. This is compounded by the fact that in many countries farmers have managed to obtain large subsidies for electricity and diesel fuel, the biggest recurrent pumping costs. Governments will have to bite the bullet and take on the politically difficult task of removing these subsidies. Access to the resource also has to be regulated. One approach is to provide farmers with a right to a certain quota which would be assigned once a study had determined what was a sustainable level of total use or acceptable level of depletion. Farmers who pump more than their quota would then be either charged very high prices or forced to buy pumping rights on an open market from others not using their full entitlement.

On the supply side, depletion can be addressed by taking measures to increase the rate of aquifer recharge through various 'water harvesting' techniques which capture some of the rainwater that presently evaporates. These include containing the water behind dams or in ponds so that much of it can be absorbed into the ground, or digging recharge wells or cisterns that drain water from surrounding higher ground. The water captured would include floodwaters that do not flow into streams and rainwater that falls on areas other than cropland, such as pasture and wasteland. One particular proposal is to encourage, through subsidies if necessary, flooded paddy rice cultivation in the wet season on lands above the most threatened aquifers.[203] At this stage it is not known how far artificial recharge measures could go in countering large scale depletion.

More Efficient Use

An alternative to increased water supply is increased efficiency in use. Doubling the output from a given amount of water is just as good as doubling the amount of water. There are many ways that farmers can get more crop from each drop of water applied to the field. These can be divided into measures that increase the efficiency of water application in the field and measures that increase the plant's response to water.

The two traditional methods of water distribution that still dominate irrigation in most countries are flood irrigation, which covers the whole field with a layer of water, and furrow irrigation, which channels water from ditches to crops along slightly inclined parallel rows.[204] With these methods significant amounts of water are lost to evaporation, leaching or runoff. Better methods from this point of view are sprinklers, and drip irrigation where the water is delivered by pipes running along the surface or underground near the roots. To date these new methods have not been widely adopted, although where they have, the results have been dramatic. Cyprus and Israel are leading examples and show that they can be put to widespread use.[205]

Field management measures can ensure that both irrigation and rain water are better used. Increased crop residues or ground cover, made possible with low-till techniques, help retain water or melting snow that would otherwise run off or evaporate. An increased level of organic matter in the soil increases its ability to absorb and retain moisture.[206] Land leveling, with the help of cheap laser technology, can benefit both irrigated and rain-fed agriculture by reducing run-off and ensuring that water is distributed evenly. It has been reported that field leveling in a region of Arizona led to a water use decline of between 20 and 32 per cent and yield increases of 12 to 22 per cent.[207]

Water efficiency can also be improved by increasing our knowledge of the plant's water requirements at various stages in its growth. This knowledge can be combined with equipment monitoring the field for information about soil moisture and the condition of the crop, and can even be used to trigger water applications.[208] Soil moisture can be measured by fairly simple and inexpensive devices such as gypsum blocks containing two electrodes which are buried at several locations and depths in root zones. A pocket-size impedance meter can then measure changes in moisture content.[209]
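
To illustrate the kind of trigger logic involved, here is a minimal sketch in Python. Everything in it is hypothetical: the function, the threshold and the readings are illustrative only, not taken from any particular system.

    # Hypothetical sketch: trigger irrigation from buried soil-moisture sensors.
    def should_irrigate(readings, threshold=0.25):
        # Average the moisture fractions reported by the buried blocks and
        # compare with the crop's threshold for its current growth stage.
        average = sum(readings) / len(readings)
        return average < threshold

    # e.g., impedance-derived moisture fractions from blocks at several depths
    sensor_readings = [0.22, 0.27, 0.19, 0.24]
    if should_irrigate(sensor_readings):
        print("Apply water")    # a real system would open valves instead
    else:
        print("Hold off")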

Of course, harnessing the water and applying it to the land as efficiently as possible is only half the story. The ultimate measure of efficiency is the final harvest achieved. This will also depend on the choice of plant varieties and measures taken by the farmer to maintain soil quality and to protect the crop from various stresses.

The development of plant varieties that put more of their effort into producing grain rather than stalks or leaves, or that speed up or bring forward the grain growing phase will mean more final output for a given amount of water. Likewise, having plants that cope better under stress means less water is wasted on plants that end up dying or underperforming.

Water efficiency can also be improved by using plants that require less water. We can switch to less thirsty crops. For example, growing sorghum instead of corn as stock feed would lower water needs by 10-15 percent and sunflowers instead of soybeans as an oil crop would reduce water by 20-25 percent.[210] Or we can breed plant varieties that require less water. This includes plants that can grow in drier areas where nothing of interest could grow before. Another approach is to develop plant varieties that are more tolerant of saline water hence creating a water resource out of what was otherwise unusable.

These methods of increased efficiency in water use should take us a long way to ensuring that the water supply is sufficient for our needs. An important impetus to greater efficiency would be an end to the heavy subsidizing of water through under-pricing. At the moment prices are generally nominal and collection rates low.

Competition from Non-Agricultural Uses

Agriculture can expect to face increasing competition for water from non-agricultural uses. At the moment they make up about 30 per cent of withdrawals - about 20 per cent for industry and 10 per cent for municipal use.[211] This demand will increase in developing countries as their populations and economies grow. However, there is much that can be done to keep non-agricultural uses of water in check.

Having a water system that does not leak is a good start. In many cities in developing countries a large proportion of water is lost to leaks in the system because of poor maintenance.[212]

Another part of the solution is to achieve most outcomes using less or even no water. For cooling in electricity generation, water-free technologies can be used such as dry cooling towers. In production, innovation can bring forth new water saving technologies. For example, at the Oberti olive plant in Madera, California, where water is used in the curing process, they almost halved water use by reducing curing time from seven days to three.[213] Consumers can have the same need met with a less water intensive product. For example, reading the news on the Net requires no water whereas producing the paper used in the traditional tabloid or broadsheet requires a considerable amount.

It is not hard to imagine a whole range of innovations that could reduce water consumption in the home. Water-saving shower heads could be more widely used. Toilets that use little or no water could significantly reduce domestic consumption. A waterless, electrically powered toilet has been developed which has no odors or insect problems, and safely and effectively biodegrades human wastes into water, carbon dioxide and a soil-like residue.[214] Fumento cites the case of a new train toilet which sanitizes the waste and returns water for flushing and hand washing:

Something called a macerator chews up waste and feeds it into an aerated tank containing membranes coated with muck munching bacteria. The solids are broken down primarily into carbon dioxide and water, while the gas is pumped off and bacteria free water passes across the membrane. Some of the water is sterilized with ultraviolet light and returned to the flushing tank. The rest goes through a reverse osmosis device that filters out the remaining chemicals, such as proteins and urea, so that the water is entirely microbe free and can be used for hand washing. This way the system needs servicing only once a month to remove built up sludge.[215]

No doubt the computerized and automated kitchens of the future will be able to make more efficient use of water both in cooking and dishwashing. Future washing machines may work with less water or have their own recycling systems. There is even talk of waterless washing using nano-machinery that imitates the behavior of enzymes. It is also possible to imagine the development of fabrics that repel dirt and grease.

Reuse is another way of saving water. In some cases no treatment is required, e.g., washing or cooking water diverted to the garden or to the toilet cistern. In other cases various levels of treatment would be needed. For example, sewage and industrial effluent can be cleaned up using technologies that are now getting cheaper and more effective. This can be fed back into the municipal water supply or made available for specific uses.

San Diego is looking at a proposal to mix recycled sewage water with the city's drinking water.[216] The sewage water would undergo conventional tertiary and advanced treatment steps. Advanced treatment would include micro-filtration pretreatment, reverse osmosis, disinfection and nitrate removal. The re-purified water would then be blended with other local supplies. Upon withdrawal, the water would undergo final treatment, including conventional coagulation, mixing, clarification, filtration and disinfection, before introduction to the city's pipelines.

Waste water can also be made available to agriculture. In water starved Israel, for example, well over half of waste water is used for irrigation after treatment and it makes up about 20 per cent of the irrigation total.[217]

In many industrial activities, there is a great deal of scope for internal recycling. New water would only be needed as a top up where there are losses from evaporation or leakages, or the scale of operation is increased. A major user of water is the power industry for cooling. This is an area where far more recycling can be applied. Even in the sensitive area of food processing recycling seems to be an increasing option. The Californian olive processing plant previously mentioned reuses 80 per cent of its processing water with the aid of a membrane filtration system.[218]

Households, municipalities and industry can also do more to harvest their own rain water. Rain water runoff that goes down drains can be better used. Rain can be collected from the roofs of houses, factories and other large buildings, and stored in tanks. The Frankfurt Airport terminal, for example, collects water from its vast roof for such low-grade water needs as cleaning, gardening, and flushing toilets.[219]

So in a nutshell, households and industry can reduce their competition with agriculture by finding less water intensive ways of meeting their needs, by making more discarded water available to agriculture and by harvesting their own rain. All these approaches can be encouraged by having water charges that reflect the true cost of water and encourage more frugal use and the development of more water efficient technologies.

Non-Conventional Water Resources

There are two non-conventional sources of fresh water that we need to consider: (1) desalinated sea water and briny groundwater; and (2) polar ice. They are non-conventional in the sense that they have only been tapped on a small scale and would require a considerable amount of technical development before they could play a bigger role. They would also require a lot more capital and energy than the conventional resources.

Desalination

Seawater covers 70 per cent of the planet and comprises 95.5 per cent of all water. Brackish groundwater is found in vast underground aquifers throughout the world and often far inland in otherwise dry climates. It includes the vast supplies that accompany fossil fuel extraction.

So far desalination has been put to only limited use because of its cost. Desalting capacity is about 32.4 million cubic meters (8.6 billion gallons) per day.[220] This is a tiny fraction of present fresh water consumption. About half of this capacity is installed in Persian Gulf countries, where water from other sources is very limited and cheap energy is available to run the process.[221] Desalination plants are also found at specific locations, including island resorts, where there are no alternatives and demand is sufficient, even at the high price, to justify a facility. They are also sometimes used to bring slightly brackish water up to a standard where it can top up conventional supplies. Investment in new capacity appears to be quite healthy. In the US, for example, new large-scale facilities are being built or planned in southern Florida, southern California and El Paso, Texas.[222] The facility in Tampa Bay, Florida is the largest desalination facility so far in that country,[223] while San Diego hopes that planned facilities will provide 15 per cent of its water from the ocean by 2020.[224]

Virtually all desalination capacity is provided by either thermal or membrane units. Each provides roughly half of capacity, although membrane technology is edging ahead.[225] In the distillation process salt water is heated to boiling point to produce water vapor, which is then condensed to form fresh water. In the membrane process the salt and water are physically separated: electrodialysis (ED) uses voltage to separate the salts, whereas reverse osmosis (RO) uses water pressure.[226] Most membrane facilities use RO; significantly fewer use ED. Generally, distillation and RO are used for seawater desalting, while low pressure RO and electrodialysis are used to desalt brackish water. Treating brackish water is far cheaper than treating seawater.

Costs have fallen significantly over the last decade and this is expected to continue. Like other industries, desalination has benefited from a range of advances such as better materials to choose from and improved computerized management of operations. Membranes are achieving faster flow rates, longer lifespan, less fouling[227] and greater energy efficiency.[228] At the same time their cost of manufacture is declining with increased automation.[229] It is believed that a better understanding at the molecular level of the RO process will lead to faster flow rates and better salt rejection.[230]

Completely new technologies may offer possibilities for much greater cost reductions. A number are already in view and expectations are that, with a greater research effort, others could be around the corner.

One process at the early commercial stage is Rapid Spray Evaporation, which ejects water through a nozzle into a stream of heated air. Because the water forms a fine mist with a vast surface area, it evaporates quickly, leaving behind salt in dry form or as a supersaturated solution easily converted to sea salt.[231] The company developing the technology, AquaSonic International, had at the time of writing started producing small portable units and is in the throes of developing the technology for large-scale plants.

A modified reverse osmosis process is being developed at New Mexico Tech.[232] It uses cheap clay membranes that do not require the usual water pretreatment, operate under lower hydraulic pressure, produce a solid salt waste and yield 100 per cent water recovery.

Reassessing some of the many past failures in the light of subsequent advances in scientific and technical knowledge could prove fruitful. For example, knotty design problems may be sorted out with new computational modeling techniques and the technology made feasible with new materials and production processes.[233]

Then there are totally new concepts. One that looks promising is a nanotube-based membrane being developed by researchers at the Lawrence Livermore National Laboratory.[234] A field of nanotubes functions as an array of pores which allows water molecules through while keeping salt and other unwanted molecules at bay. And, despite their diminutive dimensions, these pores allow water to flow at faster rates for a given pressure than conventional reverse osmosis membranes, meaning less energy is required.

While desalination can expand the water resource, it does so by placing greater demands on other resources, particularly energy which it would use in large quantities even with considerable improvements. Our ability to meet our increasing energy and raw material needs is discussed in the next chapter.

Polar Ice

Three quarters of the world's fresh water is polar ice. It starts out either as snow or as seawater, which loses its salt when it freezes. The quantities are enormous: there are tens of millions of cubic kilometers of ice. In comparison, the 2,500 cubic kilometers of water we withdraw for irrigation is minuscule.[235]

At the moment ice exploitation is just a 'boutique' industry catering to the bottled water and spirits market. Its appeal is that the ice is extremely pure and does not require the normal extensive range of treatments. A specially equipped ship comes alongside an Arctic iceberg located in a quiet cove and cuts off chunks, which are then thawed. However, to ship quantities that make a significant contribution to our irrigation needs would require a massive fleet of supertankers. Just supplying 5 per cent of current levels would require over 700 deliveries per day by 500,000 tonne supertankers.[236] Tankers moving the equivalent in water of our current oil consumption (i.e., 4.5 km3) would move less than 0.2 per cent of our present irrigation withdrawals.
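
As a rough check, the arithmetic can be sketched in a few lines of Python, using the figures quoted above (one tonne of fresh water occupies one cubic meter). The result lands close to the quoted deliveries figure, with the difference down to rounding in the underlying numbers:

    # Back-of-envelope check of the tanker figures quoted above.
    IRRIGATION_KM3 = 2500            # annual irrigation withdrawals, km3
    TANKER_TONNES = 500_000          # supertanker capacity; 1 tonne of water ~ 1 m3

    target_m3 = 0.05 * IRRIGATION_KM3 * 1e9      # 5 per cent of withdrawals, in m3
    trips_per_year = target_m3 / TANKER_TONNES
    print(round(trips_per_year / 365))           # ~685 deliveries per day

    # A fleet moving the water equivalent of current oil consumption:
    print(100 * 4.5 / IRRIGATION_KM3)            # ~0.18 per cent of withdrawals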

Towing or nudging icebergs with the help of ocean currents is another option that has been discussed. These would have to come from the Antarctic, because Arctic icebergs are insufficient to make a big difference. Of course, if you simply tried to tow an iceberg to Saudi Arabia or California, it would melt away before it arrived. A number of solutions have been proposed. One is to cut a bow into the front end and cover it with Kevlar, which would reduce melting to acceptable levels. The iceberg would then be cut up and melted down, and the water piped to irrigation systems and reservoirs. Another approach, which has been trialed, is to seal the iceberg in reinforced plastic so that melting is no longer an issue.[237]

Moving icebergs into unfamiliar territory could raise a range of environmental issues that would have to be taken into account. There may be an increased risk of oil spills in polar regions, although this should be countered by better ship design and more effective clean-up methods. There may be some destinations or routes where icebergs would be unwelcome because of an excessive intrusion on the environment. Moving through shallow seas, an iceberg could cool down the surrounding waters or scrape the bottom, damaging marine life. Ice cold fresh water runoff would also reduce the salinity of the surrounding sea water and could precipitate a sudden change in temperature. The plastic bag solution is less likely to cause these problems: the icebergs would be smaller, the ice might already be melted before it reached problem spots, and no fresh water would be released. However, given that such icebergs are typically no bigger than a tanker, one would be looking at a similar number of iceberg deliveries as tanker trips.

What about transporting thawed ice by pipeline? The pipelines would need to stretch for thousands of kilometers to the more arid regions, and in the case of the Antarctic much of the route would have to be underwater. There would also need to be many of them. The Baku-Tbilisi-Ceyhan pipeline from oil fields in the Caspian Sea to the Mediterranean Sea can carry one million barrels of oil per day. That is a lot of oil, but as water it is next to nothing: 0.06 km3 per year, a minute fraction of total irrigation withdrawals. Even under pressure, a pipeline is only the equivalent of a small stream. It can never compare to a river.
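
The pipeline comparison follows from a simple unit conversion (one barrel is about 159 liters):

    # One million barrels of oil a day, expressed as an annual water volume.
    BARREL_M3 = 0.159
    m3_per_year = 1_000_000 * BARREL_M3 * 365
    print(m3_per_year / 1e9)   # ~0.058 km3/year, against ~2,500 km3 withdrawn for irrigation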

So, in sum, at this stage it is difficult to foresee polar ice being an important contributor to our water supply.

Genetic Base

There is a concern that modern agriculture is narrowing the genetic diversity of crops by replacing a large number of local varieties (landraces) with a small number of widely used modern high yield varieties (HYVs). It is claimed that this "genetic erosion" is eliminating much of the gene pool required for breeding various favorable traits and is making us more vulnerable in the face of new stresses. However, the evidence does not back up these claims.

Despite the popular belief to the contrary, HYVs retain a considerable amount of diversity. There are many varieties in use at any one time adapted to a range of conditions. Furthermore, the level of diversity has been increasing continuously over the past few decades as adaptations to specific stresses have been fine tuned. In the case of wheat there is actually a more diverse range of varieties in the field than at the beginning of the 20th century.[238]

A considerable number of the traditional landraces are still in use, particularly in the areas from which the crops originated. In some regions they are still dominant, for example, rice in Sub-Saharan Africa and maize in West Asia/North Africa, Asia (excluding China), Sub-Saharan Africa, and Latin America.[239] In the case of wheat, landraces are still grown extensively in parts of West Asia, North Africa, and Sub-Saharan Africa (Ethiopia and Sudan).[240] West Asia, where much of agriculture originated, is still home to a vast array of traditional varieties of the lesser crops such as lentils, oats, barley, rye, almonds, apricots, cherries, figs, grapes, olives and plums.[241]

Arguably more important than landraces, particularly with improved breeding techniques, are the original wild varieties of domesticated plants. Because they survive without human care and protection, they are generally more resistant to biotic and abiotic stresses. These will not be found in farmers' fields but out in the wild.

If we count not only the diversity in farmers' fields but also what is in the fields and greenhouses of research stations and in gene banks, there has been an improvement over time for both rice and wheat. This adds a number of other dimensions to diversity, all of which have been increasing.

They include temporal diversity (average age and rate of replacement of cultivars); polygenic diversity (the pyramiding of multiple genes for resistance to provide longer-lasting protection from pathogens); and pedigree complexity (the number of landraces, pureline selections, and mutants that are ancestors of a released variety).[242]

Something else to consider is effective diversity. Traditionally, the diversity available to farmers was confined to whatever was in the local region, and what they could do with it was limited in the absence of modern plant breeding. Now we have breeding institutions that can pull germ plasm from anywhere in the world when breeding a new plant. They have speedier and more effective ways of screening for desirable traits and using them to create new commercial varieties. The greater effective diversity is evidenced by two facts. Firstly, traditional methods took millennia to greatly increase yields, while modern breeding methods have tripled them within a few decades. Secondly, yields are far more stable from year to year than they used to be, because modern varieties are more stress tolerant than their landrace predecessors. Finally, with genetic engineering we have a further extension of diversity, because it allows scientists to draw on the characteristics of totally unrelated life forms.

Fisheries

While not a major supplier of food energy, fish provide about one-sixth of all animal protein,[243] and in developing countries the harvest nearly equals the combined local production of cattle, sheep, pigs and poultry.[244] About 70 per cent of fish are caught while the rest are cultivated.[245] About 30 per cent of caught fish are used for non-food purposes, mainly animal feed.[246]

The catch has increased fivefold over the last 50 years.[247] However, there does not seem to be much if any room for the fish catch to continue growing. The FAO anticipates a small increase if fisheries are better managed and a decline if they are not.[248] They believe that between 70 and 80 per cent of fish stocks are fully exploited, overexploited, depleted or recovering from depletion.[249]

Remedial measures include reducing the capacity of fishing fleets, setting up marine reserves, removing government subsidies and assigning property rights to individuals or groups of fishermen to provide an incentive for good stock-management practices. Other threats to coastal fisheries that have to be dealt with are pollution and degradation of coral reef and mangrove habitats.[250]

Fish cultivation or aquaculture has prospects for significant expansion. At the moment output is concentrated on crustaceans and mollusks, freshwater carp in China and salmon. Around 80 per cent of mollusks are cultured, around 20 per cent of shrimps/prawns and around 33 per cent of salmon.[251]

Growing fish in captivity instead of catching them is comparable to the move on land from hunter-gathering to farming. It allows the development of better breeds and the adoption of management practices such as protection from predators, provision of better feed and optimal timing of slaughter. While the industry has a long way to go to catch up with the changes we have made on land, it has made some progress.

Tilapia, a freshwater, plant-eating fish popular in America, has been bred to be hardier and grow 60 per cent faster than the wild variety.[252] Genetically modified salmon are being developed which possess a gene that protects them from freezing when raised in icy waters and a gene that expresses a growth hormone so they reach maturity more quickly, while requiring less food.[253] Other areas of improvement being investigated by breeders include disease resistance and increased fertility. Feed suppliers have also had some success in improving feed efficiency. For example, the amount of feed used for growing salmon is 44 per cent of what it was 30 years ago.[254]

As with any other human endeavor, aquaculture can have impacts on the environment that need to be checked. Chemicals, uneaten feed, dead fish and fish feces from inland and shoreline aquaculture have contaminated drinking and irrigation water, seeped into aquifers, and affected coastal fisheries. Where there is limited water exchange, the decomposition of organic waste can contribute to local eutrophication and all the environmental problems that this can cause.

In intensive shrimp production about a third of the water has to be changed daily, and about half of it is fresh water needed to obtain the optimum salinity level. This call on fresh water can lead to a drop in groundwater levels, and large-volume pumping of fresh water and seawater also affects the biodiversity of the areas concerned.[255] Shrimp production has also caused extensive damage to wetlands and mangroves, and created infertile land through salinization.

Much of the remedy lies in improvements to the poor regulatory and institutional arrangements to be found in the developing countries involved. This is similar to logging where the government fails to properly protect land supposedly under its control.

Improved technologies and practices can make a difference. One area of success has been the development of more digestible feed formulations that leach less waste into the environment. For example, nitrogen waste for a given quantity of salmon is one sixth of what it was thirty years ago.[256] A shrimp farm in the US uses other fish to mop up shrimp waste.[257] Vaccines have also brought about great reductions in the use of antibiotics and other chemicals: antibiotic use in Norwegian aquaculture is now less than 0.5 per cent of what it was ten years ago.

We can expect to see the greatest growth in fisheries out at sea, where there is not the same competition for resources found on land or near the shore. This will take time to develop, if only because of the knowledge gaps and new investment involved. The technology to pen, cage or otherwise control the fish still has to be designed and built; and to domesticate a new species, knowledge is required of such matters as stocking densities, water quality, breeding conditions, animal behavior and precise nutritional requirements. For aquaculture to make a major impact on the food supply, there would also have to be large scale investment in pens and other containment facilities.

Future cultured fisheries out at sea will face environmental problems similar to some of those above, plus a range of new ones. One concern relates to 'genetic pollution' from domestic varieties breeding with wild ones. Like any environmental concern, it would need to be assessed on the evidence, case by case. However, given the option of breeding fish that cannot reproduce in the wild, this need never be an overriding problem. Then there is the effect on wild species and ecosystems generally of building large pens and cages and concentrating large numbers of domesticated fish in a relatively small area. These are similar to the issues that we have faced, and continue to face, in land based food production.

Non-Renewable Resources

One of the reasons modern agriculture is often slammed for being unsustainable is its use of non-renewable resources, particularly inorganic fertilizer and fossil fuels. Inorganic fertilizer refers to the three macronutrients when obtained from outside agriculture, in other words, not from the recycling of organic matter. Nitrogen is the most important, followed by phosphorus and potassium. Fossil fuels are important mainly in the production of nitrogen and as fuel for farm machinery.

Nitrogen Fertilizer

Inorganic nitrogen fertilizer is produced primarily from synthetic ammonia which is obtained by combining nitrogen and hydrogen. The ammonia is then used to produce various synthetic nitrogen fertilizers including the most common one, urea. Nitrogen makes up almost 80 per cent of the atmosphere (and much of the nitrogen not in the atmosphere eventually returns to it) while hydrogen is the most abundant element in the universe.

Natural gas is presently the most commonly used fossil resource input, both as the source of hydrogen and for the energy in the production process. According to estimates from the late 1990s, if natural gas had provided all the feedstock for hydrogen and all the fuel, its total consumption would have been just under 7 per cent of the world's natural gas extraction.[258]

Nitrogen fertilizer production is certainly very energy intensive, however, as discussed in the next chapter, a diverse range of options will allow us to meet our energy needs. The next chapter also examines the prospects for using water as a hydrogen feedstock instead of hydrocarbons. (Hydrogen is the H in H2O.)

Phosphate

Phosphate fertilizer is made from phosphate rock treated with sulfuric acid. Commercially viable reserves of phosphate rock are estimated to be 18 billion tonnes.[259] This would last 60 years if we consumed at twice our present annual level of 148 million tonnes. The reserve base is estimated to be 50 billion tonnes.[260] This also includes explored resources which are presently non-economic or would require at least some use of unproven technology. Assuming the same rate of consumption these would last 170 years.
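
The reserve-life arithmetic is straightforward; a minimal sketch in Python, using the tonnages just quoted:

    # Years of phosphate rock remaining at double present consumption.
    reserves_mt = 18_000        # commercially viable reserves, million tonnes
    reserve_base_mt = 50_000    # reserve base, million tonnes
    use_mt = 148                # present annual consumption, million tonnes

    for stock in (reserves_mt, reserve_base_mt):
        print(round(stock / (2 * use_mt)))    # ~61 and ~169 years, the 60 and 170 quoted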

With further exploration we should expect discoveries of extensive new deposits in the future.[261] Furthermore, large phosphate resources have been identified on the continental shelves and on seamounts in the Atlantic Ocean and the Pacific Ocean.[262] These cannot be recovered economically at the moment but this could change with new technologies.

Sulfur, needed for the sulfuric acid, is also plentiful. The sulfur in evaporite and volcanic deposits, and that associated with natural gas, petroleum, tar sands and metal sulfides, amounts to about 5 billion tons.[263] At double current usage rates of 59 million tons a year, these would last 65 years. The sulfur in gypsum and anhydrite is almost limitless, and some 600 billion tons is contained in coal, oil shale, and shale rich in organic matter.[264]

Potassium

Commercially viable reserves of potash or potassium oxide are estimated to be 8.3 billion tonnes and the total reserve base 17 billion tonnes.[265] At double current usage rates of 31 million tons a year, these would last 170 and 350 years respectively. The estimate for the total known resource is 250 billion tonnes.[266]

Fuel for Farm Machinery

Farm machinery takes a very small share of fuel consumption. US agricultural field machinery consumes annually no more than 1 per cent of the country's liquid fuels.[267] In terms of resource use it is a vast improvement on draft animals. Using grass as a fuel is extremely land intensive. In the US, the shift from draft animals to internal combustion engines released 30 million acres of prime arable land for crops.[268] To match the 1995 mechanical power of American tractors with horses would require at least 250 million of these animals and 300 million hectares, or twice the total of US arable land, to feed them.[269]

"Alternative" Agriculture is No Such Thing

While we can be optimistic about everybody being fed as a result of advances in the agricultural sciences and the modernization of Third World agriculture, we cannot be so optimistic about the 'alternative' agriculture espoused by the greens. With this alternative we would not be able to feed ourselves, and we would trash all remaining natural habitats in the futile attempt. It would have us do without 'unnatural' things such as inorganic fertilizer, chemical pesticides and genetic engineering.

Instead of getting nitrogen from the air as we mainly do now, we would have to confine ourselves to getting it in 'natural' ways such as from animal manure, human sewage and 'green manure' legumes. At the moment inorganic nitrogen provides the bulk of our needs so there would be a big shortfall to fill.

Organic enthusiasts reassure us that there is plenty of potential organic fertilizer going to waste: animal manure, crop residue, urban sewage and compostable landfill. However, experts at the USDA have calculated that the available animal manure and sustainable biomass resources in the US would provide only about one-third of the plant nutrients needed to support current food production.[270] What about using urban sewage sludge more broadly on crops? In the US, all of the urban sewage equals only 2 per cent of the nitrogen currently being applied in commercial fertilizers, and a significant proportion is already being used as agricultural fertilizer.[271] What about compostable materials from current urban landfill waste? Any urban waste would only be a small addition to the manure and other farm waste already being used by farmers in the US and elsewhere.[272]

The only 'unlimited' source of nitrogen for organic farmers is 'green manure' legumes grown in a crop rotation to provide nitrogen for subsequent crops. However, land put aside for this purpose is not available for crop production. So the land taken up both directly and indirectly for a given quantity of crop output is increased. In any year a significant proportion of the land is not taken up with growing final crops but rather with growing manure! So even if the yields in the fields growing the final crop were the same as for modern sensible agriculture, the average for cropland as a whole is going to be far less.

There is a similar story with chemical pesticides. If you let insects eat part of your crop rather than use pesticide, you need more land for a given output. Pesticide is not always the only remedy for pests, and in some cases, if wrongly used or overused, it can make the pest problem worse. However, this does not negate the fact that in most cases there is no substitute for chemical pest control. Furthermore, other measures tend to be adjuncts to pesticide use rather than substitutes. A study by Texas A&M University indicates that US field crop yields would decline drastically if farmers in that country substituted the currently available organic pest controls for synthetic pesticides: soybean yields would drop by 37 per cent, wheat by 38 per cent, cotton by 62 per cent, rice by 63 per cent, peanuts by 78 per cent, and field corn by 53 per cent.[273]

Other productivity-reducing and resource-wasting practices of organic farmers include forgoing the use of antibiotics and growth hormones for livestock, and rejecting genetic engineering.

The exponents of 'alternative' agriculture also tend to have a strong low-tech streak. Machines are seen as unnatural and dehumanizing, and their use as destroying the planet. In a similar vein, the small farmer is the hero and agribusiness the demon. Once again this is at odds with efforts to economize on the use of land and water. Two examples should make the point: the present move to precision farming and the prospect, some time in the future, of factory farming.

The first of these technologies allows farmers to micro-manage each separate patch of ground. Its particular stresses can be detected and specific solutions applied. Photography from satellites or aircraft can tell us a considerable amount about how the crop is performing in each field, particularly in the infrared and near infrared range. Farm vehicles can assess soil conditions with corers and electromagnetic induction (EMI) equipment while recording their position using GPS. This information can then be fed into a computer running geographical information system (GIS) software, which can present the data as maps, tables, graphs, charts or reports. A tractor can be directed by a computer to dispense variable amounts of pesticide, fertilizer and water on the basis of location information provided by the GPS and field condition data provided by the GIS. The process can also be put in reverse: different inputs, plant varieties and cultivation methods can be tried in different fields and their performance easily compared. So far this technology has only been adopted on a small scale. However, it will no doubt become more widespread once the technology matures, costs come down and farmers get used to the idea.
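
As a toy illustration of the variable-rate idea, the sketch below maps a GPS position to a cell in a field-condition grid and looks up an application rate for that cell. All names and numbers here are hypothetical, for illustration only, and not drawn from any actual farm system:

    # Toy variable-rate application: map a GPS fix to a grid cell in the
    # field-condition data and read off the application rate for that cell.
    CELL_SIZE_M = 20    # side of one management grid cell, meters (hypothetical)

    # Hypothetical per-cell nitrogen deficit (kg/ha) from soil sampling.
    nitrogen_deficit = {
        (0, 0): 12.0,
        (0, 1): 3.5,
        (1, 0): 0.0,
        (1, 1): 8.2,
    }

    def fertilizer_rate(x_m, y_m):
        """Return the fertilizer rate (kg/ha) for the tractor's position."""
        cell = (int(x_m // CELL_SIZE_M), int(y_m // CELL_SIZE_M))
        return nitrogen_deficit.get(cell, 0.0)

    print(fertilizer_rate(5.0, 30.0))    # position falls in cell (0, 1) -> 3.5 kg/ha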

In the longer term crop growing may actually become factory production carried out in multi-level buildings. This would allow for massive increases in output per hectare. A 20 story farm factory on one hectare of land would not just grow the equivalent of 20 hectares. Output would be even higher than that because crops would be grown under optimal conditions in terms of growing medium, lighting, climate and water supply.

A population of 10 billion people with grain output of 550 kilograms per person per year (double the present average), grown at a yield of 10 tonnes per hectare, requires an area of 550 million hectares. With 20 story facilities that is a land footprint of 27.5 million hectares or 275,000 km2. That is slightly larger than New Zealand or Colorado, and slightly smaller than Italy.

Per person the land area is 27.5 m2, the size of a living room. The building floor space per person of 550 m2 (about 23.5 by 23.5 meters) is half the area of a quarter acre suburban block. If each floor only needs to be a meter or two high, the volume involved is comparable to that of a typical bungalow. The construction investment required to accommodate our food production would then be no greater than that required to accommodate ourselves. So it is unlikely to be a daunting task for the economy of the 22nd century.
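
These figures can be verified with a short calculation, a sketch assuming the round numbers above:

    # Factory-farm arithmetic from the two preceding paragraphs.
    people = 10e9
    grain_kg = 550          # kg per person per year
    yield_t_ha = 10         # tonnes per hectare
    stories = 20

    floor_ha = people * grain_kg / 1000 / yield_t_ha
    print(floor_ha / 1e6)                       # 550 million hectares of growing floor
    footprint_km2 = floor_ha / stories / 100    # 100 ha = 1 km2
    print(footprint_km2)                        # 275,000 km2 of land
    print(footprint_km2 * 1e6 / people)         # 27.5 m2 of land per person
    print(floor_ha * 1e4 / people)              # 550 m2 of floor space per person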

There would also be greater water efficiency. The water would be delivered precisely as required, and none of it wasted on underperforming plants. The energy consumption of such food production methods would probably be greater than present methods. Pumping water to each floor, lighting, heating, cooling and building construction would require a lot of energy. However, at the same time, activities such as plowing, planting and harvesting would either no longer be necessary or be done with greater energy efficiency.


3

PLENTY OF RESOURCES

Aiming for Global Affluence

Being able to eat all that you want is, of course, only part of achieving the basic level of prosperity enjoyed by most of the one billion people living in developed countries. They also generally have well built and comfortable accommodation, ready access to infrastructure such as sewerage, electricity, communications, transport and hospitals, domestic labor saving appliances and an abundance of cheap food and clothing. They can also afford a regular night out and an occasional holiday. These relatively fortunate people can be found in western Europe, the US, Canada, Japan and various outposts such as Australia, New Zealand, Hong Kong, Singapore and Taiwan. At the moment, Portugal could be considered the cut-off point, with an annual GDP per capita of $18,000.[274] In the following discussion this will be taken as the minimum target that other countries need to achieve.

Once you move outside the top group, the level of economic development and living standards drops away quite quickly. The average income[275] for the middle group of countries between Portugal and China is only half the rich group minimum. It is home to 1.16 billion people of whom less than 220 million live in countries with average incomes over $10,000. Also, we find here a lot of very poor people because of wide income disparities. Brazil, Peru, Mexico and Colombia provide good examples of this. The same can be said for China where the bottom 20 per cent of its 1.3 billion people are much poorer than its average income of $5,600 suggests.[276]

Between China and India in terms of average income is a group mainly comprising the Philippines, Egypt and Indonesia, with a population of 550 million. Their average income is just over one fifth of the rich country minimum. Then we have the bottom group, topped by India. It includes Sub-Saharan Africa, Pakistan, Bangladesh and Burma. Here we find 2.45 billion people, or 38 per cent of the world's total. India, with a population of 1.1 billion, has an average income of $3,072, just over one sixth of the rich group minimum, while 1 billion live in countries with an average income of less than $2,000. Table 3.1 lists selected countries outside the rich group in descending order of GDP per capita, showing how many times this has to increase to reach our minimum target of $18,000.

Table 3.1: Factor required to increase GDP per capita to $18,000, selected countries (2004 data, purchasing power parity)

Country              Required factor a         Country              Required factor

Czech Republic       1.07                      Philippines          3.67
Hungary              1.21                      Egypt                4.41
Argentina            1.47                      Indonesia            5.26
Poland               1.50                      India                5.86
Saudi Arabia         1.53                      Vietnam              6.62
Mexico               1.83                      Pakistan             8.42
Russia               1.90                      Bangladesh           9.42
Thailand             2.20                      Sudan                9.49
Brazil               2.25                      Burma                11.39
Iran                 2.37                      Uganda               12.46
Turkey               2.46                      Nepal                12.60
Colombia             2.75                      Kenya                17.56
Algeria              2.76                      Nigeria              18.44
Ukraine              2.83                      Ethiopia             23.96
Venezuela            3.15                      Congo                25.59
Peru                 3.24                      Tanzania             27.91
China                3.24                      Somalia              33.64

Source: CIA World Factbook (online), accessed January 2006

a. For example, the Czech Republic's GDP per capita needs to be 1.07 times its present level.

 

If, as this chapter will argue, resources place no limit on affluence, the poorer countries can be expected to make up a lot of ground in the course of this century. There is no crystal ball. However, given recent performance and the kinds of growth rates required to reach $18,000 per head, it is possible to make some broad brush predictions and be fairly sanguine about a large proportion of the developing world achieving affluence this century, and the worst-off countries early in the 22nd. A lot will be achieved even if growth is less than stunning and there are periods of war, revolution or economic stagnation.

We will get off to a fairly good start if World Bank predictions for the next 10 years prove correct. The bank expects GDP per capita growth for developing countries to average 3.5 per cent per annum over that period.[277] This would be similar to the performance of the last five years [278] and provide a 41 per cent increase by 2015.

Countries that only need a doubling in per capita income to reach our minimum target would reach it by mid-century with an average annual growth rate of 1.5 per cent, while a tripling in the same time would require 2.3 per cent. These growth rates are not hard to imagine and, if achieved, would bring into the affluent camp Latin America, Eastern Europe, the former Soviet Union and about half the population of the Middle East and North Africa. The expected annual growth rates in per capita income for these regions over the next decade should put them on track if achieved: 3.5 per cent for the former Soviet Union and eastern Europe, 2.6 per cent for the Middle East and North Africa, and 2.3 per cent for Latin America.[279]
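
These figures all come from the same compounding rule: multiplying income by a given factor over n years requires an annual growth rate of factor^(1/n) - 1. A quick check in Python (taking mid-century as roughly 48 years away, which reproduces the quoted rates):

    # Annual growth needed to multiply GDP per capita by a given factor.
    def required_growth(factor, years):
        return factor ** (1 / years) - 1

    print(round(100 * required_growth(2, 48), 1))   # doubling by mid-century: ~1.5%
    print(round(100 * required_growth(3, 48), 1))   # tripling by mid-century: ~2.3%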

China needs a three and a quarter fold increase. If, as expected, it continues at the 6 per cent per annum it has averaged over the last 25 years, it will be halfway to $18,000 by 2015.[280] A further 10 years at the same rate would take it to the target (2025). Alternatively, a further 20 years at 3 per cent would achieve the same result (2035). If things do not go so well, it may be mid-century.

India's GDP per capita has been growing at around 4 per cent over the last decade.[281] If that rate continues for another ten years, GDP per capita would increase by 50 per cent and the country would then be a quarter of the way to the target. If it were to continue at that rate of growth, it would reach it by 2050. Otherwise, any average growth rate greater than 1.65 per cent will achieve the desired result before 2100.

Countries such as Bangladesh and Pakistan that have a GDP per capita of around $2,000 need to achieve good growth performances to reach $18,000 by 2100. However, they do not need to come near the record pace of Japan, South Korea and Taiwan that grew about 18 fold during the last century (or 3 per cent per year). Almost matching Spain, Finland or Italy would be sufficient. They grew 10 to 12 fold.

However, most of Sub-Saharan Africa would require a record performance to meet the same deadline. As discussed in the final chapter, present political conditions are totally unconducive to economic development, so expecting such a level of success does seem, at least from the present vantage point, excessively sanguine. However, the region would have to be fairly unlucky not to have made significant inroads into the political obstacles by mid-century. Furthermore, increasing per capita income will be helped by the slowdown, and then stabilization, of population growth during the course of the century.

What about the poor countries eventually catching up with the rich ones? This will require outstripping their growth rates for an extended period. There are a number of reasons why this is likely to occur:

·        at an early stage of industrialization where current production methods are relatively backward, moderately small investments in improvements can make a proportionately large difference;

·        at this stage, a lot of people are just learning to do their job and will make considerable improvements in their efficiency over the short to medium term;

·        being followers rather than leaders, poor countries can adopt technologies that have already proven successful. They do not have to worry about the technologies that did not make the grade or go through the initial teething problems. The adoption of US technology by Japan and western Europe after World War II are prime examples of this; and

·        there is the opportunity for technology leapfrogging where the newcomer goes straight to a cheaper technology. For example, mobile phones in India and Sub-Saharan Africa can provide people with telecommunications with far less investment than land lines.

 

********************

The following examination of the viability of widespread affluence in the 21st century and beyond looks at the extent of energy and raw material resources, and at our ability to limit the impact of industrialization on "life support" resources such as air, water, weather and natural habitat. Energy receives the most attention because of the range of technologies and resources involved while raw materials receive the least because it is mainly a matter of detailing their vast abundance and the considerable scope for substitution between them.

Our Energy Needs

In 2004 we produced 11,223 million tonnes oil equivalent (mtoe) or 470 exajoules (EJ) of commercial primary energy.[282] Just on 45 per cent was consumed by the 15 per cent (925 million) living in the rich countries, giving them an average per capita level five times that of the remaining 5.4 billion on the planet.[283]

We will need to increase this output considerably over the course of the century as poor countries develop and narrow the gap with the rich ones. According to the mid-range projection by the US Energy Information Administration (EIA), world energy consumption will grow at an average rate of 2 per cent over the period 2003 to 2030.[284] This is slightly lower than the average annual increase of 2.2 per cent from 1970 to 2002.[285] If this rate were to be maintained throughout the century, annual energy consumption by 2100 would increase more than 6.5 fold to 3146 EJ (77,115 mtoe) per annum. Depending on whether the population is closer to 9 or 10 billion, this would provide a global per capita energy consumption level a bit below or a bit above the present US level.[286] Around 140,000 EJ would be consumed during the course of the century. Mid-century annual consumption would be around 1,170 EJ while 38,000 EJ would be consumed between now and then.
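
The projection is simple compounding, and the cumulative figure is a geometric sum; a sketch of the arithmetic, which lands near the quoted totals (the exact values depend on the base year chosen):

    # Compounding the 2004 output of 470 EJ at 2 per cent a year.
    E0, g = 470, 0.02
    print(E0 * (1 + g) ** 96)               # ~3,150 EJ by 2100
    print(E0 * (1 + g) ** 46)               # ~1,170 EJ by mid-century
    print(E0 * ((1 + g) ** 96 - 1) / g)     # ~134,000 EJ consumed over the century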

Even lower growth rates would achieve significant results by the end of the century. A rate of 1.7 per cent would increase energy output 5 fold and give a world of 10 billion people the current rich country per capita average of 5.5 toe. A rate of 1.5 per cent would give a 4 fold increase and a per capita average of 4.5 toe.

While the rich countries will continue to increase their energy consumption it will be at a significantly slower rate than the poor ones. This is because of a static population for the group as a whole, slower economic growth rates at the technology frontier and being at a higher and less energy intensive level of development. In line with the recent past, the EIA projects a 1 per cent annual growth rate compared with 3 per cent for the poor countries over the next quarter century.[287]

If rich countries were to continue increasing energy consumption by 1 per cent per year with a static population, while overall energy use grew annually by 2 per cent and the population of the poor countries increased by 60 per cent, then by the end of the century rich country per capita consumption would rise from 5.5 to 14 toe and poor country per capita consumption from 1.1 to 6.8 toe. This would bring the poor countries as a whole almost up to present US per capita consumption levels and shrink the disparity between rich and poor countries from five to one down to two to one.

The task now is to assess our ability to meet these energy consumption levels. We need to know how long we can continue to rely heavily on fossil fuels and to what extent global warming places a serious limit on their use. Then we need to know whether other resources are extensive enough to eventually fill the breach and whether we will have the technology to exploit them. In the case of nuclear power, some time also needs to be spent allaying concerns about radiation hazards, which are proving to be an obstacle to a rational consideration of this technology.

Fossil Fuels

Around 80 per cent of the energy that we use at the moment comes from fossil fuels - oil, coal and gas.[288] Below we examine each of these fuels in turn and look at how long they can be expected to last given different assumptions about their rate of use. We then conclude with an overall assessment of the fossil resource.

Oil

Oil meets around 35 per cent of our primary energy needs.[289] It is critical to the transport sector where it provides around 95 per cent of what is required. The most recent attempt to quantify the resource base for conventional oil was the World Petroleum Assessment 2000 undertaken by the United States Geological Survey (USGS).[290] They provided a figure of 959 billion barrels for proven reserves. This is the amount of oil that could be produced profitably at current prices if there were no further discoveries or advances in extraction technology.

To this figure they add a range of values for additional resources which are classified into expected reserve growth and undiscovered resources. Reserve growth (also called field growth) is the expected growth in reserves over the next quarter century through better definition over time of what is in known fields and the development of better recovery methods, while the figure for undiscovered resources is an estimate, based on geological knowledge, of further oil that will be found by 2030.

Estimates for these additional resources range from 776 billion to almost 2.8 trillion barrels. The USGS estimates that there is a 95 per cent chance that the real quantity is at least the low value, a 5 per cent chance that it is at least the high value and a 50 per cent chance it is at least 1.6 trillion barrels. These are based on subjective probabilities assigned by people with expert knowledge of the different oil deposits.

So in sum, they are saying that we can be very sure of a total resource, including reserves, of 1.74 trillion barrels (0.959 plus 0.776) but that there is a reasonable chance of it being considerably more. The USGS usually cites a mean value for the additional resources of 1.7 trillion. This gives a total resource of almost 2.7 trillion barrels (0.959 plus 1.7) or 16,522 EJ. That would last 90 years with static consumption levels[291] and until 2056 if consumption were to grow annually by 2 per cent.
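
The 2056 figure follows from summing a geometric series: a resource R consumed at an initial annual rate C_0 growing at rate g runs out after n years, where

    \sum_{k=0}^{n-1} C_0 (1+g)^k \;=\; C_0 \frac{(1+g)^n - 1}{g} \;=\; R
    \quad\Longrightarrow\quad
    n \;=\; \frac{\ln(1 + g R / C_0)}{\ln(1+g)}

With R/C_0 at about 90 years of current consumption and g = 0.02, this gives n = ln(2.8)/ln(1.02), or about 52 years, hence the mid-2050s. The same formula reproduces the depletion horizons quoted for coal and gas below.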

A more pessimistic school of thought claims that the ultimate remaining resource is only around one trillion barrels.[292] They argue that the reserve figures of OPEC countries have been exaggerated for political reasons, and they downplay the scope for new technology to squeeze extra oil from an increasingly depleted resource. Consequently, they see depletion and increasing costs of extraction occurring in the next decade or so. At the same time they see problems with alternative resources, including unconventional oil, filling the breach, because of high costs and environmental damage.

Then we have the optimists, who consider the USGS estimates too conservative.[293] They argue that technological advances will increase maximum recovery rates from their current levels of around 50 per cent, improve the ability to exploit resources in difficult geological formations, allow drilling deeper into the earth and in deeper oceans, and improve the detection of new deposits. For them, the introduction of the more challenging non-conventional resources (see below) and alternatives to oil can be a more leisurely affair, allowing plenty of time for ironing out problems and reducing costs.

The existence of the resource is not the only consideration. There is also the matter of ensuring that it is made available by investing in sufficient extraction and processing capacity. The generally accepted view at the moment is that the price levels we have been experiencing in recent years, and those expected in the future, should be sufficient to induce a considerable increase in investment, including by OPEC countries. A 2 per cent annual increase would mean that in 2025 we would need to be producing 50 per cent more than we were in 2005 and in 2050, 2.4 times as much.

So far we have been discussing what is generally referred to as 'conventional' oil. This is more or less oil that flows from oil wells.[294] 'Non-conventional' oil on the other hand involves more costly extraction techniques and comprises bitumen from oil sands, kerogen from oil shale and extra heavy oil. Despite the higher costs, some of these resources are commercial at current and expected oil prices and more will become so as the required technologies mature.

Oil Sands Oil sands, or tar sands, are grains of sand or porous rock mixed with bitumen, a thick, sticky form of oil which at room temperature is much like cold molasses. As a result it does not flow from the ground like conventional oil. Other means are required to extract it, and it then has to be further processed to create a synthetic crude oil. This processing includes the addition of hydrogen, something in which bitumen is deficient.

The vast bulk of the resource is located in Alberta, Canada, where production has grown significantly over recent years as a result of higher oil prices and declining costs. Output is now around one million barrels a day and is expected to increase significantly in the next decade.

Most of present exploitation is confined to oil sand close to the surface which is extracted using open cut methods. Giant shovels load up equally huge trucks which cart the oil sands ore to a crusher. Here the sands are pulverized, and the bitumen separated out by various processes employing water, steam and solvents.

However, most of the reserves and resource as a whole are too deep in the ground for surface mining, so there will have to be an increasing reliance on in situ methods. These reduce the viscosity of the bitumen while it is still in the ground so that it can flow sufficiently to be pumped to the surface. Currently about a third of extraction is done this way[295] and it is bound to have an increasing role if the resource is to be extensively exploited.

A range of in situ methods are being developed.[296] The most commonly used method at the moment relies on steam injection to soften up the bitumen. The injection of solvent is also used and a hybrid combination of solvent and steam is being trialed. Two other methods under development are fireflooding and electrovolatization. Fireflooding heats the bitumen by burning some of it. Injected air feeds a fire front that softens the bitumen up ahead. Electrovolatization heats the oil with an electric current. Also being mooted is the use of microbes that would reduce the viscosity of the bitumen.

According to estimates published by the Alberta Energy and Utilities Board, the initial volume-in-place, based on currently available data, is 1.6 trillion barrels while the ultimate volume in place, a value representing the volume expected to be found by the time all exploratory and development activity has ceased, is 2.5 trillion barrels (15,300 EJ).[297] About 300 billion barrels have been identified as recoverable reserves with current technologies and processes.[298]

The USGS provides an estimated technically recoverable resource of 531 billion barrels for Alberta and 120 billion barrels for the rest of the world, giving a total of 651 billion.[299] This broader concept presumably takes into account the less well explored resources and higher oil prices.

Before these figures are compared with those for conventional oil, one needs to take into account the fact that both the extraction of the bitumen and its conversion to synthetic crude are very energy intensive. For every barrel of crude oil produced, somewhere near the energy equivalent of one third of a barrel is consumed.[300] So the reserve of 300 billion barrels needs to be adjusted down to 200 billion barrels (1,224 EJ).

Shale Oil Oil shale is a sedimentary rock containing kerogen, a waxy organic substance that originated from the remains of algae and other living matter. The kerogen can be converted to oil through a process called retorting, which involves heating the shale in the absence of air to temperatures of 500 degrees C or more. The shale can be mined much like coal and then processed at the surface, or retorting can be performed in situ, much as with oil sands. As with bitumen from oil sands, hydrogen has to be added to create an acceptable crude oil.

In many regions there has been no real effort to delineate the resource through lack of commercial interest. In the US during the 1970s there was a lot of interest when oil prices were expected to remain high. So there is some knowledge of that country's resource. Its resource base is estimated to be about 2 trillion barrels.[301] Globally, the resource base is conservatively estimated to be 2.6 trillion barrels.[302]

However, potential world oil shale reserves are put at a mere 160 billion barrels.[303] With such a large resource it is hard to imagine reserves remaining at such a low level. If shale oil is anything like conventional oil, new deposits will be progressively added as the industry develops. Simply assuming the same ratio of reserves to ultimate resource in place as for Alberta's tar sands gives a reserve of 312 billion barrels. Whatever figure we use, there has to be a similar adjustment for energy consumed in extraction and processing as we made for oil sands.[304] The smaller figure then becomes 107 billion barrels (655 EJ), the larger 208 billion barrels (1,273 EJ).
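
A minimal sketch of that scaling and the energy adjustment, in Python:

    # Scale shale-oil reserves by the Alberta tar-sands ratio, then deduct
    # the roughly one third of the energy consumed in production.
    alberta_ratio = 300 / 2500       # reserves over ultimate volume in place
    shale_resource_bn = 2600         # global resource base, billion barrels
    print(shale_resource_bn * alberta_ratio)     # ~312 billion barrels
    for reserve_bn in (160, 312):
        print(round(reserve_bn * 2 / 3))         # ~107 and ~208 billion barrels net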

Heavy Oil Heavy oil is oil with a high density and viscosity which often requires the injection of super-heated steam into the reservoir to reduce viscosity and increase reservoir pressure. As with bitumen it needs to be upgraded to achieve a standard crude oil. Most of the oil is in Venezuela where the resource is currently assessed at over 1.2 trillion barrels (7343.5 EJ) and reserves at about 270 billion barrels.[305] Assuming one third energy loss in production, that is equivalent to 180 billion barrels (1101 EJ) of conventional oil.

So, for non-conventional oil as a whole, there are between 3,000 and 3,600 EJ of energy which could be recoverable in the near future. This would increase recoverable oil resources by 20 per cent to around 20,000 EJ (3.27 trillion barrels) if we accept the USGS estimate for conventional oil. With a 2 per cent consumption growth rate, that would push oil availability out by about 7 years to 2063.

Converting gas and coal to liquid fuel Another option is to produce synthetic oil from gas and coal. In the past such fuel was only produced when access to far cheaper crude oil was denied, Nazi Germany and apartheid South Africa being the cases in point. Both produced it using their ample supplies of coal. Now, with higher oil prices and improved technology, there is renewed interest. Plans are presently afoot to establish a gas conversion plant in Qatar, whose offshore gas field holds one tenth of the world's proven gas reserves; China expects to have a coal liquefaction plant operating in Inner Mongolia by 2007; and the South African company Sasol has been having exploratory talks in the US[306] and India.[307] To meet liquid fuel requirements to 2100 under our 2 per cent growth assumption would require over 30,000 EJ (4.9 trillion barrels) from coal or gas. However, as with the other non-conventional forms of oil, there are significant energy losses in production which have to be taken into account.

Coal

Coal currently supplies a quarter of primary energy with most of it being used for electricity production where its contribution is 40 per cent.[308] Proven recoverable reserves are about one trillion tonnes[309] which has an energy value of around 21,000 EJ.[310] These are fairly accurately measured resources that would be economical at present prices and accessible with current technologies. At current coal usage rates of around 5 billion tonnes these would last for 200 years.[311] Assuming coal keeps its share and energy consumption increases by 2 per cent per year, present reserves would last 80 years. If tomorrow coal were to take on a bigger role and say grow at 3 per cent per year they would last 65 years.

The potential resource is much larger than these reserves. To begin with, there are the coal deposits that have not been explored or assessed because there is no demand for them, or that are too costly to mine at current market prices. Then there are those that could be exploited with improvements in mining technology. Some deposits are presently too difficult to get at, for example because the seam is too thin. New tunneling methods or in situ gasification may remove these kinds of obstacles. Recovery rates in mines could also be increased. At the moment, many mines use the traditional room-and-pillar method, which leaves about half the coal in place. The long wall method, which recovers about 90 per cent of the coal, could be made more widely applicable, or totally new methods devised. Furthermore, in less developed economies higher recovery rates would be achieved through more sophisticated and capital intensive methods; wider application of surface mining would be an example. The total resource has been estimated to be 6.11 trillion tonnes (179,000 EJ).[312] Assuming 2 per cent annual energy growth, with coal retaining its present share by growing at the same rate, the resource would last almost 160 years. At 3 per cent growth it would last 120 years.
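
Applying the depletion formula from the oil discussion reproduces all four horizons to within rounding; a minimal sketch in Python, with reserves and the total resource expressed as years of current consumption:

    # n = ln(1 + g * R/C0) / ln(1 + g), from the oil discussion above.
    from math import log

    def years_to_depletion(ratio, g):
        return log(1 + g * ratio) / log(1 + g)

    for ratio in (1000 / 5, 6110 / 5):    # reserves; total resource (Gt over Gt/yr)
        for g in (0.02, 0.03):
            print(round(years_to_depletion(ratio, g)))   # 81, 66, 163, 123 years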

Natural Gas

Natural gas is predominantly methane and supplies 21 per cent of our primary energy.[313] Proved conventional reserves are currently around 180 trillion cubic meters (tcm). At a conversion rate of 37 EJ per tcm that has an energy value of around 6500 EJ and is quite close to the energy value of current oil reserves.[314] These would last until 2070 at the 2003 usage rate of 2,618.5 billion cubic meters (bcm). If usage grew at the same rate that we assume for energy as a whole, i.e. 2 per cent, they would last until 2045. If gas increases its share of fossil energy as expected, say averaging an annual growth rate of 2.5 per cent, it would last until 2042.

There are two main estimates of the total resource. The USGS, using the same system of classification as for oil, has a mid range estimate of 386.5 tcm (14300.5 EJ) while the figure from Cedigaz is 490 tcm (18130 EJ).[315] The difference is due to the adoption by USGS of a 30-year forecast period instead of the unlimited forecast span used by Cedigaz. Assuming the more conservative USGS figure, the resource would last until 2150 at current production levels, until 2071 with 2 per cent growth and 2064 with a 2.5 per cent growth.

Non-Conventional Gas Resources

Coal-bed methane Coal-bed methane is methane within coal, created either chemically as heat and pressure are applied to coal in a sedimentary basin, or by bacteria that obtain nutrition from coal and produce methane as a by-product. Because of its large internal surface area, coal can hold six or seven times as much gas as a conventional natural gas reservoir of equal rock volume.[316]

Coal-bed methane production grew dramatically in the US during the 1990s. In 2000 it was reported to be 7.5 per cent of US gas production, although somewhat less of consumption given the significant gas imports from Canada.[317] Recently production has also started to take off in a number of other countries.

To extract the methane, some of the water that permeates the coal bed must be pumped out to release the pressure keeping the gas trapped within the coal. Once the gas starts to flow, it does so at a much slower rate than from conventional wells.

On the basis of fairly limited data the USGS estimates that the world resource in place could be up to 210 tcm (7800 EJ).[318] For the conterminous United States, they estimate that the resource could be more than 20 tcm, with about 2.8 tcm recoverable with current technology. If we assume a similar ratio applies to the world as a whole, the recoverable resource would be 30 tcm or 1,110 EJ.

Tight gas A major non-conventional resource that is beginning to be exploited is tight gas. This is gas that requires the host rock to be fractured before it will flow. Extraction will benefit from a range of on-going improvements in mining methods, including drilling and fracturing techniques. In the US it is already providing about 20 per cent of local production.[319]

Although tight gas reservoirs exist in many regions, only the US resources have been assessed. The potential resource for that country has been estimated at 15.7 tcm (583 EJ) with current technology and 19.82 tcm (733 EJ) with 2025 technology.[320] Germany's Federal Institute for Geosciences and Natural Resources has arrived at a global potential of 2,856 EJ.[321] This is only a small fraction of the resource in place. Some estimates suggest that there is as much as 424.7 tcm (15,854 EJ) within the state of Wyoming alone.[322]

Aquifer gas Natural gas is often found dissolved in aquifers, and the amount dissolved increases substantially with depth. It is variously referred to as aquifer gas, hydro-pressured gas or brine gas and is expected to occur in nearly all sedimentary basins of the world. While no detailed assessment of the resource is available, estimates derived from groundwater volume suggest a resource ranging from 2,400 to 30,000 tcm (90,000 to 1,100,000 EJ) with a mean estimate of 16,200 tcm (600,000 EJ). While highly speculative, these estimates suggest a resource of staggering proportions.[323]

Methane hydrates Gas hydrates are ice-like solids in which water molecules trap gas molecules in a cage-like structure known as a clathrate. They form when water and natural gas combine under conditions of moderately high pressure and low temperature. Because the gas is held in a crystal structure, the gas molecules are far more densely packed than under more normal conditions.

Research so far indicates most of the hydrate takes the form of grains or particles in the pores of sedimentary rocks, in zones which can range from tens of centimeters to tens of meters in thickness. Gas hydrate also occurs as nodules, laminae and veins within sediment, and sometimes as thick pure layers.[324] These deposits are found beneath the ocean floor at water depths greater than about 500 meters and in Arctic permafrost.

The resource in place is believed to be vast, with estimates ranging from 2,830 tcm to 8.5 million tcm.[325] According to the IEA, the median estimate is about 21,000 tcm (777,000 EJ).[326] For the US, the USGS provides a mean estimate of 9,069 tcm.[327] This would suggest that the resource is at least similar in size to the combined conventional resources in place of coal, oil and gas, and possibly orders of magnitude larger.

A lot of the resource would be quite a challenge to recover because it is widely dispersed in hostile Arctic and deep marine environments, and encased in low-permeability sediments. Large scale production in the near future is easier to imagine if sufficient deposits can be found that are exceptions to these characteristics. At this stage the existence and extent of such deposits are unknown.

The US and Canada are presently investigating the resource potential within North American permafrost and testing various technologies. Japan and India also have significant programs. The US Department of Energy (DOE) expects that we will have the resource knowledge and technology to begin commercial production by 2015.

The various methods being examined involve perturbing the hydrate in place so that it decomposes into its constituent natural gas and water. They include heating, injecting various chemicals and decreasing reservoir pressure. Any mining process would have to take into account the risk of major methane releases into the atmosphere. The magnitude and likelihood of such a release are not yet known, nor is the mitigating effect of seawater, which can prevent methane from reaching the atmosphere. The global geologic record appears to indicate that destabilization of the hydrate zone in the past has led to very substantial releases of methane. However, the reasons for this are not yet well understood.[328]

Fossil Fuel as a Whole

As things stand, fossil resources that are already recoverable have an energy value in the order of 60,000 EJ (see Table 3.2). This includes the reserve estimates for coal and non-conventional oil and gas plus the resource estimates for conventional oil and gas.

With total primary energy consumption in 2004 of 470 EJ, and assuming 2 per cent annual energy growth, currently recoverable fossil resources could continue to meet 80 per cent of our energy needs until 2075. To get through to 2100, we would need to increase recoverable resources from 60,000 to 110,000 EJ. This would mean tapping quite a small proportion of the remaining resources in place - 3 per cent of the highly speculative total in Table 3.3, and 6 per cent if we leave out methane hydrates.
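
The arithmetic can be checked by accumulating demand, continuing the Python sketch from earlier in the chapter:

    def cumulative(use0, growth, years):
        """Total energy consumed over a number of years of compound growth."""
        return use0 * ((1 + growth) ** years - 1) / growth

    # Fossil fuels meeting 80 per cent of demand, from a 2004 base of 470 EJ:
    print(0.8 * cumulative(470, 0.02, 71))  # ~58,000 EJ needed by 2075
    print(0.8 * cumulative(470, 0.02, 96))  # ~107,000 EJ needed by 2100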

Table 3.2 Currently Available Fossil Resources

Fuel                  Recoverable in near future (Exajoules)
Coal                        21,000
Conventional oil            16,522
Heavy oil                    1,101
Oil sands                    1,224
Shale oil                    1,273
Conventional gas            14,300
Coal-bed methane             1,110
Tight gas                    2,856
TOTAL                       59,386

Sources: See text.

Table 3.3: Resources in place for coal and non-conventional oil and gas (excluding what is already recoverable)

Fuel                  Other resources in place (Exajoules)
Coal                       158,000
Oil sands                   14,800
Heavy oil                    6,200
Shale oil                   14,700
Coal-bed methane             6,700
Tight gas                  >50,000
Aquifer gas                600,000
Methane hydrates           780,000
TOTAL                    1,630,400

Sources: See text.

 

Increasing our access to the resource by this amount should not place major demands on investment or technological innovation. Remote resources can become less so, recovery rates in coal mining can be improved, and drilling, rock fracture and in situ technologies can all improve considerably.

This suggests that it would be possible to continue our reliance on fossil fuels through this century and into the next. However, given the vast levels of energy that would be consumed in the 22nd century and beyond, the resource is certainly an historically limited one. With a continuing growth rate of 2 per cent, the entire resource would be fully consumed during the first part of the 23rd century, while a growth rate of 1 per cent after 2100 would only stretch the resource to the last quarter of the 23rd century. Only with a zero growth rate after 2100 would the resource last until the middle of the millennium.
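
These dates can be checked with the helpers defined earlier, applied to the roughly 1.69 million EJ total of Tables 3.2 and 3.3 - a deliberately rough calculation, since most of that total is speculative resource in place:

    # Reusing depletion_years and cumulative from the sketches above.
    total = 59386 + 1630400                 # Tables 3.2 and 3.3 combined, in EJ
    use_2100 = 470 * 1.02 ** 96             # ~3,150 EJ/year by 2100 at 2 per cent
    print(2004 + depletion_years(total, 470, 0.02))           # ~2220
    remaining = total - cumulative(470, 0.02, 96)             # left after 2100
    print(2100 + depletion_years(remaining, use_2100, 0.01))  # ~2280
    print(2100 + depletion_years(remaining, use_2100, 0.0))   # ~2595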

CO2 Emissions and Global Warming

The biggest question mark hanging over fossil fuels is not their availability but rather the effect on the climate of the carbon dioxide (CO2) released when we burn them. For a given unit of energy, coal is the worst in this regard, followed by oil and then gas.[329]

Carbon dioxide, methane, water vapor and a number of other gases and aerosols residing in the atmosphere retain some of the heat that would otherwise escape back into space. As a result, most of the earth is well above freezing for most of the time. In fact, average global temperatures are 33°C warmer than they would otherwise be.[330] The concern is that we are increasing temperatures by adding extra CO2. This has a direct greenhouse effect and also an indirect one, because the warming increases the amount of water vapor in the air.

Any atmospheric warming effect would lag emissions by 30 years or more because the oceans absorb much of the heat.[331] The most pronounced effect of any warming would be a rise in sea levels due to the thermal expansion of the oceans and the melting of ice sheets in Greenland and Antarctica. This would occur gradually over centuries and even millennia. The increase in water vapor would lead to increased precipitation overall.

Uncertainties

While there is general agreement that increased CO2 can cause warming, there is considerable disagreement or uncertainty about the extent of the impact. This shows up even between the climate models used by researchers. Their predictions of the effect of doubling CO2 from its pre-industrial level range from 1.5 to 4.5°C.[332] The low end is fairly benign and scarcely noticeable while the high end could be far more serious.

There are three major areas of uncertainty and controversy. These are (1) the extent of warming to date and how much of it can be blamed on human emissions of greenhouse gases, (2) whether any increase in clouds from global warming amplifies or diminishes the greenhouse effect and (3) the extent to which emissions ultimately translate into higher concentration levels in the atmosphere.

The surface temperature records show warming at the rate of 0.17°C per decade since 1976.[333] However, various doubts have been raised about the figures on which this is based.[334] To begin with, some claim that they fail to adequately adjust for the so-called heat island effect, whereby readings from urban weather stations can be influenced by localized warming due to heat-retaining asphalt, brick and concrete replacing grass and trees. Adjustment problems include: the frequent lack of nearby rural stations for comparisons; the use of population rather than construction growth as an index of urbanization; and the failure to take into account the fact that even quite small towns have a significant heat island effect. Other sources of inaccuracy in the record include the uneven placement of weather stations, with most located in the northern hemisphere, at mid-latitudes and on land; the two-thirds decline in the number of stations since 1975;[335] the use of sea surface temperature as a proxy for air temperature over the ocean; and changes in vegetation and structures in the vicinity of weather stations.

It is frequently claimed that warming from the enhanced greenhouse effect has been temporarily dampened by sulfur emissions, which have a cooling effect, and that their impact will diminish with increased pollution controls. However, this does not take into account the fact that reductions in sulfur emissions are accompanied by reductions in soot emissions, which have a warming effect. At the very least, soot would do much to cancel out the sulfur. Indeed, James Hansen, the NASA researcher who helped father the global warming scare in the late 1980s, estimates that soot may be responsible for 25 per cent of observed global warming over the past century.[336] The sulfur theory also receives no comfort from the fact that there has been warming in the northern hemisphere, where the emissions mainly occur, compared with cooling in the southern hemisphere.[337]

Detecting warming is one thing; blaming greenhouse gas emissions is another. A graph that appears prominently in publications of the Intergovernmental Panel on Climate Change (IPCC) is the "hockey stick", which shows temperature levels as fairly flat throughout the last millennium until the beginning of the twentieth century, when they rise significantly. This conveys the idea that temperature levels do not normally vary much and that there must be something abnormal happening in the last century, the prime candidate, of course, being greenhouse gas emissions. The way this graph was devised using tree ring proxy data has recently been subject to what appear to be fatal blows from its critics.[338] It also flies in the face of considerable evidence of climate variability over the last thousand years or so. There appears to have been a warm period from the 8th to the 13th century when temperatures were at least as high as, if not higher than, present levels. It was a time when grapes grew in southern England and the Vikings settled Greenland. This was followed by what has been called the Little Ice Age, which lasted until the middle of the 19th century. This suggests a cyclical movement, with the current warming at least in part being an ongoing emergence from the Little Ice Age, and that any future warming will not be as far from the normal range as the hockey stick suggests.

Another major area of uncertainty is the effect of clouds. High level cirrus clouds amplify any warming while low level cumulus clouds dampen it. How global warming would affect the absolute and relative prevalence of these two types is quite unclear. One theory presently being assessed claims that ocean warming leads to a reduction in overhead cirrus clouds, and this could do much to dampen the greenhouse effect. It has been dubbed the iris effect because it can be compared to the way the iris of the eye opens and closes in response to changing light levels.

Another source of uncertainty is the extent to which CO2 emissions translate into higher concentration levels in the atmosphere. This depends on the workings of the so-called carbon cycle, which is far from well understood. CO2 is held in the atmosphere, the biosphere and the oceans, and there are constant exchanges back and forth between them. The atmosphere contains around 800 gigatonnes of carbon (GtC), the land biosphere (plants, animals and soil) 2,000 GtC, the surface ocean 1,000 GtC and the intermediate and deep oceans 38,000 GtC.[339] The level of CO2 in the air could be just as affected by natural variations in the exchanges between the atmosphere and the biosphere or oceans as by emission levels. Another wildcard here is the extent to which warming or increased CO2 levels lead to either positive or negative CO2 feedbacks. On the one hand, increased CO2 can lead to increased plant growth, which leads to greater uptake of CO2. On the other hand, changed climate conditions may lead to increased decomposition and greater release of CO2.

Alarmism

The picture is made even murkier by the alarmism encouraged by various elements in society. First, there is the environment movement which has a penchant for discovering catastrophic consequences in everything that humans do. Secondly, among scientists in the area there is a tendency towards alarmism because those who shout "big problem" either out of honest belief or opportunism tend to get more than their due share of research funding. And then to top it off, we have the media which is more than happy to be fed sensational stories of looming disaster.

Two prominent global warming advocates have openly admitted that it was OK to exaggerate in order to get the ball rolling. According to Professor Stephen Schneider of Stanford University, drawing attention to global warming required "getting loads of media coverage. So, we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have."[340] James Hansen the NASA scientist who kicked off the greenhouse scare in 1988 at a US Congressional hearing, admitted in 2003: "emphasis on extreme scenarios may have been appropriate at one time, when the public and decision makers were relatively unaware of the global warming issue."[341] Possibly others consider there is still sufficient urgency to justify embellishment, particularly given the limited level of effective government response.

A lot of the alarmism is about serious and nasty climate change supposedly happening in the here and now. We are presented with smoking guns or asphyxiated mine canaries that can only prompt us to think the worst about what is in store for us. We are told that the small amount of warming that has occurred in the last 100 years is melting our ice caps and glaciers and causing more extreme weather events. Even increased plant growth from higher CO2 levels can be given a gloomy spin.

We're Melting!

One of the most popular shock-horror genres involves stories of melting glaciers and ice caps. While climate science predicts a very slow melting over many centuries if warming is sustained, the more zealous climate worriers would have us believe that our cities are just about to be inundated. For example, Greenpeace's spokesperson on global warming claimed not long ago that by 2080 Manhattan and Shanghai could be underwater if we did not rapidly cut back on greenhouse gases.[342]

Antarctica

We have been told that Antarctica, the largest store of ice, is warming and that this is responsible for stressing out penguins and causing ice shelves to crash into the sea. However, warming reports have relied excessively on the Antarctic Peninsula, which is only 1/50th of the entire continent. In this area there has been a warming of about 4.5°F (2.5°C) since 1945 and the melt season has increased by 2 to 3 weeks in just the past 20 years. However, if you look at Antarctica as a whole the story is quite different. Meteorological data indicate an overall net cooling on the continent between 1966 and 2000.[343] This cooling occurred during summer and autumn, the only seasons when a temperature change can have an effect on ice formation or melting.

Back in 1998 a very large iceberg broke away from the Ross Ice Shelf. This, of course, was attributed to global warming by many, despite the fact that the temperature record shows no warming in the region. According to Antarctic researchers from the University of Wisconsin, "The breakage is part of the normal iceberg formation or 'calving' that comes as thick layers of ice gradually slide down from the high Antarctic plateau, and is not related to climate changes or global warming."[344] Furthermore, studies show that the glaciers feeding the Ross Ice Shelf are actually getting bigger. So there is no reason to believe that it will contribute to rising sea levels any time soon.[345]

Notwithstanding popular perceptions, climate scientists expect that any global warming during the 21st century will cause the Antarctic to make a negative contribution to sea level because the warmer, wetter atmosphere will lead to more snow over the continent.[346]

Greenland

The other main potential source of melting land-based ice is Greenland. However, at least for the moment, it does not look like much is happening. According to Thomas et al., there is an overall balance, with some regions thickening and others thinning.[347] Krabill et al. estimate an overall balance above 2,000 meters but an overall thinning at lower elevations sufficient to raise sea level by 0.13 millimeters per year.[348] That is much less than the thickness of a toothpick. So to have any effect at all on sea level this century, the melting will have to get a move on. Even the IPCC expects that Greenland will contribute little if anything to sea levels over the next century. It sees a contribution ranging from -0.02 to 0.09 meters.[349]

If we are going to see Greenland ice sheets crashing any time soon, we will also need quite a lot more signs of warming in that region than we have seen to date. According to Krabill et al., for the period 1900-95, the highest summer temperatures were in the 1930s, while the last 15 years of the period were about half a degree colder than the ninety-six-year mean. According to Hanna and Cappelen, temperatures in the southern coastal region have dropped 1.29°C since 1959, and the nearby sea surface has seen a similar downward temperature trend over the same period.[350] Chylek reports that the Greenland coastal station data have shown a predominantly cooling trend since 1940, and that at the summit of the Greenland ice sheet the summer average temperature has decreased at the rate of 2.2°C per decade since the beginning of measurements in 1987.[351]

Mountain Glaciers

While a comparatively small source of melting ice, mountain glaciers are a veritable treasure trove of inaccurate claims of global warming in action. A few examples should give the general flavor. One of the more renowned cases is Mount Kilimanjaro. Claims that global warming has caused its snow and ice loss, repeated by various prominent people such as Senator John McCain and Sir David King, the UK's chief scientist, have been a media attention grabber. However, the mountain has been losing its ice since the end of the 19th century, and the melting over the last 30 years represents the slowest rate of decline since 1912.[352] And it is hard to blame warming even for that, given that temperatures don't vary much around the annual average of -7.1°C and satellite data since 1979 show a slight cooling over the mountain and surrounding region. The culprit appears to be the fact that it has been drier over the last century. "With less snowfall to replenish the glacier and less cloud cover to shield it from solar radiation, Kilimanjaro lost glacial mass even during periods of global and regional cooling."[353]

In early September 2001, NBC ran a report claiming that the melting of glaciers in Montana's Glacier National Park is due to warming. It is true that there has been a 3.5°F warming if you only go back to 1950. However, if you examine the entire temperature record over the last century or more, there is no upward trend, with current average summer temperatures much as they were at the beginning of the record.[354]

On July 9 2001, the Washington Post published a story claiming that glaciers were receding in Peru because of global warming. It reported the claims of a local glaciologist that this was due to rising temperatures. However, a look at the records indicates that there has been no warming in the region over the last two decades. Furthermore, Peru's glaciers have been receding for at least 150 years.[355]

On March 14 2005 the Reuters news agency cited a World Wildlife Fund press release about retreating glaciers in the Himalayas. It was especially interested in the Gangotri glacier, which it said was retreating at an average rate of 23 meters per year. It is true that the glacier has been retreating at an increasing rate and that summer temperatures have increased since 1990. However, the glacier has been retreating over most of the last century, and the acceleration in the rate began in the mid 1950s.[356] Also, the summer temperature increase since 1990 comes after a dip in the 1970s and 1980s, and temperatures are still lower than they were during the 100 years prior to that.[357] This looks more like a long term retreat which has little or nothing to do with warmer weather.

Arctic

Melting sea ice in the Arctic does not contribute to rising sea levels, in much the same way that melting ice cubes do not raise the water level in a glass. It would still be of some interest, though, if it could be shown to be a smoking gun for global warming.

Greenpeace makes much of a 5.5 per cent decline in Arctic sea ice since 1978.[358] However, to see a human cause is to mislead with statistics. Data from a range of sources indicate no long term trend in Arctic temperatures.[359] What does show up is a temporary dip in the 1970s and a recovery since then to the levels of the 1930s and 1940s. Weather balloon data do show some long term warming in winter; however, this is not going to affect ice cover, given that ice does not melt at that time of year.

Studies in 1999 and 2000 of measurements taken by submarines seemed to suggest that there had been a 42 per cent loss of sea ice thickness over the last 40 years. This is one of the most quoted claims in the IPCC's Third Assessment Report. At the same time, however, other studies attribute some or all of the 42 per cent decrease to the location and timing of the submarine cruises and show either no significant decrease or a more modest 12 to 16 per cent. A panel commissioned by the Arctic Ocean Science Board to review the research accepted the possibility that the observed thinning was the product of sparse data coverage and large inter-annual variability in sea-ice thickness, and hoped that new satellite-based measuring methods presently under development would provide data of the quality required.[360]

On August 19 2000, the New York Times reported on page one that the "North Pole is Melting" and that "the last time scientists can be certain that the Pole was awash in water was more than 50 million years ago." The Times based its story on a call from a couple of scientists on board a Russian icebreaker, one of whom was a professor of oceanography and co-chair of the IPCC's Working Group II ("Adaptation and Impacts of Climate Change").[361] The ship found itself in open Arctic water and the scientists were convinced that this was a sign of global warming. However, the Times finally issued a not very prominently placed retraction after it received numerous eyewitness accounts and photographic evidence of open water at the Pole in previous years. Presidential hopeful Senator John Kerry obviously missed the retraction because he said on May 1 2001: "…[T]his summer the North Pole was water for the first time in recorded history."[362]

More Frequent and Violent Storms

As a matter of routine, whenever there is a violent storm, the media and the alarmists inform us that such events are becoming more frequent and violent and that this is due to global warming. Yet in its 2001 report the IPCC found "no compelling evidence to indicate that the characteristics of tropical and extra-tropical storms have changed" during the 20th century.[363] In the case of Atlantic hurricanes, which do so much damage when they make landfall in the US, there has actually been a decline in both frequency and intensity from 1944 to 1995.[364] As for US tornadoes or twisters, there is no upward trend once you allow for the effects of improved monitoring; and in the case of killer tornadoes, categories 3 to 5 on the Fujita scale, there is a slight declining trend.[365]

Even Good News is Bad News

When it comes to doom and gloom, even good news can be turned into bad news. This is what happened when Bill Clinton's Secretary of Agriculture, during the 2000 election campaign, hyped a report about how increased CO2 was leading to more ragweed pollen and this would cause more hay fever.[366] This was a very odd perspective given that any encouragement to ragweed from increased CO2 also applies to plants generally. It would have made more sense to announce that increased CO2 was leading to more food and forests.

What about Eco-Catastrophes?

If the impacts of a bit of global warming were extremely severe, there would then be a strong case for immediate and drastic emission reductions. This is where prophecies of eco-catastrophes come in. The best known of these are (1) the melting of the Arctic permafrost subsoil and (2) the closing down of the Gulf Stream.

Much of the permafrost - permanently frozen subsoil - of the Arctic regions of North America, Europe and Siberia has a surface covering of peat, which holds an estimated 14 per cent of the world's carbon. Peat consists largely of organic residues which have not decomposed because of the high moisture environment. Rising temperatures could thaw the subsoil and lead to a lowering of the water table. This would dry out the peat, which would begin to decompose and release CO2 into the atmosphere.

As discussed above, there has been no warming to date in the Arctic region as a whole, and this gives weight to the lower warming projections. Also, studies of permafrost in Barrow, Alaska,[367] and northern Quebec showed no signs of thawing.[368]

A number of studies have found evidence that thawing of permafrost can actually be associated with increased carbon sequestration by peat lands. A warmer climate would lead to more vegetation, as would the fertilizing effect of higher CO2 levels. Research also indicates that plants growing in a more CO2 rich environment decompose less readily.[369]

A deep enough thaw of permafrost could destabilize underlying methane hydrates, leading to the release of methane. This could have a major impact if it occurred on a large enough scale. However, to have a long term effect it would need to be sustained, because methane only lasts about 10 years in the atmosphere.

It has been claimed that global warming could close down the Atlantic Gulf Stream, which pulls warm water from the tropics to the higher latitudes and is believed to provide western Europe with a far milder climate than that of similar latitudes in North America. A shutdown would require the 30 million people living in the Scandinavian and Baltic countries to adapt like the Eskimos or move south. For most of the continent it would presumably mean having a climate much like that of Ontario and Minnesota, which arguably is not an eco-catastrophe.

According to the scenario, the warmer currents would cease to travel to European waters because melting ice and increasing rainfall in the North Atlantic would switch off the process of thermohaline circulation. This process relies on evaporation and iceberg formation making surface water saltier so that it sinks to the depths.

However, according to Carl Wunsch, an authority on ocean currents:

European readers should be reassured that the Gulf Stream's existence is a consequence of the large-scale wind system over the North Atlantic Ocean, and of the nature of fluid motion on a rotating planet. The only way to produce an ocean circulation without a Gulf Stream is either to turn off the wind system, or to stop the Earth's rotation, or both.

. . . The occurrence of a climate state without the Gulf Stream any time soon - within tens of millions of years - has a probability of little more than zero.[370]

Some scientists have also raised doubts about whether the Gulf Stream actually has a pivotal role in Europe's weather.[371] Their research indicates a greater role for the circulation of the air than of the ocean. Firstly, because the prevailing winds across the Atlantic blow from west to east, western Europe benefits from the fact that the ocean gradually releases in winter the heat it has stored in summer. Added to this is the effect of the Rocky Mountains, which influence the flow of winds within the atmosphere. These tend to bring cold winds from the north into eastern North America and warm winds from the south into western Europe.

All sorts of disasters are possible in future ages, human induced or otherwise. The longer the time frame the more likely they are. In fact, one seems to be a dead certainty, namely, the next Ice Age. However, the further humans travel into the future, the greater will be their ability to understand, adapt to and control natural forces.

Business as Usual for Now

With no signs of eco-catastrophe, there seems to be no strong reason to stray far from a "business as usual" approach, at least for the next couple of decades. That of course does not rule out a strong research and development program for emission free technologies plus some assistance to get them operating on a large scale. This will give us a wider range of options further down the track.

It also leaves open the prospect of keeping within a doubling of greenhouse gas concentrations from pre-industrial levels, i.e., 560 parts per million (ppm) measured in CO2 equivalent. Here is one scenario just to illustrate the point. With greenhouse gas concentrations currently at 430 ppm CO2 equivalent and energy's share of emissions at 57 per cent,[372] let us allot energy another 75 ppm of the remaining 130 ppm prior to all emissions falling to one GtC per year. (This is the level that the various carbon sinks can absorb.) We assume here that other sources of greenhouse gases are reined in to the same degree.[373] With annual energy emissions currently contributing 6.53 GtC[374] and increasing by 2 per cent per year until 2025, that will add another 89 GtC (42 ppm) if we make the usual assumption that 50 per cent is retained in the atmosphere. However, by introducing a 5.6 per cent annual reduction after 2025 we would get emissions down to well under one GtC by the 2060s, with the additional contribution to the atmosphere from now until then being 161 GtC (75 ppm).
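
For readers who want to trace the arithmetic, here is the same scenario as a short loop. The 50 per cent airborne fraction is the assumption stated above; the conversion of about 2.13 GtC per ppm of CO2 is the standard one. The result lands close to, though not exactly on, the figures in the text:

    # Energy emissions: 6.53 GtC in 2005, +2%/yr to 2025, then -5.6%/yr.
    emissions, airborne = 6.53, 0.0
    for year in range(2005, 2101):
        airborne += 0.5 * emissions       # half of each year's carbon stays airborne
        emissions *= 1.02 if year < 2025 else 1 - 0.056
        if emissions < 1:                 # ~1 GtC/yr is what the sinks can absorb
            break
    print(year, airborne, airborne / 2.13)  # ~2064, ~157 GtC, ~74 ppm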

Adapting to any Climate Change

Cutting back on greenhouse emissions is not the only approach to possible climate change. Adapting to it is another. And the best way to help people in the future adapt is to increase the rate of economic progress. The higher their level of economic development and know-how, the better they can meet any challenges. Air conditioning and better insulation will protect people from any increase in heat waves. (It should be kept in mind, though, that much of any warming would not take the form of higher afternoon summer temperatures; a lot of it would be warmer winter nights.) Better housing and emergency infrastructure such as warning systems, shelter and rescue services can reduce death and misery if global warming leads to more extreme weather conditions such as floods. Better public health, vector eradication, treatments and cures can counter any climate induced movement of diseases such as malaria.

Adapting to any sea level rise should not be a great strain given the long time frame involved. In the 21st century any increase would be confined to thermal expansion and would perhaps be double or triple the 18 cm (7 inch) rise which we had no trouble dealing with during the 20th century and were generally unaware of. The melting of the ice sheets would only have an effect more than a century down the track and would occur over many centuries. People would have plenty of time to either build retaining works or move to higher ground.

Global warming is not expected to have a net negative effect on agricultural production. While some areas could be adversely affected by increased flooding, soil evaporation or drought, other areas would enjoy better farming conditions such as longer growing seasons and more rainfall. And all regions could benefit from the fertilizing effect of CO2, which increases plant growth and tolerance of poor conditions. Any area adversely affected could respond either by increasing food imports or by introducing plants and livestock with higher stress tolerance.

If one is concerned about inequitable effects of global warming because of the greater vulnerability of the poor, a focus on economic development including increased aid has the benefit of assisting people right now and not just when climate change hits 50 years or more from now. Even a relatively small proportion of what it would cost developed countries to seriously reduce CO2 emissions over the next few decades would make a huge difference in developing countries, assuming it was accompanied by the kinds of political changes that are required for economic development. In other words it could not be the old routine whereby the World Bank finances kleptocrats and demagogues. (See the discussion in the final chapter.)

A similar approach can be applied to the threatened impacts of climate change on the natural environment. In particular, there is a concern that natural systems, and in particular flora and fauna, will find themselves with new climates to which they are not suited, while lack of time or physical barriers, including human habitation and agriculture, prevent a shift to a more suitable location. Over coming decades we can slow down and halt human encroachment by expanding and improving national parks and other forms of nature protection, and by increasing the efficiency of agriculture so that more food can be produced from a given area of land. And our descendants, with their higher level of economic development and scientific knowledge, will have increasing means to protect biodiversity.

Another low pain way of helping people in the future is to fund research and development into technologies with long lead times which will increase their ability to reduce emissions. This is something we are already doing with government supported research and development into solar, wind, geothermal and nuclear energy and also fossil fuel technologies that allow for the capture and storage of CO2. These efforts should include building up a few decades of experience of full scale operation.

CO2 Capture and Storage

Cutting CO2 emissions does not have to mean abandoning fossil fuels as a major resource if economic means can be found to capture and then store the CO2 they produce. This is a burgeoning area of research and is generally referred to as sequestration. For point sources such as power stations, capture would be part of the production process, while for diffuse sources such as motor vehicles and heating it would require extracting the gas from the atmosphere. Emissions from these two kinds of sources are roughly equal.[375]

Three different technologies are being considered for emission capture from power plants and other point sources. The one that is nearest to being operational would extract CO2 from exhaust gases. These would be bubbled through a liquid solvent which would dissolve the CO2. The solvent would then have to be heated to release it for storage. This process consumes a lot of energy and could increase electricity costs by as much as 70 per cent,[376] although research in progress promises to reduce exhaust capture costs considerably.

Another approach called oxyfuel technology would separate oxygen from air and then burn it with hydrocarbons to produce an exhaust with a very high concentration of CO2, and so eliminate the need for separation. The main challenge is to develop a less expensive way of producing oxygen. A number of new techniques are being tested at pilot scale.[377]

The third approach is called pre-combustion decarbonization, where natural gas or coal is converted to hydrogen and CO2. The CO2 is compressed for storage and the hydrogen is available as a fuel which emits only nitrogen and water. This has the advantage of providing emission free fuel for motor vehicles as well as for electricity generation. FutureGen, a partnership between the US, a number of other governments and private industry, aims to develop this technology over the next 15 years at a cost of $1 billion. The plan is to have a 275 MWe coal-fuelled prototype plant operating within a decade which captures at least 90 per cent of its CO2 and increases electricity costs by only 10 per cent or 0.2 cents per kilowatt-hour. The program then aims to develop further improvements leading to technologies that achieve near zero CO2 emissions and do not add to energy costs.

If we want to capture CO2 emissions regardless of their source, including diffuse ones such as motor vehicles and home heating, we need to extract it from the atmosphere. According to the proponents of this idea, the air would need to be exposed to a sorbent, an agent that absorbs CO2. This would require an array of units distributed across the landscape much like wind turbines. The big difference is that the land area requirements for a particular amount of energy produced from fossil fuel would be two orders of magnitude less than that required to produce the same energy with wind turbines.[378] In these units air would be blown by fans onto a flowing sorbent. The CO2 would then be removed and stored, and the regenerated sorbent recycled.

At the moment the only practical sorbent is calcium hydroxide, which combines with CO2 in the air to produce calcium carbonate (CaCO3). This is then heated in a closed vessel to produce calcium oxide and CO2. The calcium oxide is then combined with water to regenerate the sorbent. Proponents estimate that the process would cost between 20 and 25 US cents per gallon of gasoline.[379] However, they are hopeful that another absorbing agent can be found which requires far less energy at the processing stage. While most of the CO2 from a point source would be more cheaply removed in-house rather than later from the air because of the higher concentrations, this may not be the case for the final 10 per cent or so. In other words, the least cost approach may be to stop short of zero emissions at the plant and rely on air capture to pick up what was missed. Air capture could also have the advantage of being more readily placed near CO2 storage facilities.
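
The loop just described is standard lime chemistry, and almost all of the energy penalty sits in the middle step, which has to run at roughly 900 degrees C:

    Ca(OH)2 + CO2 -> CaCO3 + H2O    (air contact: the sorbent absorbs CO2)
    CaCO3 + heat  -> CaO + CO2      (calcination: concentrated CO2 goes to storage)
    CaO + H2O     -> Ca(OH)2        (slaking: the sorbent is regenerated)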

Enhancing nature's very own air capture is another approach. Plants, microbes and soil normally absorb a considerable amount of CO2 from the air. We can plant trees and other vegetation and encourage life that is particularly efficient at absorbing CO2. For example, it has been suggested that aquatic microalgae, which have carbon fixing rates an order of magnitude higher than those of land-based plants, could be installed in photobioreactors arranged much like solar panels. They would produce a stable carbon compound ready for storage.[380] Biotechnology might help things along by breeding plants that grow more quickly or are in other ways more efficient at carbon fixing. And in the case of soil, CO2 capture should be improved by the move to conservation tillage, which leaves the soil less disturbed.

Once CO2 is captured it can be stored in the ocean or underground, or converted to solid and harmless rock. The underground option is receiving most attention at the moment and includes storage in deep saline aquifers, depleted oil and natural gas fields and deep coal beds. Of these, the aquifers have the largest capacity, with estimates ranging from 2,700 GtC to 13,000 GtC.[381] These saline formations are layers of porous rock that are saturated with brine. They include not just the aquifers with structural traps, which have a relatively small capacity - perhaps 10 years worth of emissions - but also the more extensive open ones. These are thought to be suitable as long as the CO2 is injected far enough from reservoir boundaries that it dissolves in the water, or precipitates out as a mineral through reactions with the surrounding rock, before migrating more than a few kilometers towards the basin boundaries.[382] Aquifer storage is proving successful in the North Sea, where CO2 stripped from natural gas produced at the Sleipner gas field by the Norwegian oil company Statoil has been injected for the last 5 years into the Utsira Formation, some 1,000 meters under the sea bed.

CO2 can also be injected into depleted wells to push out otherwise inaccessible oil and gas, or into unmineable coal seams to dislodge coal-bed methane. Geologists estimate that as much as 500 GtC can be locked away in such sites.[383] This is about two-thirds of all the carbon in the atmosphere today.

CO2 can be disposed of by converting it into solid rocks called carbonates through a reaction with certain kinds of minerals. This would be inherently more stable than storage as a gas or liquid, and more compact. Recent research indicates that a process which naturally takes place over extremely slow geological time scales can be accomplished within minutes under certain temperature and pressure conditions.[384] The raw materials required for this process exist in vast quantities across the globe. Estimated mining and mineral preparation costs are currently not prohibitive, but work still needs to be done to reduce the energy required for the process.[385]

Storage in the ocean depths is another possibility. The amount of CO2 that would cause a doubling of the atmospheric concentration would change the ocean concentration by less than 2 per cent (recall that the oceans hold roughly 39,000 GtC against the atmosphere's 800). However, as a general rule about 20 per cent would eventually return to the atmosphere after a period of somewhere between 300 and 1,000 years,[386] and the resulting reduction in pH levels may have environmental consequences. These effects would be obviated if the CO2 could be kept in an isolated form, e.g., if injected in such a way that it turns into a carbonate or CO2 clathrates. Indeed, if methane hydrates from the ocean floor are ever exploited, it may prove possible to store captured CO2 as clathrates in the same deposits from which the methane was extracted, given that they are stable under similar pressure and temperature conditions.[387]

Solar Energy in its Various Forms

Solar energy can be either harnessed directly as it strikes the planet or after it has taken on an earthly garb. The latter forms include wind, waves, falling water (i.e., hydropower) and plant biomass. Wind is the horizontal movement of air caused by the sun's uneven heating of the earth's surface, while waves are created by wind blowing over sea water. Hydroelectric power has its origin in the evaporation of water by the sun and its subsequent precipitation on land at high altitudes. Plants convert solar energy through photosynthesis into chemical energy which can then be burnt for heat.

Direct Solar

The heat from the sun can be used to warm water, to heat buildings and to drive electric generators while its rays can be captured by photovoltaic cells and converted into electricity. Other possible future methods of exploitation are the channeling of light into buildings through optic fibers and the use of solar energy to split water to create hydrogen which can then be used as a fuel.

At the moment the most commonly employed means of harnessing solar energy is in domestic water or space heaters attached to the roof. These are large glass covered boxes which absorb heat and then transfer it to water or some other fluid through a system of pipes. According to the World Energy Council (WEC), only 2 m2 of collector area is required to provide 80 per cent of the water heating demands for a family in a Mediterranean climate.[388]

Space heating can also be provided by 'solar architecture' where buildings are designed to capture the sun's heat. Large windows are positioned to maximize intake of radiant heat during the cooler months. Part of the heat warms the inside air while the rest is absorbed by specially designed inner walls which slowly release the heat once the radiant heat begins to decline late in the day. The escape of heat from the building is retarded by well sealed and insulated walls and windows which freely allow solar radiation in but are slow to conduct the heat out again.

At the moment a very small share of our electricity is provided by photovoltaic (PV) technology, which is widely used in niche markets such as powering unmanned equipment or isolated homesteads and communities away from the power grid. In the case of households, PV panels are either attached to the roof or arrayed on nearby land. The panels comprise flat crystal cells made of semiconductor material, usually silicon, which absorbs light and releases electrons which flow through an external circuit to generate electricity.[389] With the current state of the technology, about 10 to 15 per cent of the solar energy that strikes the cell's surface is converted into electricity.

A number of thermal systems of electricity generation have also been developed, although at this stage these are confined to a handful of trial projects. Some systems focus solar energy at a particular point using reflectors and use the heated fluid to drive an electric generator. The reflectors track the sun during the course of the day to maximize the sunlight hitting them. These concentrating technologies can be classified into three types. Reflective parabolic troughs focus sunlight on a fluid filled receiver tube running along their front. Reflective parabolic dishes focus sunlight on a receiver at the focal point of the dish, where the heated fluid drives a small generator. So-called power towers use a large number of sun-tracking flat mirrors to focus sunlight onto a central receiver on top of a tower. Another very different system is the solar chimney. Instead of concentrating sunlight it relies on a greenhouse effect. A small facility has been trialed in Spain and a large scale commercial operation is planned for a site near Mildura in Australia.[390] In the latter case, the chimney will be surrounded by a 7 kilometer diameter 'greenhouse' which creates a hot draft that is sucked up the chimney, where it drives electricity generators.

Solar lighting is a technology which is near the operational stage. Dishes on the roof, guided by a tracking system, collect the sunlight and 'feed' it along fiber optic cables to supplement electric lighting. Sensors keep the room at a steady lighting level by adjusting the electric lights based on the sunlight available.[391]

The development of methods that use solar energy to split water and produce hydrogen is presently a focus of research. Three technologies are being looked at. The photoelectrochemical solar cell is the closest to being ready for use, although it still requires a lot of work. The other two appear to be more long term. The photobiological process would use specialized microorganisms which, with the aid of sunlight and water, produce hydrogen as a by-product of their metabolism. Because existing organisms such as green algae and cyanobacteria do this too slowly, some of the current research effort involves developing a genetically modified organism that will be far more efficient at the task. Another approach, a kind of artificial photosynthesis, would adapt the process employed by plants, which transform solar energy into chemical energy with the aid of carbon dioxide and water. The hope is that at least one of these methods will be less costly and more energy efficient than using electricity from solar energy to produce hydrogen by electrolysis, the process in which an electric current passing through water splits it into hydrogen and oxygen.

Nature of the Resource

The sun is a very dilute resource, so harnessing it will require a lot of capital equipment spread over large areas. It is also highly variable from hour to hour, day to day, week to week, season to season and location to location. When solar is not relied on too heavily, other resources can fill the breach when there is insufficient insolation. However, if solar is to be a major source of energy we need to get around this variability by transmitting electricity over the vast distances from regions with a solar energy surplus to ones with a deficit, and by converting solar energy to hydrogen which can be stored and transported. Achieving these objectives will require a lot more research and development.

Extent of the Resource

Below is an assessment of the extent of the various sources of solar energy and the extent to which they could meet our energy needs.

Deserts

The deserts of the world are often mentioned as an ideal place to install solar panels. It is land that we don't need for other purposes and nature would usually not feel greatly put upon. Deserts take up about 20 million square kilometers which is about 15 per cent of the ice free land area. The Sahara makes up about 45 per cent of this. Other major desert areas are to be found in Australia, the Middle East, Mexico, south west US, Chile and south west Africa.

To give a world with 10 billion people the average per capita electricity consumption presently found in rich countries would require a total output of around 100,000 TWh per annum, a sixfold increase over the 2004 level.[392] With average annual insolation in these desert regions at around 2,300 kWh/m2, current technology achieving an energy efficiency of 10 per cent, and panels taking up twice their surface area to prevent them casting shadows on each other, we would need 4.3 per cent of this desert area. This does not take into account the energy losses from long distance transmission and from converting electricity into a portable resource such as hydrogen, so a somewhat larger area would actually be required. If we were to produce all the 2,300 EJ (639,000 TWh) of primary energy (and not just the electricity) required by 10 billion people consuming at the current rich-country average, we would take up 28 per cent of the desert area. There would be transmission and conversion losses here too. However, these may not be far greater than those incurred in the conversion of coal to electricity and oil to refined fuel.
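
The 4.3 and 28 per cent figures follow directly from these assumptions, as a few lines of Python confirm:

    # Desert PV: 2,300 kWh/m2/yr insolation, 10% efficiency, and panels
    # spread over twice their own area so as not to shade each other.
    output_per_m2 = 2300 * 0.10 * 0.5        # 115 kWh per m2 of land per year
    desert_km2 = 20e6                        # ~20 million km2 of desert
    electricity = 100000 * 1e9 / output_per_m2 / 1e6   # km2 for 100,000 TWh
    print(electricity / desert_km2)          # ~4.3 per cent of the deserts
    primary = 639000 * 1e9 / output_per_m2 / 1e6       # km2 for all primary energy
    print(primary / desert_km2)              # ~28 per cent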

Attached to Roofs and Other Urban Surfaces

Another place for solar panels which avoids conflict with other uses is the space on roofs and various other surfaces in close proximity to electricity consumers such as walls and unused land beside freeways and train tracks. Being close by, there is not the cost or energy loss from long distance transmission.

The extent of this resource will vary from one region to another depending on the level of insolation.[393] Northern and central Europe, Russia and China fare the worst, with insolation ranging from 700 to 1,400 kWh/m2. The best placed are the West and mid-West of the US, Australia, the west coast of South America, most of Africa, the Middle East and South Asia. Here insolation is 1,900 kWh/m2 or above.

Even in a country such as Holland, with low insolation and fairly high population density, PV cells on residential roofs could provide a significant proportion of household electricity needs. According to a study commissioned by Greenpeace, there are 20 m2 of residential roof space per person in that country, and with an annual insolation level of around 800 kWh/m2 producing 80 kWh of electricity per m2, this would supply 1,600 kWh per person.[394] That would provide 23 per cent of current consumption, given the country's population of 16.27 million and total consumption of 112.67 TWh.[395] Coincidentally, with 23 per cent of electricity in Holland going to residential use,[396] this would be equal to current household consumption. If the share of electricity going to residential consumption were the same as the rich country average (31 per cent),[397] and the level of consumption were the same as the rich country average of 9,710 kWh per head,[398] the proportions would be quite a lot less - 16 and 53 per cent rather than 23 and 100 per cent.
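
The Dutch numbers, and the 1,505 kWh/m2 break-even insolation quoted in the following paragraph, can be verified in a few lines:

    # Rooftop PV in Holland: 20 m2 per person at 10% efficiency, 800 kWh/m2/yr.
    per_person = 20 * 0.10 * 800                # 1,600 kWh per person per year
    print(per_person * 16.27e6 / 1e9 / 112.67)  # ~23% of national consumption
    print(per_person / 9710)                    # ~16% at rich-country consumption
    print(per_person / (9710 * 0.31))           # ~53% of rich-country household use
    print(9710 * 0.31 / (20 * 0.10))            # ~1,505 kWh/m2 break-even insolation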

For areas with the Dutch level of housing density, annual production from residential roofs would suffice for average rich country domestic consumption at an insolation level of 1,505 kWh/m2.[399] That covers the sunnier regions of the world.

Other urban surface areas can also be employed. The Greenpeace study claims that in Holland non-residential roofs cover 96 km2 or 30 per cent of the area of residential ones. To that we can add building walls and land adjacent to airports and running alongside freeways, highways and train tracks. If we conservatively assume that these other surface areas as a whole are 50 per cent of residential roofs, this gives a total area of 30 square meters per person. This would provide 35 per cent of Holland's present total consumption and 25 per cent if consumed at the average rich country level.

It is assumed in these calculations that any mismatch between the supply of sun and the demand for domestic electricity can be evened out by net additions or subtractions from a much larger electricity grid based on other sources of energy and that there is no need to take into account energy losses which would occur if battery storage was used to provide power at night or during cloudy periods. Also not accounted for is the option of devoting some roof space to thermal units for heating and cooling.

Other Areas

When we move beyond deserts and areas of dense human habitation, solar facilities are more likely to conflict with other uses of land, in particular the natural environment and agriculture. Nevertheless, there are still considerable areas other than deserts which are of limited value to farming and to nature. These grasslands, savannas and semi-deserts are on a similar scale to the desert regions and include the Eurasian steppes, the US prairies, the Pampas of South America (northern Argentina and Uruguay), the vast sheep and cattle runs and semi-deserts of Australia, and the arid areas south of the Sahara Desert.

Wind

Unlike direct solar, wind energy is already being harnessed on a large scale. In 2005, global wind generating capacity was over 51,000 MW, generating around 100 TWh per annum.[400] This is more than a tripling over the last five years. However, it is still well under 1 per cent of total power generation. The only country that has so far placed significant reliance on wind is Denmark, with Germany being the biggest producer of wind power in absolute terms, while most of the remaining capacity is found in the US, Spain and India.

Wind turbines are typically equipped with three-bladed rotors anywhere up to 100 meters in diameter, which are turned in the direction of the strongest wind with the aid of an onboard computer and drive a generator with a rated capacity of between 600 kWe and 2 MWe. They are mounted on towers generally between 40 and 100 meters high.[401]

There have been two studies which estimate the on-land resource.[402] Both give a total resource base of around 500,000 TWh/year from regions with wind speeds of more than 5 meters per second. The study by the World Energy Council (WEC) assumes that this resource is found in 27 per cent of the ice-free land area (i.e., 36.2 million square kilometers). The Grubb and Meyer study estimated that 10 per cent of this area was available after allowing for accessibility and competing uses and could harvest just over 50,000 TWh, while the WEC study gave a more conservative estimate of 4 per cent and just under 20,000 TWh.

The total resource potential from these areas would increase with improvements in the technology that allow more effective capture of the available energy. Furthermore, innovation would increase our ability to exploit areas with lower wind speeds more effectively.

As well as wind on land, there is wind off-shore, which at least in theory is a much larger resource. About three quarters of the earth's surface is covered by oceans, and their wind speeds are higher on average. For the moment, however, the exploitable resource is confined to the coastline and relatively shallow water. As the distance from land increases, the cost of transmitting the power back to shore rises sharply, and the deeper the water the higher the construction costs. However, building on the know-how from off-shore oil and gas rigs, wind farms will in the future be able to venture into increasingly deep water, and distance from markets will become less of a concern as methods of long distance transmission and hydrogen conversion improve.

A study carried out in 1993-95 estimated an offshore wind potential in the European Union of 3,028 TWh per year.[403] This assumed that the wind resource can be used out to a water depth of 40 meters and up to 30 kilometers from land. It would not require all that many similar offshore areas around the world to match the on-land resource. With an ability to provide over 40,000 TWh every year, wind energy could meet a significant proportion of the electricity requirements of 9 or 10 billion people living in affluence, and be an important, though minor, player in meeting total energy needs.

If the entire land resource identified by the WEC study were exploited, wind turbines would dot a combined area of 1.45 million km2, slightly more than the area of Germany, France, Italy and the UK combined. The actual "footprint" occupied by turbines, permanent access roads and other equipment would be only 5 per cent or less of this area, bringing the figure down to around 72,000 km2. This is small compared with the 15 million km2 we currently make available for crops.
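
The land take can be verified from the same figures:

    # Land take implied by the WEC estimate.
    turbine_area_km2 = 0.04 * 36.2e6          # 4% of the 36.2 million km2 windy area
    footprint_km2 = 0.05 * turbine_area_km2   # the actual footprint is 5% or less
    print(turbine_area_km2)                   # ~1.45 million km2
    print(footprint_km2)                      # ~72,000 km2
    print(footprint_km2 / 15e6)               # under 0.5% of the area used for crops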

Wind turbines generally do not compete with other uses when set up on barren land or pasture. There may be a small drop in cattle production because of reduced grass area, and a loss of amenity value if wind turbines ruin a popular piece of scenery. In the case of forest land or developed areas, there would be considerable conflict. Turbines are inefficient when located near trees and buildings because of the wind turbulence created, and clearing trees or buildings to make way for wind farms would not generally be considered the best use of land! In the case of off-shore turbines, the main concerns are sea lanes, restricted military areas, recreational uses and spoiling the view from the beach. Generally speaking, the closer you are to markets for electricity the more likely that wind power will conflict with these other uses. Consequently, wind energy will have the same transmission and storage problems as solar power, if not worse.

Waves

Wave energy is a potential resource with a range of technologies at the trial stage. The energy is derived from the winds as they blow across the ocean surface. The extent to which the wind transfers energy depends on its speed and the distance over which it interacts with the water (the fetch). Once created, waves can travel thousands of kilometers with little energy loss.

Because waves continue to collect energy from the wind over a considerable distance with little dissipation, wave energy is significantly more concentrated than solar or wind energy. Waves tend to carry tens of kW of power per meter of crest, compared with hundreds of watts per square meter facing a solar panel or wind turbine - two orders of magnitude greater.[404]

Areas with the greatest wave strength are the coasts of large ocean basins, including the western US, Europe and Australia, and the southern oceans above Antarctica. The power in the wave fronts in these areas generally varies between 30 and 70 kW/m, with some areas averaging around 100 kW/m.[405] For these more favorable areas the World Energy Council estimates the resource to be in excess of 2 TW,[406] while a preliminary evaluation for a review of wave energy published in 1999 indicated a resource of more than 1 TW.[407] The same review estimated that this resource, using the latest designs of wave energy devices, could produce over 2,000 TWh of electricity annually. At this level of output, it could only be a modest contributor to electricity production - around 12 per cent of current output and 2 per cent of the 100,000 TWh required to give 10 billion people the level currently consumed in rich countries.
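
The two percentages can be checked as follows. The current world electricity output of around 17,500 TWh is not stated above; it is inferred from the hydro figures given later in this chapter (2,808 TWh being a 16 per cent share).

    # Wave energy's potential share of electricity supply.
    wave_twh = 2000.0          # annual output from the 1999 review
    current_twh = 17500.0      # inferred current world output (see lead-in)
    future_twh = 100000.0      # 10 billion people at rich country levels
    print(wave_twh / current_twh)   # ~0.11, i.e. around 12 per cent
    print(wave_twh / future_twh)    # 0.02, i.e. 2 per cent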

In recent years there have been important developments in wave technology, particularly with respect to devices that can be used further off-shore in deeper waters, before the waves are dissipated by hitting the rising seabed and the contrary winds from the landmass. As well as producing electricity, these wave energy converters can also be used to desalinate seawater through reverse osmosis, a technology discussed in the previous chapter under desalination.

As with off-shore wind turbines, wave technologies are benefiting from many of the advances in technology and know-how achieved by the offshore oil and gas industry. This is particularly the case with respect to floating mooring systems and subsea flexible power cables and connectors, pumps and motors.[408] Modern materials and computer technology are also assisting in the development of designs that can react to changes in sea conditions and resist the stresses of the marine environment.[409] Advances in remote monitoring should also help.

A range of devices has been developed over the years. However, they are far less mature than wind or solar technology; most have not gone beyond less-than-full-scale trials. The technology has to contend with a very corrosive environment and occasional extreme wave conditions that impose huge strains on the equipment.

Most of the devices currently being developed are small units which would be deployed in large arrays. One of the more promising is the Pelamis. Named after a sea-snake, although whale-like in size, this device is a series of cylindrical segments connected by hinged joints. As waves run down the length of the device and actuate the joints, hydraulic cylinders incorporated in the joints pump oil to drive hydraulic motors, which in turn drive electrical generators. Power from all the joints is fed down a single umbilical cable to a junction on the seabed, and a number of devices can be connected together and linked to shore through a single seabed cable. A full scale prototype Pelamis has recently undergone extensive sea trials in the North Sea, and an order has been placed for three of these units, to be located off the north coast of Portugal. The 8 million euro project will have a capacity of 2.25 MW and is expected to meet the average electricity demand of more than 1,500 households. Subject to the satisfactory performance of the first stage, an order for a further 30 machines with a capacity of 20 MW is anticipated.[410]

As well as being more concentrated than wind and solar, wave energy has the advantage of being less variable on an hourly or daily basis, and any variability can be forecast over the time-scales required in the electricity marketplace. As with wind, waves are generally a lot stronger in the winter months: monthly average energy levels in winter can be three to five times greater than in summer. Where peak demand is dominated by winter heating and lighting loads (northern Europe, for example), wave energy has a good seasonal load match.

Hydroelectric Power

In 2004 hydro produced 2,808 TWh of electricity.[411] This was 16 per cent of the total electricity supply and 2.2 per cent of total primary energy. The full potential of the resource has been estimated at 8,100 TWh per year.[412] This means there is room for significant expansion. However, even at the maximum it would provide only 8 per cent of the electricity required to give 10 billion people present rich country levels of consumption. This means a declining role for hydro in the long term and the creation of a gap that will need to be filled by other resources.
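
The figures are easily confirmed:

    # Hydro's current and maximum future shares of electricity supply.
    print(2808.0 / 0.16)        # ~17,500 TWh: the implied current total supply
    print(8100.0 / 100000.0)    # 0.081: hydro's maximum share of the 100,000 TWh target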

Biomass

Biomass provides around 10 per cent of our energy, with most of it being consumed in poorer countries, often on an unsustainable basis.[413] Types of plant biomass include: perennial crops such as trees, bushes and shrubs; annual crops such as sugarcane, cereal straw and grass; agricultural and forestry residues; and urban waste. Biomass can either be burnt for heating or electricity, or converted into ethanol and used as liquid fuel.

The biomass potential from recoverable and unwanted agricultural and forestry residues has been estimated at around 40 EJ per year, while energy from urban refuse may well be around 6 EJ by 2025.

The crops giving the best annual energy yields are trees and sugar cane. For trees in North America and Europe, the yield is over 200 gigajoules per hectare per year. For trees in the tropics, with genetic improvement and fertilizer use, yields range from 100 to 550 gigajoules, with the top end being achieved where water is plentiful. For sugar cane the range is 400 to 500 gigajoules.

If the average figure is 250 gigajoules per hectare, producing the current total commercial primary energy output of around 470 EJ would require 18.8 million km2. This is larger than the present area of cropland and about half the area of permanent pasture. Even with twice the yield we would still require a dauntingly large area which would compete in many cases with other uses.
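
The land requirement follows directly from the yield assumption:

    # Land needed to grow today's commercial primary energy as biomass.
    energy_gj = 470.0 * 1e9          # 470 EJ expressed in gigajoules
    yield_gj_per_ha = 250.0          # assumed average annual yield per hectare
    hectares = energy_gj / yield_gj_per_ha
    print(hectares / 100.0 / 1e6)    # 100 ha per km2 -> 18.8 million km2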

A more realistic prospect is biomass being produced on a few million square kilometers at most. Some of this could be in rotation or in tandem with crop growing and grazing where it would play a soil management role. The rest would be in some of the 42 million km2 of forests and woodlands where it would have to compete with timber production and conservation objectives. A few decades from now this area could produce 10 to 20 per cent of our energy needs. However, as energy consumption increases as the century proceeds, biomass's share would decline accordingly.

Other Possible Resources

There are two other solar based resources which may become significant in the future, although at this stage the technology is experimental. These are the energy from ocean currents and from heat stored in the ocean.

Surface currents are driven by wind, while deep ocean currents are driven by density and temperature gradients. A number of technologies are being examined, including arrays looking very much like wind turbines except that they are underwater. Unlike wind, an ocean current is fairly constant, and although slow moving, its much higher energy density ensures a larger resource from a given area.

Ocean thermal energy conversion systems capture some of the solar energy which is transferred to the oceans every day. They do this by exploiting the temperature difference between seawater at different depths. Cold water is pumped from the ocean depths to the surface, and energy is extracted from the flow of heat between the cold water and the warm surface water. The technology is suitable for electricity generation, desalination or a combination of both. Deep equatorial waters are the best location because they have the greatest temperature difference between surface and depth.

Summing up on Solar

While the resource is large, its position as a potential major or dominant supplier of energy still depends on technological improvements in a number of areas. PV cells, wind turbines and wave generators will have to continue becoming cheaper and capturing more of the available energy. The energy loss in long distance power transmission will have to decline so that sun, wind and wave some distance from human habitation or activity can still supply electricity. We will have to improve our ability to use energy from solar resources to split water and produce hydrogen, and at the same time get better at transporting, distributing and storing this gas, which can then be used at any place or time to power vehicles or generate electricity.

Given the need to interfere with vast areas of land, it is hard to imagine that the squeaky clean image sun and wind hold when a hundred or so TWh are being produced will remain untarnished when production is in the thousands of TWh.

Nuclear Power without the Phobia

Nuclear power presently generates about 16 per cent of the world's electricity, which constitutes about 6.5 per cent of commercially produced primary energy.[414] All the major developed countries except Italy[415] rely to a significant extent on nuclear power, ranging from 79 per cent of electricity in the case of France to around 20 per cent in the case of Japan, the UK and the US. It is also important in some of the former Soviet bloc countries: Ukraine receives 49 per cent and Russia 16 per cent.[416] India and China also have some nuclear power.

The industry has its origins in the military programs of the USA and USSR in the 1940s and 50s which produced nuclear weapons and reactors to power naval ships and submarines. The technology is based on the fission process, which produces energy by splitting atoms. The fuel for the process is provided by uranium which is "enriched" to increase the proportion of the fissile isotope uranium‑235.[417]

Presently there are 441 nuclear reactors generating electricity in 31 countries.[418] These come in a number of varieties which are mainly distinguished by their system of transferring heat from the reactor to the power generator. All the reactors in the US and about 90 per cent worldwide are so-called light water reactors.[419] Of these about two thirds use pressurized water while the rest use boiling water. Virtually all the remaining reactors are either a Soviet design using graphite or a Canadian one using heavy water.

After taking off in the 1960s and 1970s, the industry sank into a malaise. This can be attributed both to the increasing competitiveness of coal and gas power and to the emergence of a very unfavorable political climate, marked by considerable public opposition and a switch in government policy from active encouragement to definite discouragement, including in some countries a decision to phase out the industry. This change of attitude received major boosts from the accidents at Three Mile Island in 1979 and Chernobyl in 1986, which highlighted the risks from radiation.

The industry is not entirely moribund. Improved methods have enabled existing plants to increase their total output, and plants are generally getting extensions to their licenses beyond their originally expected lifespans. There are presently 27 new plants under construction, including 8 in India, 5 in China and 4 in Russia, while another 38 are planned.[420] A number of countries in Europe are dragging their feet on phase-out plans, particularly in the context of reducing greenhouse gas emissions. The US administration is pursuing plans to encourage new construction during the second decade of the century, a policy that has bi-partisan support. Nevertheless, for the industry to maintain or improve its relative position it would need to undergo a major resurgence.

Nuclear power's competitive position may well improve in the future. Increases in fossil fuel prices could have a considerable impact on competitiveness, given that fuel constitutes over half the life-cycle cost of a fossil powered plant. In contrast, nuclear fuel prices have far less of an effect: while a doubling of uranium ore prices would increase nuclear generating costs by only 5 per cent, a doubling of natural gas prices would increase gas-fired generating costs by some 80 per cent.[421]

Nuclear power would benefit greatly from anything that would reduce construction or capital costs. These typically account for 60 to 75 per cent of total generation costs, compared with 50 per cent for coal plants and 25 per cent or less for gas-fired ones.[422] There are a number of factors that could lead to a reduction in these costs. These include standardized large scale production, new plant designs and more rational safety regulation.

If nuclear power plants were built on a large scale various economies could come into play. First there are the economies that come with experience. Once a few plants have been built and commissioned, the experience gained will reduce the costs of future units.[423] Then there are the economies associated with specialized plant and machinery. Producing a large number of any product or component generally allows the investment in specialized production methods that would be too expensive at low production levels but would reduce costs at a larger scale of output. For example, you would not build a production line to produce a few hundred cars. It would be cheaper to make them 'by hand', i.e., with non-specialized machine tools. It is only when the number reaches a critical level that building a large specialized plant becomes cheaper, seriously cheaper. There are also a range of overhead costs, such as design and administration, that can be spread over a large number of units.

Standardization would also reduce the long delays due to the approval process that in the past have doubled the construction time and greatly increased the interest burden. According to new legislation being adopted in the US and elsewhere, once a standardized design has been certified as safe, all plants built to that design would automatically receive approval. Such prior approval could also be harmonized internationally in much the same way as in the aircraft industry. A power company would then only need to receive approval for the chosen site. However, even this may be unnecessary where the unit is to be built on an existing power plant site. Many sites have room for more reactors.

The new designs being considered for future reactors include various features that could make them cheaper to produce. Many new generation nuclear plants, including the Westinghouse AP‑600 and the 'pebble bed', would operate on 'passive' safety features which rely on natural forces such as gravity, convection, natural circulation, evaporation and condensation.[424] In the case of the AP‑600, this would mean 35 per cent fewer pumps, 50 per cent fewer valves, 70 per cent less cabling, and 80 per cent less ducting and piping than conventional LWR systems.[425] According to the developers of the pebble bed reactor, their design has no need for an expensive containment shell to prevent the escape of radiation in the case of an accident.[426] These and other new designs are also considered more suited to factory production and on-site assembly of modules than the old generation of plants.[427]

The competitive position of the industry will be most favorable where transport infrastructure is inadequate or distances from fuel sources considerable because nuclear fuel is a fraction of the weight and volume of fossil fuel. This could tip the balance, for example, in India, northern China and western Russia.[428]

Nuclear power may benefit from a move towards a hydrogen economy. Electricity from existing nuclear power plants can be used for the electrolysis of water. Nuclear reactors could also provide the heat for the steam reforming of natural gas, the method that currently produces 95 per cent of hydrogen. Natural gas reacts with water at high temperature to form hydrogen and carbon dioxide. However, this would require a new generation of reactors that have a far higher coolant outlet temperature. An even higher temperature would be required for thermo-chemical water splitting which converts water into hydrogen and oxygen. While this technology is not yet commercially available, a number of steps are being taken in that direction. A pilot project is being planned in Japan. The Americans and the French are also doing development work.[429] Some breeder reactors would achieve temperatures suitable for these processes, as would the pebble bed reactors currently at the trial stage.

The resurgence of nuclear power would require an improvement in the political climate. If nuclear power begins to make economic sense where it did not before, this could undercut opposition and strengthen support. The industry could also benefit from the fact that it does not emit greenhouse gases. This would depend on the extent to which global warming fears cancel out radiation fears, and on competition from other technologies with the same emission claims, such as solar and wind.

For nuclear power to continue playing an important role in the second half of the century, there will need to be a large construction program. Just to maintain current output would require the replacement of existing capacity in coming decades. To maintain the current 16 per cent share in the face of the six fold increase in electricity generation that would be required to bring 10 billion people up to current per capita consumption levels of rich countries, there would need to be 2,646 reactors (441 x 6), assuming no change in average output. To produce all of this electricity there would need to be 16,537 of them (2,646/0.16). This is one for every 605,000 people, somewhat more than the level for present-day France where there is one for every million people.
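
The reactor numbers are straightforward multiplication and division:

    # Reactor numbers implied by a six-fold increase in electricity generation.
    reactors_now = 441
    same_share = reactors_now * 6           # keep the 16% share: 2,646 reactors
    all_electricity = same_share / 0.16     # supply all electricity: ~16,537 reactors
    print(same_share, round(all_electricity))
    print(round(10e9 / all_electricity))    # ~605,000 people per reactor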

Nuclear power could conceivably meet all the energy requirements of this population at the current OECD average, through the production of electricity, heat and hydrogen. To provide 10 billion people with annual per capita primary energy at current average rich-country levels, we would need to produce 2,300 EJ (55,000 mtoe), a five-fold increase. This would require 35,243 reactors, or seven for every two million people.[430]

Resources

The current estimate of conventional resources of uranium is 14.4 million tonnes,[431] or over 200 years' supply at today's usage rate of around 65,500 tonnes per year.[432] A third of this is described as known conventional resources and would last just over 70 years at current usage rates, while the remaining two thirds are undiscovered conventional resources, based mainly on estimates of uranium thought to exist in geologically favorable, yet unexplored areas.[433] This figure is bound to considerably underestimate the ultimate resource. Investment in exploration has been quite low,[434] and a number of countries with significant resource potential in sparsely explored areas, such as Australia, have not compiled figures for undiscovered conventional resources.[435] Furthermore, according to Garwin this resource could be stretched by 25 per cent if more costly extraction methods were adopted that leave less of the uranium in the mining waste (tails).[436]
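
The years of supply quoted here are simple division:

    # Years of supply at today's usage rate.
    resource_t = 14.4e6                          # total conventional resource, tonnes
    usage_t_per_year = 65500.0
    print(resource_t / usage_t_per_year)         # ~220 years for the whole estimate
    print(resource_t / 3 / usage_t_per_year)     # ~73 years from known resources alone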

Thorium is another potential nuclear fuel, although it is not currently used. It is about three times more abundant in nature than uranium.[437] Furthermore, all of the mined thorium is potentially useable, compared with the 0.7 per cent of natural uranium used in existing reactors, so some 40 times the amount of energy per unit mass might be available.[438] The known resource is around 4.5 million tonnes.[439] However, this is bound to be the tip of the iceberg given the limited extent of exploration and the fact that it does not include data from China, central and eastern Europe, and the former Soviet Union. Thorium processing and reactor technology still needs a lot of development before it could become commercialized. India, which has more thorium than uranium, is at the forefront of research in this area.

There are also unconventional uranium resources to consider. These include about 22 million tonnes in phosphate deposits.[440] The recovery technology is mature and has been utilized in the past; however, costs are somewhat higher than the present uranium price.[441] Then there are the 4 billion or so tonnes contained in seawater which could possibly become a resource.[442] A number of trials have been performed to extract uranium and other valuable minerals from seawater. They use a special absorbent material, and the cost at this stage is estimated to be around $300 per kilogram.[443] (At the time of writing the uranium spot price was $122 per kilogram.) Another plausible method is to take advantage of the fact that life forms have the habit of taking up certain elements that are scarce in the nonliving world and concentrating them within their cells. For example, some sea animals accumulate elements like vanadium and iodine to concentrations a thousand or more times as great as in the surrounding sea water. It has been proposed that certain forms of algae could be cultivated to perform this trick with uranium.[444] No doubt seawater extraction would benefit greatly from a few decades of research and development.

When assessing the extent of nuclear fuel resources, it is important to keep in mind the possible adoption of so-called fast breeder reactors, which extract around 60 times more energy from each kilogram of uranium. Conventional thermal reactors can only use uranium‑235, which makes up less than one per cent of natural uranium. Fast reactors, however, can harness most of the uranium, which takes the form of uranium‑238. They can also make very effective use of thorium.

There was considerable interest in this technology during the early years of nuclear power, when it was thought that uranium would turn out to be scarcer and the industry a lot larger than proved to be the case. Around 20 plants were built in various countries including the US, France, the Soviet Union and Japan.[445] Most of them were eventually closed due to high costs, teething problems (including safety issues) and declining support for the industry. However, there are now signs of renewed interest. India, China and Russia have reactors planned. Also, the Generation IV International Forum, representing governments from many of the nuclear power countries including the US, UK, Japan and France, selected a number of fast breeder reactors to be among the six systems that will be the focus of collaborative research and development. The objective is to make advances over existing systems in areas such as economy, safety, proliferation resistance and protection from attack, and to have a number of systems ready for deployment by 2030.

So, to what extent could we rely on nuclear power? The current estimated resource of 14.4 million tonnes would provide only about 5 per cent[446] of 21st century energy production, assuming 2 per cent annual growth and no increase in the energy obtained from each tonne. Furthermore, it would be used up by 2090 if the current share of 6.5 per cent were maintained, or not much later than mid century if, in a few decades' time, we pushed capacity out to a 20 per cent share.
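
These figures can be roughly reproduced as follows. The sketch assumes world primary energy of around 470 EJ per year growing at 2 per cent annually from about 2005, with nuclear's 6.5 per cent share consuming 65,500 tonnes of uranium a year at today's energy yield per tonne.

    # Rough check of the 5 per cent figure and the 2090 exhaustion date.
    growth = 1.02
    century_ej = sum(470.0 * growth ** t for t in range(100))  # cumulative 21st century energy
    ej_per_tonne = 0.065 * 470.0 / 65500.0                     # yield at today's technology
    print(14.4e6 * ej_per_tonne / century_ej)                  # ~0.046, i.e. about 5 per cent

    used, year = 0.0, 2005
    while used < 14.4e6:                        # constant 6.5% share of growing demand
        used += 65500.0 * growth ** (year - 2005)
        year += 1
    print(year)                                 # ~2090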

However, it does not seem too wildly optimistic to envisage nuclear power being able to provide larger shares of this century's energy. Given more exploration and better extraction technologies the recoverable conventional resource could be considerably bigger than present estimates. Moderate increases in the energy harnessed from each tonne of uranium could also make a difference. Of course, the larger the share contemplated the more it would have to rely on the development of new technologies such as breeder or thorium reactors or the extraction of uranium from sea water. With such innovations the resource could become huge and a major provider of energy later this century or in the next.

The Safety of Nuclear Power

Nuclear power is very much under a cloud because of distinctive safety issues relating to its fuel. It is highly radioactive and some of it can be used to make nuclear bombs. This prompts a number of fears: power plants emit small amounts of radiation under normal operations and there is always the possibility of a major accident that releases large amounts of highly radioactive material into the environment, as happened at Chernobyl in 1986; spent fuel may leak from its disposal site into the environment at some time in the future; and nuclear fuel may be diverted to terrorists for bomb making. The radiation concern is examined first, and then the threat of nuclear terrorism, where the principal hazard is the explosion rather than the radiation.

Radiation associated with nuclear power consists of subatomic particles that shoot through space at very high speeds. It is called ionizing radiation because it can penetrate our body, damaging cells in the process. In this way it is different from harmless forms of radiation such as radio waves.

Nuclear power reactors are not the only source of ionizing radiation. To begin with there is natural radiation to which humans always have been and always will be exposed. In the United States, people on average receive an annual dose of 300 millirems.[447] This includes radiation from radioactive elements in rocks and soil, from within our own bodies and from outer space.

Radioactive elements in rocks and soil are principally potassium, uranium, and thorium.[448] As well as naturally emerging from the ground, these can be released by human activities such as burning coal, oil, gas and wood, and by mining, plowing, construction and well-drilling.[449] This means that brick, stone, and other building materials are slightly radioactive, some types more than others. For example, it has been claimed that Grand Central Station in New York City, which is a massive granite structure, provides commuters with a level of radiation exposure well in excess of what they would receive from visiting a nuclear reactor.[450] The radioactive material residing in our own body comes from naturally occurring substances, such as potassium-40, which are vital to our survival. These irradiate our organs, including bone marrow, testicles and ovaries. We even irradiate each other at close quarters. This radiation from our bodies delivers an exposure close to a third of what we receive from rocks and soil.[451] From outer space we receive cosmic radiation. Most of it is absorbed by the atmosphere, so we receive a higher than average dose by living at higher altitudes or by flying, mountain climbing or skiing.

As well as natural radiation, another big source of exposure is from medical radiology. This includes x-rays and a whole host of other diagnostic tools. In the US this source accounts for 35 per cent of all radiation exposure and 90 per cent of the total man-made dose.[452] Other sources include TV sets, smoke detectors and airport X-ray machines.

Radiation exposure from all these various sources is fairly low. However, it is still far higher than routine emissions from the nuclear power industry. According to the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), estimated doses from nuclear facilities account for less than 0.05 per cent of the total dose from natural and medical sources.[453]

Even for those living near a reactor the exposure is a tiny fraction of what they receive from other sources. According to Cohen, it is comparable to what a typical viewer receives from a television picture tube.[454]

Radiation prompts two health concerns. At very high doses, of the kind that only the nuclear industry (civil or military) can deliver, it can cause radiation sickness, which burns the skin and damages the central nervous system, internal organs and bone marrow. This damage allows rampant infection. Whether victims die or survive depends on the dosage, their health and age, and the quality of medical treatment.

The other health effect is an increased risk of cancer some time in the future, with the risk depending on the dose. In most cases the latency is 20 years or more. The exceptions are some childhood cancers and leukemia that may occur 3-5 years after exposure.[455]

There had in the past been concerns that radiation exposure could have genetic effects that could be passed on to future generations. However, the available evidence suggests that this is not the case. Research has shown that radiation can cause genetic mutation in plants and test animals including fruit flies.[456] However, these have not yet been detected in people. Studies of the children of Hiroshima and Nagasaki atom bomb survivors show no excess of genetic defects.[457] Nor is there any increased incidence in areas of high natural radiation.[458] Radiation is presumably a weak mutagen for humans, just one of thousands of known mutagens in the environment which, combined, result in about 10 per cent of all new-born children showing some evidence of genetic defects.[459] There is no evidence that radiation causes any other illness and this is in keeping with our knowledge of radiation and the causes of illness generally.[460]

The increased likelihood of contracting cancer depends on the level of radiation. The risk is known with a fair degree of certainty at higher levels of exposure. At the lower end, however, there is far less certainty and quite a lot of controversy.

Given the extremely large number of people who contract cancer, it is difficult to determine statistically, with epidemiological studies, the extent to which radiation could be a contributor. In developed countries about half the population will get cancer from one cause or another and about half of those will die from it. This means that thousands of extra cancer cases in a particular population would produce quite a small increase in the rate, which would be difficult to attribute to radiation rather than to random variation in cancer rates or to other factors that make this population different from others.

By the same token, it is difficult to draw conclusions from unusually high or low cancer rates for small groups that have been exposed to higher than average levels of radiation. Small groups can be atypical for a range of reasons that are difficult to take into account.

At this stage our knowledge of how radiation does its nasty work is far too inadequate for us to assess its effect from first principles. More research has to be done into the nature of radiation induced cell damage and how it causes cancer.

In the past there was a general acceptance of what is referred to as the linear no threshold hypothesis (LNTH). This is based on a linear extrapolation from cases where high levels of exposure were experienced and the risk level is known with some degree of certainty. Most of the information is provided by studies of cancer incidence among those exposed to radiation from the atom bombs dropped on Hiroshima and Nagasaki. These survivors were exposed to an instantaneous dose of hundreds of rems plus subsequent longer-term exposure from fallout. Information has also been gained from the medical records of people subjected to heavy doses of X-rays as a treatment for spinal diseases, a misconceived practice that ceased in the early 1950s.

Based on these studies, scientists have estimated the cancer death risk from a radiation exposure of 100 rems to be 5 per cent.[461] According to the LNTH, this can be extrapolated to much lower doses. So if a 100 rem exposure gives you a 5 per cent risk of developing a fatal cancer, a one rem (1,000 millirem) exposure will give you a 0.05 per cent risk. In other words: halve the dose, halve the risk; double the dose, double the risk.

The LNTH also allows us to talk in terms of collective doses or 'person rems'. For every 2,000 person rems, there is one death. This can be made up of an infinite number of combinations of dose and population. For example, one person receiving 2,000 rems, 2,000 people each receiving one rem, or 2 million people each receiving one millirem will all lead to one death. This sort of arithmetic does not apply to most things we are exposed to, where there is generally a threshold below which exposure is harmless. For example, 30 sleeping pills taken at once may be enough to kill an individual, but that does not mean that taking one tablet carries a one in thirty chance of dying, or that if 30 people each take one tablet, one of them will die.

The following examples should give a good idea of the kinds of risks implied by the LNTH. The background exposure of 300 millirems per year received by the US population of 300 million people would result in 45,000 deaths per year. If we assume the same level of exposure for a world population of 6.5 billion people, this would result in 975,000 cancer deaths per year. As part of their background exposure, the average American receives about 31 millirems of radiation per year from cosmic rays.[462] This would kill about 4,650 Americans per year. The radiation naturally present in our bodies delivers about 39 millirems per year. This translates into about 125,000 deaths annually worldwide.
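
All of these examples apply the same one-death-per-2,000-person-rems rule, as this sketch shows:

    # LNTH arithmetic: a 100 rem dose carries a 5% risk of fatal cancer,
    # equivalent to one death per 2,000 person-rems of collective dose.
    def lnth_deaths(dose_rems, population):
        return dose_rems * population / 2000.0

    print(lnth_deaths(0.300, 300e6))    # US background: 45,000 deaths a year
    print(lnth_deaths(0.300, 6.5e9))    # world at the same exposure: 975,000
    print(lnth_deaths(0.031, 300e6))    # cosmic rays in the US: 4,650
    print(lnth_deaths(0.039, 6.5e9))    # radiation within our bodies: ~127,000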

It did not take long for a general dissatisfaction with the LNTH to emerge. Most scientists think it overstates the risk: a dose that is, say, 50 per cent lower than another will carry a more than 50 per cent lower cancer risk. Furthermore, below some level the cancer hazard is zero, or so low that it is effectively zero. The general view among scientists is that there is a lack of conclusive evidence of low level radiation effects below total annual exposures of about 5 to 10 rems.[463]

It has been suggested that a threshold exists because, up to a certain level, our body has a capacity to repair a whole range of different kinds of damage. It is only when the attack reaches a certain intensity that the repair systems start to be overwhelmed, and the body is increasingly degraded as the dose increases.

On the other hand, a small number of researchers believe that the LNTH understates the risk from low level radiation. They support the supra-linear hypothesis: that more damage is caused per rem at low doses than at high doses. They theorize that perhaps low doses weaken and damage cells (which live on to damage other healthy cells), whereas high doses simply kill cells.[464]

While we need to keep in mind the problems with epidemiological studies, proponents of the prevailing view have a large amount of evidence which at least on the face of it supports their position. This includes the experience of Chernobyl, plant workers, medical patients, Japanese atomic bomb survivors who received relatively light exposure and the effect of differences in natural radiation levels.

In the case of survivors of the Hiroshima and Nagasaki bombings, those who received instantaneous radiation doses of less than 20 rems have not suffered increased cancer rates.[465]

A UN study 14 years after the Chernobyl accident concluded that up to then there had been no increase in deaths from leukemia, even among recovery workers who received fairly high doses of radiation, despite its latency period of only 5-10 years after radiation exposure.[466]

Extensive studies by radiation protection bodies have been unable to detect any sign that workers dealing with radioactive material have cancer mortality rates higher than those of the general population.[467] A study of over 20,000 men who took part in the UK atomic bomb tests in Australia and the Pacific in the 1950s showed no detectable effect on their life expectancy or on the incidence of cancer or other fatal diseases.[468] And a study of 30,000 persons exposed to radiation while working with nuclear ship propulsion systems found that their mortality rate was lower than that of another 30,000 persons in a control group who received a more normal amount of radiation per year.[469]

There is no sign that regions with higher levels of natural radiation have higher cancer rates. Such regions are at higher altitudes and more exposed to cosmic rays, and/or have higher than normal uranium content in their soil. The cancer death rate in seven western states of the US is 15 per cent lower than in the rest of the continental US even though the level of radiation is almost twice as high.[470] In some parts of India and Brazil the natural background is over ten times the world average, due to the presence of radioactive rocks, but the population shows no signs of being affected.[471]

It is even possible that small radiation doses are beneficial. An explanation offered for this is that low radiation stimulates the body's repair mechanisms.[472] Experiments indicate that the irradiation of mice by gamma rays increases their survival rate by one week per rem, and that the irradiation of salmon eggs increased the number of viable eggs and the rate of return of the adult fish to their birthplace to breed.[473] There are also statistical studies of human exposure that support this proposition. Twenty years ago in Taiwan, recycled steel accidentally contaminated with radioactive cobalt-60 was used in the construction of more than 180 buildings, which were occupied by about 10,000 people for between 9 and 20 years.[474] With seven cancer deaths, the cancer mortality rate for this population was 3.5 per 100,000 person-years, compared with a rate in the general population of Taiwan over these 20 years of 116 per 100,000 person-years. Assuming that the people concerned were fairly typical of the population at large in terms of factors such as income and age - and this needs to be confirmed - their experience seems to suggest that long-term exposure to radiation at a dose rate of the order of 5 rem per year - more than 10 times normal background exposure - greatly reduces cancer mortality.[475]
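
The Taiwan mortality rate quoted above can be reproduced if the full 20-year occupancy is assumed for all 10,000 residents; that assumption is mine, since actual occupancy ranged from 9 to 20 years.

    # Implied cancer mortality rate in the Taiwan cobalt-60 buildings.
    deaths = 7.0
    person_years = 10000 * 20.0              # 10,000 occupants over 20 years (assumed)
    print(deaths / person_years * 100000)    # 3.5 per 100,000 person-years, versus 116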

Plant Safety

Normal Operations

During the normal operation of a nuclear power plant, gaseous and liquid discharges containing very low amounts of radioactive material are released into the environment. The extent of these emissions (both absolutely and per unit of generated electricity) has been reduced considerably since the early days of the industry by the use of improved technology, and this is continuing.[476]

Government regulations place limits on these emissions which keep exposure to a minute fraction of the natural background radiation levels that people already experience. In many countries permissible levels are set so that a hypothetical person who stood at the boundary fence, drank the plant's cooling water and consumed food grown nearby would not receive more than some minute increase in their normal exposure level.[477]

In reality, of course, no one experiences even that minute increase. No one lives at the reactor fence, and even if they did they would not experience the maximum exposure because most plants keep their emissions well below this level.[478] The average lifetime exposure for people living in nuclear power regions such as North America and Europe is next to nothing, and less than the increased natural radiation exposure from a long plane trip.[479] On an annual basis, people living near a nuclear power plant receive about one millirem of extra radiation exposure.[480]

Studies of populations surrounding nuclear reactors also suggest no health effects. A US survey sponsored by the National Cancer Institute studied cancer deaths in 107 counties with nuclear facilities within or adjacent to their boundaries.[481] Each county was compared to three similar 'control counties.' Their report, published in 1990, found 'no evidence to suggest that the occurrence of leukemia or any other form of cancer was generally higher in the study counties than in the control counties.' Studies in other countries generally supported the NCI's findings, including ones in France and Canada.[482]

Reactor Accidents

A much bigger concern than routine emissions is the threat of reactor accidents that release large amounts of radioactive material into the environment and endanger the health of thousands.

Any serious accident would have to be the result of a string of mishaps. In the first instance there would have to be a breakdown in the cooling system which leads to the melting of the reactor core - a "meltdown". This in turn would have to cause an explosion which spreads radioactive material into the environment.

Fortunately, the risk of such accidents occurring is quite low. This is seen most clearly in the truly remarkable safety record of nuclear reactors over the past half century. In the case of the water reactors used in most of the world's nuclear power plants, there have been over 12,000 reactor years of service without an accident endangering the public. There has been a similar amount of service by research and marine reactors with an equally unblemished record. The US navy alone has accumulated more than 5,500 reactor years of operational experience with its nuclear submarines, aircraft carriers and other surface vessels.

There has been one serious accident, at Chernobyl in Ukraine in 1986, during a misconceived experiment. However, that says more about the state of the Soviet Union in its dying days than about the safety of nuclear reactors. The accident would not have happened but for a string of procedure violations. According to the Soviet investigators, there had been six separate contraventions of procedures by the reactor operators during the experiment; if any one of these had not been committed, the accident would not have happened.[483] There was a strong culture of disregarding safety rules, and a complacency encouraged by the fact that past accidents and mishaps were kept secret from those in the industry as well as from the public at large. This was reflected in the fact that an electrical engineer with limited knowledge of reactor operations was in charge of the "experiment," and there was no one in the control room who understood the risks they were taking. Furthermore, the Chernobyl reactor was of a Soviet design that was far more vulnerable to criminal negligence and incompetence than the types used elsewhere. The reactor had graphite as a moderator instead of water, so that a loss of water coolant could increase the chain reaction and the resulting heat, whereas in a light water reactor a loss of coolant brings the chain reaction to a halt and limits the temperature that can be reached by the reactor core. Also, the reactor did not have the massive containment structure common to most nuclear plants elsewhere in the world. According to some analysts, such a structure would have withstood the steam pressure that caused the explosion.

Generally speaking, the chance of a disaster in a reactor is remote because of a range of emergency safety features and the need for a number of unlikely and unrelated mishaps to coincide. Of particular importance are the backup arrangements for the cooling system, which prevent the reactor core from overheating. There are backup pumps and massive flywheels that keep water circulating even if the power is cut. If the main cooling system fails, emergency core‑cooling systems which are independent of the primary core-cooling system come into operation. In some cases this system is a pressurized water tank that does not need pumps but simply dumps large amounts of water on the reactor. It is very unlikely that both the primary and emergency core cooling systems would fail.

The rods that control the chain reaction also have a number of emergency features. First, there are a number of independent clusters of control rods that can be inserted by gravity into the core to stop the reaction; any one of the clusters would be enough to achieve this.[484] In the case of a power failure, the control rods are immediately released because they are only held above the reactor by electromagnets.[485]

What happens if there is damage to the reactor core as a result of a failure in the cooling system? In the first instance, radioactive fuel has to escape from the steel pressure vessel into the containment dome and then set off a steam or hydrogen explosion that breaches the dome encasing the reactor. This would have to be quite a powerful explosion given that the dome is made of steel-reinforced concrete about a meter thick.

Experiments suggest that it is not easy to set off a steam explosion. For example, in 1980 scientists at the Sandia Laboratory in New Mexico unsuccessfully attempted to create a large one by dropping molten uranium into water.[486] Nevertheless, the inside of the protective dome is equipped with water spray nozzles or refrigeration systems that will condense the steam and reduce the pressure.[487] Some reactors have large volumes of ice on hand.[488]

The concern about hydrogen stems from the fact that it may be released by a chemical reaction if extremely hot steam comes into contact with the fuel-casing material.[489] However, according to Cohen, the research seems to indicate that even if all the hydrogen that could possibly be generated by core damage were to explode all at once, the force would not be powerful enough to break most containments.[490] In nearly all scenarios the hydrogen would be produced gradually and ignited in a series of small explosions or fires caused by sparks from various sources such as electric motors. This is assisted in some cases by the installation of devices that constantly create sparks. Some reactors have an inert gas in the containment, depriving any hydrogen of the oxygen needed for an explosion.

If the dome can hold out for a few hours from the initial release of radioactive material, a lot of it will either become stuck to the dome walls or equipment, or be removed by various systems in place for that purpose. The latter includes ventilation systems and water sprinklers for removing particles from the air.[491]

As well as the prospect of reactors exploding radioactive material into the air and surrounding landscape, there is also a concern about groundwater contamination if the fuel melts through the thick concrete floor. This has been colorfully dubbed the 'China syndrome'. However, according to Cohen, if molten fuel were to come into contact with groundwater it would flash into steam which would build up sufficient pressure to keep the rest of the groundwater away.[492] Once the fuel cooled it would be in the form of a glassy insoluble mass. If there were a problem, measures could be taken in good time to prevent any ongoing contact.

Various studies suggest that nuclear facilities would withstand external impacts such as World Trade Center style attacks. An aircraft may look quite solid but it is actually fairly light and flimsy. Given the small size of the protective dome, the worst that could happen is for it to be hit by one of the engines. Studies also show that spent fuel storage pools are able to withstand such attacks, and experimental evidence indicates that dry storage and transport casks would retain their integrity.[493]

Earthquakes are another concern. In the popular imagination there are visions of plants splitting in two or being swallowed up by cracks in the earth. The reality is very different. Like many modern structures, nuclear power plants in or near earthquake prone regions are built to withstand the worst expected earthquakes. They also have equipment that monitors seismic activity constantly and would shut the plant down in the event of an earthquake.[494]

The Three Mile Island (TMI) semi-meltdown is the only serious incident involving a water reactor. However, there was no loss of life, nor were there significant radiation emissions that could cause future health problems. Nevertheless, it was still a major mishap because the facility was completely disabled, and some argue that it could easily have turned into something a lot worse. Furthermore, it gained added significance from the fact that it was perceived to be a lot worse than it really was, and it contributed to growing opposition to nuclear power.

Investigations revealed that the problems were mainly in the way the plant was operated rather than in the technology.[495] This has generally led to better training, improved controls and instrumentation, and practices such as always having an engineer on duty. By closing off the TMI route to meltdown, these measures have made reactor operations safer.

Serious technological and management failings cannot, of course, be ruled out. This is exemplified by the recent incident at the Davis-Besse plant in Ohio, US, where the pressure vessel had been badly corroded by the boric acid in the cooling water. This represented both a technological chink in the armor, given that it arose from an unanticipated problem, and a management failure, in that a flawed inspection regime failed to pick it up. The Nuclear Regulatory Commission deemed it to be a serious incident in that it involved a significant loss of defense in depth capability. According to the Commission, a worst-case failure scenario would have been a high-pressure leak of slightly radioactive primary cooling water (as steam) into the reactor containment building. The plant operators have replaced local management and spent large amounts of money on repairs and improvements. Other plants have been inspected for similar structural degradation but nothing has been found.

What about the increased risk from a growth in the number of reactors, in the event of a resurgence of the industry? What may seem like a low risk when there are only 441 reactors, may be looked at differently when there are thousands of them. This concern should be allayed by the fact that future reactors will be even safer than present ones. There will be greater reliance on safety systems that employ perfectly reliable 'passive' natural forces such as gravity, natural circulation, convection, evaporation, and condensation.[496] In the case of the pebble bed design, each fuel pebble is surrounded with its own outer shell that traps all radioactivity inside and if the helium coolant completely leaked out of the core, the fuel would not get hot enough to melt the uranium oxide within the fuel pebbles.[497]

The next question is - if a reactor blows, how likely is it to lead to some incomparable disaster rather than one of more normal proportions? Certainly, achieving the more disastrous outcomes requires less likely events or combinations of events. There would need to be a failure by emergency workers to stem the emissions. This could be followed by evacuations being blocked by floods or snow storms. The situation could then be made even worse by an atmospheric temperature inversion concentrating radioactive material over a large trapped population downwind of the reactor. While we can conjure up such mega-death scenarios, they should not necessarily influence our actions if they are no more likely than other risks of disaster that are an inevitable part of life and of doing the things we want to do. For example, the government does not prohibit major sporting events because of the minute risk that the stadium will collapse from a construction fault or be hit by a falling Boeing 747.

What can Chernobyl tell us about the possible impact of a nuclear reactor disaster? According to the 2000 report by the UNSCEAR, there were 134 confirmed cases of radiation sickness among reactor and emergency workers who were on the scene at the time of the accident and received very high radiation doses. Of these 29 died within four months of the accident. A further 11 died between 1987 and 1998. The survivors from this group have a range of illnesses and a raised risk of cancer in future years.

Among the 240,000 recovery workers exposed to fairly high doses in the initial cleanup phase, there has to date been no raised cancer rate. This is surprising in the case of leukemia, which usually emerges within a few years of high radiation exposure, and gives weight to the anti-LNTH position. On the other hand, there is still the prospect of a raised rate for other cancers in the future, given that it takes 20 years or more for radiation to have its effect. With the doses received by this group, and assuming the LNTH, we can expect to see a thousand or so extra cancer deaths in the future. Among the few hundred thousand recovery workers who arrived more than a year after the accident, the radiation dose was much lower and any increase in cancer deaths would not be high enough to be detected in an epidemiological study.

Beyond the reactor site and its immediate surrounds, radioactive contamination mostly affected an area of 150,000 km2 with a population of about five million people. However, the only public health impact of radiation within this area that is in evidence 20 years after the accident is 1,800 mostly treatable cases of thyroid cancer due to childhood exposure to radioactive iodine. The thyroid gland of young children is particularly susceptible to the uptake of radioactive iodine, which has a half-life of 8 days and was a major component of the fission products released from the reactor. Indeed, many of these cases could have been avoided. A low iodine diet made the children more susceptible. Also, the authorities could have been more effective in distributing stable iodine to prevent the uptake of radioiodine by the thyroid and in restricting the consumption of milk and fresh leafy vegetables in the vicinity of Chernobyl. People who were children at the time of the accident continue to have an increased risk of thyroid cancer, especially those who were under five years old. In the case of other cancers, no statistically noticeable rise in death rates is expected in the future because of the low radiation doses received.

As we can see, this is a tale of human tragedy and hardship, but it is not Armageddon. If anything in this sorry business is in line for that title, it is the psychological and medical trauma caused by the gross over-reaction to the accident. Hundreds of thousands of people had their lives disrupted by being relocated from regions where the radiation level, although raised, was still lower than the natural level in many parts of the world. This has led to high rates of unemployment, depression, hypochondria and stress-related illnesses such as heart disease and obesity. Then there is the fear of getting cancer, which is way out of proportion to what is actually a fairly small risk. Anti-nuclear fear mongering must take some of the blame for this state of affairs.

So, where in the spectrum of nuclear accidents does Chernobyl belong? It was certainly a very bad accident. The roof was blown off and large quantities of radioactive material were scattered over the surrounding region. One can imagine how it could easily have been worse. For example, the reactor could have been in a more heavily populated location and the weather crueler in delivering rain from radiation-filled clouds.

However, these factors would appear to be overshadowed by the range of ways it could have turned out a lot better had it not been for circumstances particular to the Soviet Union. To begin with, the reactor was of a type that uses graphite as a moderator rather than water, a type not used in the West. The burning graphite proved to be an excellent means of distributing radioactive material into the atmosphere. Emergency workers were poorly protected by Western standards, and the performance of emergency measures was not always "best practice". Evacuations were delayed because the authorities did not want to admit to a serious accident until absolutely necessary.

However, even if there were a good chance of an accident turning out worse than Chernobyl, it would have to be a lot worse to count as a disaster of unusual proportions. Even accidents worse than Chernobyl occurring on a fairly regular basis would probably compare favorably with the deaths from coal mining and fossil fuel emissions or from motor vehicle accidents. Possibly, the deaths from Chernobyl will turn out to be fewer than would result from installing millions of wind turbines or solar panels.

Nuclear Waste

A common argument against nuclear power is that there is a "waste problem." It is claimed that we are unable to safely dispose of the radioactive waste created both in mining and uranium processing and in reactor operations.

Mining and Uranium Processing Waste

The main form of radioactive waste from the 'front end' of the process is the ore body left behind after the removal of the uranium. These tailings amount to about 400 tonnes for every tonne of uranium[498] and are 50 to 100 times more voluminous than all other radioactive wastes combined.[499]

Its radioactivity is similar to that of natural uranium; however, it potentially constitutes more of a hazard because it sits on the earth's surface in pulverized form rather than in the ground. It generates radon gas and radioactive particles that can get into the air or contaminate streams. Nevertheless, emissions from uranium tailings would still be only a mere fraction of natural emissions from the soil and considerably less than those released by farmers' tillage of the soil.[500]

Tailings emerge from the uranium milling process dissolved or suspended in water. This liquid is pumped into ponds or dams that have to meet certain design specifications to prevent contamination of the ground underneath. The process water is then decanted into a settling pond while the remaining tailings dry out, leaving what look like piles of sand. These are covered with enough rock, clay and soil to reduce radiation to the levels naturally occurring in the region, and a vegetation cover is then established.[501]

Various processes are employed to remove chemicals and radioisotopes from the decanted water. These are retained as a sludge that settles on the bottom of the pond. The water evaporates or is released according to stringent rules on radiation and other contaminant levels. The sludge is collected and disposed of when the site is decommissioned.[502]

There are some problems with mine and mill waste from a time when procedures were less strict. These sites still require efforts to stabilize, protect or relocate the waste.[503] Past mining practices were also a hazard to miners, with high radon exposure leading to a higher incidence of lung cancer. However, this has not been a feature of uranium mining for some decades.[504]

Reactor Waste

The other form of waste results from reactor operations - 'back end' waste. It is produced in much smaller quantities; however, it includes intermediate and high level waste.

In terms of volume, most of it is low level waste that decays fairly quickly, fading to background levels within months or years.[505] This waste includes filters and the radioactive material they have collected from air and water in the reactor, and things that have been contaminated by contact with radioactive material, for example, gloves, clothing, pipes and valves. Not all low-level radioactive waste is from the nuclear industry; a significant proportion comes from other users of radioactive material such as hospitals and research laboratories.

Intermediate-level wastes include chemical processing resins, fuel rod casings and metal from spent fuel assemblies[506] and make up less than 20 per cent of reactor waste by volume.[507]

While only comprising 5 per cent of the total volume, high level waste contributes 95 per cent of the radioactivity. This is the spent fuel from the fission process. The entire US nuclear power industry produces about 2,000 tonnes annually[508] and has produced about 50,000 tonnes since the industry began.[509] This is equivalent in weight to a medium sized cruise ship.

Storage or Disposal of Waste
Current Arrangements

Presently, most high level waste is kept in temporary storage facilities at the plant site. It is placed in specially designed containers and stored in pools of water that keep it cool and shield its radiation. In those countries with fuel reprocessing plants, high level liquid wastes are stored in cooled, multiple-walled stainless steel tanks surrounded by reinforced concrete.[510] In both cases, temporary storage is designed to do its job for many decades to come. Most intermediate waste is also kept in temporary storage.[511]

Low-level waste is typically stored on-site, either until it has decayed away and can be disposed of as ordinary trash, or until amounts are large enough for shipment to a low-level waste disposal site in special containers.[512] The waste is typically packaged in steel drums and buried in shallow trenches at licensed burial grounds.[513] Some countries use engineered facilities such as concrete lined trenches or vaults and there is some move towards deep disposal.[514]

Permanent Disposal

Temporary storage of high level waste has proven quite effective and can be maintained indefinitely. Furthermore, the waste would be easier to dispose of a few decades down the track, when the heat and radioactivity have dropped to a small fraction of their levels when the fuel was first removed from the reactor.

The view that we need permanent and inaccessible storage that requires no action by future generations seems to be based on the following two premises. Firstly, the waste is a burden we should not pass on to future generations. We enjoyed the benefits so we should bear all the costs. Secondly, future generations may regress to Mad Max barbarism or 'advance' to a low tech utopia of 'simple living' and be incapable of dealing with the waste. (By the way, it is easy to imagine how the latter could quickly degenerate into the former.)

The first premise fails to recognize the huge debt that our descendants will owe us for their inheritance of accumulated capital and technical and scientific knowledge. Looking after some ancestral waste is a small recompense. Furthermore, any burden will be greatly reduced by the onward march of science and technology, which will provide ever cheaper methods of storage or disposal.

It is hard to worry about the second premise. If we revert to barbarism or feudalism, radiation exposure in some areas would be a small problem compared with all the other sources of increased death and misery accompanying this new state of affairs. Furthermore, at least in the case of medium and low radiation doses, there would be less impact in a society that has regressed to a life expectancy of 35 years or so: most people would die of something else before any increased risk of cancer had time to kick in. And in the case of high doses, people would soon learn to stay away from the source and incorporate it into their myths and legends.

Also, by not going down the inaccessible permanent disposal route, we would retain what may turn out to be a valuable resource, available for use in future reactors that make full use of the uranium and not just the uranium-235. Another argument for continued accessibility, which should appeal to the worriers, is that it would allow future generations to make disposal super super super safe and not just super super safe:

The ability to monitor and gain access to waste once it is in a permanent disposal site is seen as increasingly important to public acceptance of disposal plans. This would allow future generations to determine whether the site is still safe. Maintaining some access to the site could be useful for two reasons related to public acceptance. First, it would make it easier to correct problems if they arise. Second, it would allow future generations to apply new methods of waste disposal.[515]

Despite the strong case against it, inaccessible permanent disposal is the policy in the ascendant. Given this, ocean dumping should be the preferred method because it is cheap as well as safe. The waste would simply have to be converted into an insoluble form and placed in containers designed to last for thousands of years. In the unlikely event of a canister failing, any radiation would be released slowly and diluted in the ocean, where it would scarcely be noticed given that the ocean already contains 4 billion tonnes of uranium and other radioactive elements.[516] Besides, nature does its own ocean dumping all the time: uranium ore is continually being eroded into rivers and finally discharged into the sea.
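
As a crude check on the dilution argument, compare the entire US spent fuel inventory mentioned earlier with the ocean's natural load. This is a mass comparison only, and spent fuel is initially far more radioactive per tonne than natural uranium, so treat the sketch below as an order-of-magnitude illustration rather than a safety analysis:

    # Mass of all US spent fuel versus radioactive material already in the ocean.
    ocean_uranium_tonnes = 4e9       # uranium and other radioactive elements (cited above)
    us_spent_fuel_tonnes = 50000     # cumulative US high level waste (cited earlier)

    share = us_spent_fuel_tonnes / ocean_uranium_tonnes
    print(f"{share:.7f}")            # 0.0000125, i.e. about 0.001 per cent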

Even cases of accidental or rogue dumping indicate that concerns are overblown. Russia has dumped sixteen complete nuclear reactors from old submarines and ships into the Kara Sea north of Siberia. Six of them still contained spent fuel. These were not encased in concrete or carefully buried in the ocean floor; they were just dumped. However, despite this rather insouciant manner of disposal, researchers have been unable to detect any measurable radiation from these reactors anywhere in this fairly small area of water.[517] Over time they will be buried by the silt which is delivered in great quantities by the Yenisey River. In 1968 a US B52 bomber armed with plutonium-fueled nuclear bombs crashed off the coast of Greenland. The recovery team was only able to retrieve around 90 per cent of the plutonium, with the rest dispersed into the shallow coastal waters. Subsequent research indicated no increase in plutonium concentrations, suggesting that it had been encased by the sediments on the sea floor. Seven nuclear submarines are currently sitting safely on the sea floor. One of them, the Soviet submarine K-8, is feared to have left 20 nuclear mines at the bottom of the Gulf of Naples before sinking under tow in the Bay of Biscay. The Soviet wreck which sank in 1989 did, however, require 'repair' work to prevent radiation leaks. Of course, as things stand, ocean disposal is politically impossible because it pushes all the phobia buttons and is now prohibited under the London Dumping Convention. This leaves geological disposal.

A number of countries have identified potential underground storage sites and have conducted geological and geophysical tests to determine their suitability. These include Belgium, Canada, Finland, France, Germany, Spain, Sweden, Switzerland and the United States. Possibly the first cab off the rank will be America's site at Yucca Mountain in Nevada, although when it will finally open is still unclear. Compared with ocean disposal this method is appallingly expensive, although not prohibitively so given that it would still be only a couple of per cent of the cost of nuclear power.

The primary safety concern with underground storage is that the waste would eventually be dissolved by ground water and carried into wells, rivers and soil. It could then reach human stomachs through drinking water supplies or through food plants that have taken up contaminated water through their roots.[518] The chance of exposure through inhaling contaminated dust is far smaller because groundwater only occasionally breaks the surface and 95 per cent of the dust we inhale is filtered out by hairs in the nose, pharynx, trachea and bronchi and removed by mucous flow.[519] Direct irradiation by radioactive materials in the ground would not be a problem because rock and soil are excellent shielding materials that radiation cannot penetrate.[520] Other concerns relate to possible disturbances such as earthquakes, erosion, volcanic activity, mining and meteor impact.

Geological disposal is based on a strategy of multiple barriers, working from the innermost to the outermost. Firstly, the waste is in a form, possibly glass, that is not readily dissolved. Both archaeological and experimental evidence suggests that dissolving glass is an all but impossible task.[521] Secondly, the waste is sealed in corrosion-resistant containers. In the case of Yucca Mountain, these will have an outer layer of titanium. Tests have shown that this metal would prevent water penetration for thousands of years when immersed in a very hot and abnormally corrosive solution, while under more normal groundwater conditions containers would retain their integrity for hundreds of thousands of years.[522] So the containers alone provide a rather complete protection system even if everything else fails. Thirdly, the containers are surrounded by a backfill of clay that would swell if wet and form a tight seal keeping any water flow away from the package.[523] The clay would also insulate the waste from minor earth movements.[524] The fourth barrier is provided by placing the waste in a suitable geological environment or geosphere. Any waste which overcame the first three barriers would need to encounter conditions giving it no opportunity to travel to the surface in groundwater. This means low rainfall to limit the means of transmission and a poor transport medium such as impervious rock with no fractures. It also means placing the repository well above any water table, which in turn would need to be a long way from lower lying ground where groundwater can come to the surface. The chosen site would need to be in a region unlikely to be subject to future volcanic eruptions, and it would have to be sufficiently deep that neither meteorite impact nor surface erosion would expose the waste.

Cohen argues that virtually any deep underground storage would provide all the protection needed in the totally unlikely event that the first three barriers were breached.[525] He points out that even if exposed waste were surrounded by groundwater, it would take an extremely long time to reach the surface, if it ever did. Firstly, groundwater near the waste would take a thousand years or so to emerge because it moves very slowly and travels horizontally, following the rock layers, and hence typically must travel many miles before reaching surface land at a lower altitude. Secondly, the radioactive material would move far more slowly than the groundwater because it would constantly be filtered out by the rock material. Indeed, it may well become a permanent part of the rock.

Finally, one needs to ask whether it would matter much if radiation from a particular location got into the food or water supply. If it were at a level that caused concern, it would quickly be picked up by routine monitoring programs, and fairly simple countermeasures could be taken, such as refraining from growing crops, grazing animals or drinking the water. Furthermore, future advances in medical science will greatly reduce and possibly eliminate the threat posed by radiation, so at some point radiation exposure may cease to be a health concern.

Transporting Radioactive Waste

The specter of radiation being released from nuclear waste while in transit is made much of by the radiophobes. The record to date, at least in the US, has been incident free: over the past 40 years the US industry has moved more than 3,000 shipments without a single radiological release.[526]

Certainly movements will increase considerably once centralized geological storage facilities are brought into operation and/or greater use is made of reprocessing plants. So any risk, if there is one, will increase. However, a serious radiation leak in transit is a very remote possibility. This is ensured by the tight regulations governing the activity, particularly concerning the form the waste must take and the method of containment.

The containers used are subject to tests to assess their ability to deal with a range of accidents or attacks. These include ensuring that they can withstand the effects of a high speed truck or train crash, burning jet fuel and the high pressures of deep water. Even if these containers were somehow breached, contamination would be greatly limited by the fact that the waste takes a solid form. It is unable to leak out like a liquid or a gas. Significant radiation exposure would be limited to people who chose to linger in the immediate vicinity of the accident.[527]

Nuclear Terrorism

There is a concern that having more nuclear power reactors would increase the risk of terrorists getting their hands on the material required to make a nuclear weapon and so causing the level of death and misery achieved with the Hiroshima and Nagasaki bombs.

Achieving such a result would face a number of hurdles. Firstly, the terrorists would need to assemble a small group of physicists, engineers, chemists, metallurgists and explosives specialists. They would not have to be experienced nuclear weapons makers but could rely on what is available in the open scientific literature; however, the more relevant their background, the more smoothly the operation would proceed. There would then be the job of obtaining all the required equipment. Some of this would be difficult to acquire and in some cases would likely arouse suspicion.

And finally there is the acquisition of the required nuclear material. Either highly enriched uranium or plutonium would fit the bill. Virtually all nuclear power reactors use only lightly enriched uranium, and as long as plutonium produced in the fission process remains in the spent fuel, the level and type of radioactivity prevent it from being diverted to bomb making. Plutonium only exists separately where it is awaiting reprocessing into new fuel. This is carried out in Britain, France, India, Japan and Russia, while the US, at least up until now, has opposed it because of proliferation concerns. Highly enriched uranium and plutonium are only used as fuel in fast reactors. At the moment there are only a few in operation and a similar number planned for China, India and Russia. Relatively large amounts of material would be required, and any theft would be unlikely to go unnoticed. The ensuing massive manhunt would mean that the time between theft and detonation would need to be fairly short.

The risk can be reduced in a number of ways. At the political level, the US is presently pursuing a course in the Middle East which should undermine the position of Jihad fascism, the main terrorist threat. At the time of writing, mainstream Islamists have already been brought into a democratic political process in Afghanistan, Iraq and Lebanon, with Egypt not far off. At the same time, the US-induced Israeli withdrawal from the West Bank is now an inevitable event just waiting to happen. At the regulatory level, it is a matter of ensuring that there is an adequate reviewing process to detect any weaknesses in the internationally agreed arrangements for the storage and handling of nuclear material. New technologies can also play a role. For example, there is talk of reactors with tamper-proof fuel which is returned to special facilities for storage, disposal or reprocessing.

Concluding Comments on Nuclear

Nuclear power could play an important although not dominant role in energy production during this century by simply relying on conventional uranium resources and moderate increases in the amount of energy extracted from each tonne of uranium. Playing a major role both in this time frame and in the longer term will depend on the adoption of new technologies such as breeder or thorium powered reactors and seawater extraction of uranium.

Given the health-giving qualities of economic growth and affluence, there is a limit to how much heed we should take of remote risks from nuclear power, if it otherwise makes economic sense.

Geothermal Energy

Beneath a relatively thin, cool outer layer, the earth is a furnace with a central core as hot as the sun. This is due mainly to the initial heat from gravitational collapse when the earth was formed some 4.5 billion years ago and to the ongoing radioactive decay of potassium, thorium and uranium. The amount of heat beneath our feet is so great that the ability to exploit even a small fraction of it would belie any doubts about our ability to vastly increase the level of energy consumption.

To date, exploitation of the resource has been confined to its hydrothermal and geoexchange forms. The hydrothermal resource is the subterranean store of heated water and steam, and is the more important of the two. It is more readily exploited the closer it is to the surface and the higher the heat gradient, i.e., the increase in temperature with each unit increase of depth. It is mainly located near where the tectonic plates meet, where considerable volcanic activity places magma at higher than usual levels. The main areas are in New Zealand, Japan, Indonesia, the Philippines, the western coastal Americas, the central and eastern parts of the Mediterranean, Iceland, the Azores and eastern Africa.[528]

Hydrothermal electricity generating capacity is about 9,000 MWe.[529] This modest contribution is equivalent to 10 to 15 coal or nuclear power plants. Six countries are responsible for over 80 per cent of capacity, with the USA and the Philippines well out in front.[530]

Where the water is above 150°C, steam is created which can be fed directly into a turbine connected to a generator. If the temperature is between 100°C and 150°C, electricity can still be generated using binary plant technology. The geothermal water heats, through a heat exchanger, a secondary working fluid (isobutane, isopentane or ammonia) which vaporizes at a lower temperature than water. The working fluid's vapor turns the turbine and is condensed before being reheated by the geothermal water, allowing it to be vaporized and used again in a closed loop.[531]

Undertaken on a similar scale to electricity generation is direct use of the heat.[532] This is primarily for space heating. For example, in Reykjavik in Iceland pipes carry hot water for tens of kilometers to homes and other buildings. The resource can also be put to a range of industrial uses such as drying food crops, lumber, and bricks, heating fish ponds and greenhouses, and pasteurizing milk.

Geoexchange systems or heat pumps also provide space heating by taking advantage of the fact that the ground immediately under the surface stays at a fairly constant temperature all year round even while the temperature above changes with the seasons. In the temperate regions the ground temperature stays between 10 and 16 degrees Celsius (50 to 60 degrees Fahrenheit). By circulating water or some other fluid through pipes, thermal energy is extracted from the ground during the coldest times of the year and deposited in the ground during the hottest times. Pipes can be buried vertically, if the ground is not too rocky, or, if space permits, horizontally in shallow trenches a couple of meters underground. While the system requires electric power, this is only needed to move the heat rather than produce it. As a result it delivers 3 to 4 times more energy than it consumes.[533] Currently the use of this technology is quite limited. Just over half a million systems have been installed, of which about half are in the US.[534]
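
To see what the 3-to-4 ratio (the pump's coefficient of performance) means in practice, here is a minimal sketch for a hypothetical household; the 30 kWh daily heating load is an assumption chosen purely for illustration:

    # Geoexchange arithmetic: a heat pump with a coefficient of performance (COP)
    # of 3 to 4 delivers that many units of heat per unit of electricity, the
    # balance being drawn from the ground.
    heat_demand_kwh = 30.0                       # assumed daily winter heating load
    for cop in (3.0, 4.0):
        electricity_kwh = heat_demand_kwh / cop  # consumed by the pump
        from_ground_kwh = heat_demand_kwh - electricity_kwh
        print(cop, round(electricity_kwh, 1), round(from_ground_kwh, 1))
    # COP 3: 10.0 kWh of electricity moves 20.0 kWh of ground heat
    # COP 4: 7.5 kWh of electricity moves 22.5 kWh of ground heat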

While the hydrothermal and geoexchange resources have the potential to grow and continue to be significant sources of energy in some regions, they could never be major players. If the nether regions are to perform that role, we will need to exploit much larger sources of heat. At this stage, hot dry rock in the earth's crust is technologically within reach. Further down the track we should be able to tap into the pockets of extremely hot molten rock - or magma - which are widespread throughout the earth's crust, and also the area beneath the crust called the mantle, which begins 5 to 10 kilometers beneath the sea floor and 20 to 70 kilometers beneath the continents.

The approach being developed to exploit hot dry rock involves creating a man-made reservoir by drilling a deep well bore down into high-temperature, low-permeability rock and then forming a large heat exchange system by hydraulic or explosive fracturing. Water is then injected into this original well and retrieved from one or more production wells after circulating through the fractured rock. As with the hydrothermal resource, the hot water or steam can be used to generate electricity or to supply combined heat and power systems.

The technology has been mainly developed at the Hot Dry Rock Test Facility in Fenton Hill, New Mexico. A hot dry rock reservoir was successfully created which generated thermal energy continuously at a rate of about 4 MW in two test phases lasting 112 and 55 days. About 10 per cent of the power produced was consumed by the injection and production pumps.[535]

While trials such as these have proven the concept, a great deal of development is still required to make it commercially viable on a wide scale. The necessary advances include: (1) inexpensive high-temperature hard-rock drilling techniques; (2) improvements in three-dimensional rock fracturing; (3) mastery of methods for maintaining low-impedance fluid circulation through the fracture system; and (4) improvements in power generation methods appropriate to water at temperatures considerably lower than those in fossil or nuclear powered plants.

Drilling costs account for one third to one half of the total costs of a geothermal project[536] and the cost of reaching depths of between 5 and 10 kilometers has to be reduced significantly for hot dry rock to compete with other energy sources. At the moment, costs shoot up dramatically as those depths are approached. Basically, drilling has to be faster and less prone to breakdowns under increasingly hostile conditions. The prospects for improvement seem quite good.

To begin with, the sharp end of the system can be improved in a number of ways. Drill bits can be made of new, harder materials that allow them to operate at much higher rotary speeds and weight-on-bit loads. Or a bit that simply grinds through hard rock can be replaced by one that shatters it, possibly assisted by applying heat to create thermal stresses. Downhole motors can be developed that apply more power to the bit than the traditional rotary power transmitted from the surface. Improvement can also follow from basic research into the physical and chemical processes associated with penetrating rock.

The development of so-called smart drilling should also make a big difference. This will involve a high-speed broadband data link to the drill bit, where sensors will report in real time on the conditions around and ahead of the bit, enabling the operator to avoid problems and maximize the drilling rate. Real-time knowledge of drilling conditions such as the strength and composition of the rock will allow appropriate changes to be made in weight on bit and drill speed. Knowledge of the precise location of the drill bit will mean it can be steered around undesirable zones. And information on the state of the entire drilling unit - including wear of tools, the state of other mechanical components and the flow of coolant - would allow timely corrective action. Expected advances in computer science and miniaturization should be able to provide this technology.

The energy content of hot dry rock is huge. It is everywhere under the earth's surface, although more accessible in some places than others because of differences in the thermal gradient.

While the average thermal gradient is around 25°C/km,[537] about 11 per cent of the land area is classified as high grade, with gradients substantially above normal.[538] In these areas, rocks hot enough for electric power generation - usually taken to be at least 150°C but preferably higher - can be found at depths of less than 5 kilometers. Lower grade resources would require drilling to depths of up to 10 kilometers. Heat mining for direct uses such as space heating can start at much shallower depths.
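
The relationship between gradient and drilling depth is simple arithmetic. A sketch, assuming a mean surface temperature of 15°C (an illustrative figure):

    # Depth required to reach rock at 150C for various thermal gradients.
    surface_c = 15.0                      # assumed mean surface temperature
    target_c = 150.0                      # minimum for electric power generation
    for gradient in (25.0, 40.0, 60.0):   # C per km: average, high grade, higher still
        depth_km = (target_c - surface_c) / gradient
        print(gradient, round(depth_km, 1))
    # 25C/km: 5.4 km; 40C/km: 3.4 km; 60C/km: 2.2 km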

Armstead and Tester have identified an energy resource of 105 million quads.[539] This is their estimate of the resource in rock with temperatures greater than 85°C, down to a depth of 10 kilometers, lying beneath the 100 million square kilometers of land area not covered by ice or mountain ranges.[540] Of this resource, 26.5 million quads are moderate to high grade (a gradient higher than 40°C/km) while 78.5 million quads are low grade.[541] The total is a bit under a quarter of a million times the 2004 level of energy production of 445 quads (or 470 EJ). Current production is equivalent to the average energy beneath 400 square kilometers, in other words a square with 20 kilometer sides.
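
These magnitude comparisons can be verified directly:

    # Checking the comparisons between the resource and annual energy production.
    resource_quads = 105e6    # Armstead and Tester's estimate
    annual_quads = 445.0      # 2004 world energy production
    land_km2 = 100e6          # land area underlying the estimate

    print(round(resource_quads / annual_quads))
    # ~236,000: a bit under a quarter of a million

    quads_per_km2 = resource_quads / land_km2        # ~1.05 quads per km2
    area_km2 = annual_quads / quads_per_km2
    print(round(area_km2), round(area_km2 ** 0.5))
    # ~420 km2: a square with sides of about 20 km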

It is important to keep in mind that energy losses would be larger when using geothermal rather than fossil resources. This would be the case both in the direct use of hot water for washing and heating and in the creation of secondary forms of energy such as electricity and transport fuel. In the case of electricity generation, there would be lower thermal efficiency because of the lower temperatures at which the conversion takes place. Until we can easily extract heat at depths greater than 10 kilometers, the temperature will always be far lower than that created by the burning of fossil fuel. In the production of hydrogen as a transport fuel, via electricity production or some other method, the energy loss will always be far greater than in the conversion of crude oil or gas to refined fuel.
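
The efficiency point can be illustrated with the Carnot limit, the theoretical ceiling for any heat engine, which is 1 - Tcold/Thot in absolute temperatures. The temperatures below (fossil-fired steam at around 550°C, geothermal water at around 200°C, heat rejected at 30°C) are assumptions for illustration; real plants of both kinds fall well short of the limit, but the relative penalty is similar:

    # Carnot ceiling on conversion efficiency at different source temperatures.
    def carnot_limit(hot_c, cold_c=30.0):
        hot_k = hot_c + 273.15
        cold_k = cold_c + 273.15
        return 1.0 - cold_k / hot_k

    print(round(carnot_limit(550.0), 2))   # fossil-fired steam: ~0.63
    print(round(carnot_limit(200.0), 2))   # geothermal water:   ~0.36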

So, how long would the hot rock last? If we magically switched to 100 per cent reliance on it tomorrow and our energy consumption increased annually by 2 per cent, it would last almost 400 years, on the assumption that two quads were required to replace one quad from fossil fuel because of the greater energy losses. If three quads were required, the resource would last 370 years. Employing the two quad assumption, the resource would last over 17,000 years if consumption increased annually by 2 per cent until 2100 (providing a 6.7 fold increase in annual output) and then remained constant. (It is over 11,000 years with three quads.) Using the two quad assumption again, just 1 per cent of the resource would last over 160 years with a 2 per cent growth rate. (It is 140 years with three quads.) This would reduce the average temperature of the rock by only 0.5°C, given that 1°C of cooling provides 0.00215 quads of energy from every cubic kilometer.[542]
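
The depletion horizons in the first two sentences come from a geometric series: with first-year use equal to 445 quads times the replacement ratio and 2 per cent annual growth, cumulative use after n years is first_year x (1.02^n - 1)/0.02. A sketch reproducing the figures:

    # Years until 105 million quads are exhausted, if first-year geothermal use
    # is 445 quads times the replacement ratio and grows 2 per cent annually.
    import math

    resource = 105e6
    base = 445.0
    g = 1.02

    for ratio in (2.0, 3.0):
        first_year = base * ratio
        years = math.log(resource * (g - 1.0) / first_year + 1.0) / math.log(g)
        print(ratio, round(years))
    # 2 quads: ~392 years ("almost 400"); 3 quads: ~372 years ("370")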

The area that would need to be exploited at any time depends not only on the output but also on the drawdown rate. If, for example, the regions being exploited were cooled annually by an average of one twentieth of a degree Celsius to a depth of 10 kilometers, a total of 413,953 square kilometers would be required to provide 445 quads per year.[543] This is equivalent to a square with 643 kilometer sides and is less than 3 per cent of the area of crop land. (Here no allowance is made for the greater energy conversion losses compared with fossil fuels.)
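
The area figure follows directly from the heat content per cubic kilometer:

    # Land area needed for 445 quads a year if the rock beneath it, down to
    # 10 km, is cooled by one twentieth of a degree Celsius each year.
    quads_per_km3_per_degc = 0.00215   # energy from 1C of cooling (cited above)
    depth_km = 10.0
    drawdown_c_per_year = 1.0 / 20.0
    annual_quads = 445.0

    yield_per_km2 = quads_per_km3_per_degc * depth_km * drawdown_c_per_year
    area_km2 = annual_quads / yield_per_km2
    print(round(area_km2), round(area_km2 ** 0.5))
    # ~413,953 km2, a square with ~643 km sides, matching the text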