Apr 22, 2022
55.761° 60.746°

The Skeletal Remains of the Nuclear Anthropocene

An alarming impact or a mere perturbation? Comparing the approaches to radioactive fallout in US and USSR public health, radiation historian and archival vagabond Kate Brown retraces the ways in which radiobiologists and ecologists assessed, rated, and contorted radioactive contamination in studying the resilience of ecosystems and human bodies. Brown contends that even buried human bones contain persistent markers from the mid-twentieth century.


One day in 1951, a dozen prisoners checked into a Gulag medical unit at the Maiak plutonium plant in the Southern Russian Urals. Dr. Angelina Gus’kova, a young medic, found the prisoners were nauseous and vomiting. Gus’kova treated them for food poisoning and sent them back to work. A few weeks later, the men returned complaining of weight loss, fevers, and internal bleeding. This time, Gus’kova learned that the men had dug trenches in highly radioactive soil near the plant’s radiochemical processing factory. Monitors went to the site and took a count. They estimated that three of the men had received about 600 rem, or six gray (today known to be a fatal dose for fifty percent of the population). Gus’kova turned to her patients with renewed interest. The medical staff took blood samples, performed neurological assessments, and looked at cardiac and circulatory functions. The dozen incarcerated men served as some of the first human indicators of exposure to ambient radioactive waste. Their exposures differed from the more direct, easier-to-measure effects of gamma radiation from x-rays or nuclear bombs. During the same years that American officials were denying the existence of radioactive fallout in Japan and at the Pacific and Nevada test sites, Gus’kova and her colleagues began to study with abandon the indirect exposures to radioactive fallout that redounded from environments into human bodies.1 In the Urals, the year 1951 indexes a moment when human bodies became intelligible markers of the dawning nuclear age.

  • Radiation danger warning signs and barbed wire at the river Techa, near the Maiak plant, in 2007. Photo courtesy Ecodefense, Heinrich Boell Stiftung Russia, Alla Slapovskaya, Alisa Nikulina, Wikimedia

The controversy over Chernobyl deaths (with estimates ranging from 33 to 90,000 fatalities) indicates that there is still no scientific consensus on radioactive fallout. Why, if the impact of radioactive fallout was so readily visible to Soviet doctors in a Gulag clinic using elementary equipment, has it taken so long for scientists and the public elsewhere to recognize radioactive fallout as a defining feature of landscapes and human bodies? Uncertainty can be generated as deftly as knowledge.2 How did the uncertainty over the existence of radioactive fallout come about? If not an indicator of damage, what did the spread of radioactive fallout signify to scientists who studied it? To answer these questions, it helps to look across the Cold War divide to see the differences in the approach to radioactive fallout in the US and USSR. So, back to the Urals.

Prisoners carting radioactive waste grew listless, lost their appetites, and suffered from severe anemia. Many of them died. In the Soviet system, prisoners were expendable, but when educated young female workers in the radiochemical plant showed similar symptoms, and when a few of them died in their twenties, the medical staff grew yet more alarmed. They puzzled over these deaths. They knew their patients worked in radioactive environments, but after the first few years of production, radiation measurements at the plutonium plant quickly became state secrets. Security officials would not disclose to the doctors ambient radiation levels or their patients’ estimated doses. The doctors were at an impasse. Their job was to keep workers safe. How could they do that working blind?

Gus’kova, trained as a neurologist, drew on a long tradition in Russian science, starting with Pavlov and his dogs, of looking to the central nervous system for signs of health problems. She and her colleagues reasoned that radioactive toxins would show up first as damage to the vulnerable central nervous system, before other organs.3 They learned to detect the effects of radiation on nerves at very low doses. They also noticed chromosomal breakages in the quickly reproducing bone marrow cells of exposed patients. They figured out how to estimate, from the extent of cell damage, the doses of radioactivity their patients had received. They performed autopsies. Turning bones to ash, they used a gamma-ray spectrometer to detect and measure radioactivity embedded in bones. As they worked, Gus’kova and her colleagues used their patients’ bodies as biological barometers. They came to estimate their patients’ doses from bodily indicators—the marks radioactive contaminants left behind on the body.

Exposed employees might look and feel fine, but changes in blood cells could be sharply telling. One clinician remembered looking at a blood smear of a woman exposed in a criticality accident, a self-sustaining neutron chain reaction. She was appalled to find, instead of a multitude of white blood cells, only one lone lymphocyte swimming on the glass.4 When doctors recorded alarming cellular changes in a worker, they requested that the employee be removed from contaminated jobsites. In 1953, Gus’kova coauthored a book called Radiation Sickness in Man. For twenty years, the publication appeared only in classified editions, circulating in restricted libraries. Little of this information was shared abroad because Soviet security officials considered radiation medicine an important Cold War secret for surviving nuclear war.5

American leaders at the same time also grew concerned about the problem of radioactive fallout, which they had publicly denied since the 1945 nuclear bombings of Japan. In 1952, the Atomic Energy Commission secretly contracted with the RAND Corporation to examine the ecological impact of radioactive fallout, especially on soil and water.6 This project grew in the following years into a worldwide collection of agricultural produce and human bones. The resulting Sunshine Report concluded that it would require the detonation of 3,000 bombs in one growing season to cause damage to humans from crop contamination.7 Case closed: once US officials acknowledged radioactive fallout, they determined it was not a concern for human health. At the same time that Soviet researchers were growing increasingly worried about spreading radioactive contaminants, American researchers concluded there was no cause for anxiety. In fact, radiation was, in the minds of many officials in the Atomic Energy Commission, a fantastic tool for medicine, health, and ecology.8

Historians such as Joel Hagen, Laura Martin, and Helen Curry have explored how American scientists studied the impact on desert and tropical landscapes of the radioactive isotopes released in producing and testing nuclear weapons.9 Scientists tracked radioactive cesium and strontium and watched how they attached to organism after organism with incredible alacrity. While this news might have caused alarm for humans at the top of the food chain, it didn’t. Rather than thinking in terms of damage, researchers transformed the paths of radioactive chemicals into arrows on flow charts depicting a caloric flow of energy. Following the traces of radioactive energy, American scientists saw connections in what they began to elaborate as an “eco-system.” Describing movement—or metabolism—through a system became more important, Hagen points out, than understanding individual organisms within it or the damage caused by manmade radioactivity.

Hagen and Martin describe how, after working at test sites, American scientists themselves began to deploy radioactive sources to destroy discrete environments in order to watch organisms rebuild. As they did, they demarcated their study boundaries according to the spread of radioactive isotopes.10 In the field of ecology, they conceived and named the concept of “homeostasis.” Because ecosystems involve organisms linked and working together, they theorized, an ecosystem in a fluctuating environment remains stable and reaches equilibrium. Howard Odum, studying ecosystems under stress, looked for “perturbations.” He noticed after irradiation a reduction in biodiversity similar to that following other types of disturbances, such as fires and floods. After 1951, we are now well aware, perturbations accelerated and caused long-term damage, but that was not the conclusion Odum and most of his colleagues in radioecology drew, even when working in disaster zones like the Marshall Islands, including Bikini Atoll. Odum conjectured optimistically that all organisms are capable of maintaining internal stability. An irradiated forest healed itself by spreading new trees. Each successional species was a step in recovery. The healing properties of a forest were similar to those of the human body, he conjectured. All share a common self-regulatory property. In sum, Laura Martin argues that destruction as a method of study crystallized homeostasis as a concept in biology.11

Some scientists, such as George Woodwell, argued that homeostasis, while important, could be impermanent. Some species could multiply crazily; others could disappear altogether. Woodwell argued that self-righting systems could be pushed too far, like the flipping of a lake into toxic eutrophication. But Woodwell was for many decades an outlier. Instead, the concept of homeostasis levitated to a planetary level. James Lovelock and Lynn Margulis ran with the idea in the 1970s. They argued that adaptation and cooperation between species occurred on a global level. Lovelock branded earth-scale homeostasis the “Gaia hypothesis,” asserting that the biosphere is an “active adaptive control system able to maintain the Earth in homeostasis.” Lovelock embraced homeostasis as an evolutionary tool, suggesting, for example, that coral reefs are “adaptations” to enhance earth’s “fitness” by controlling global temperature.12 Leah Aronowsky shows that Lovelock supported his research in part with funding from the Shell Oil Company. Lovelock had a personal relationship with his sponsors. He wrote affirming and soothing letters to Shell executives, extolling the self-righting properties of earth systems at the same time that Shell and other oil companies were digging deeper into oil and gas reserves while becoming aware of global warming.13

For many scientists and non-scientists, the “Gaia” resiliency of earth systems came as a form of affirmation of human actions, one that slowed the recognition that human activities were changing organisms, stressing earth systems, and bending or breaking the arrows on the metabolic flow charts. As such, very few established scientists in the United States asked the next question: what happens when radioactive isotopes moving through ecosystems (self-righting or not) enter and mark human bodies?

During the same years, this question preoccupied Soviet researchers. From 1966 to 1970, a Soviet radiobiologist, A. N. Marei, led a team that traveled through the Pripyat Marshes straddling the border of Ukraine and Belarus. They measured radioactive cesium and strontium, which they claimed came from global fallout. In 1962–63, the two superpowers blasted the earth in a last-minute race to discharge bombs before the 1963 atmospheric nuclear test ban treaty went into effect. That last grand finale of radioactive fireworks emitted an eye-popping twelve billion curies of radioactive iodine into the Northern Hemisphere and left its mark widely, even on fine French wine.14

Marei’s team found that the swampy, sandy soils of the Pripyat Marshes were the most conducive of any soil type to transmitting radioactive isotopes into the food chain. Swamps in conditions of continual re-saturation accumulate peaty soils that are rich in organic substances but poor in minerals. Plants searching for potassium, iodine, calcium, and sodium readily take up radioactive cesium, iodine, strontium, and plutonium, which mimic these minerals. Marei found that the indigenous berries, mushrooms, and herbs of the marshes showed a very high transfer coefficient of radioactive nuclides from soils to plants. His team also discovered that seasonal floods spread radioactive contaminants “in a mosaic pattern” to places where floodwaters surged. As the boggy soils delivered radionuclides to plants, grazing farm animals magnified radioactive elements in the milk they produced. For Marei, the pathway was clear: water, soil, plants, animals, milk, humans.15
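Marei’s “transfer coefficient” reasoning can be read as a simple chain of ratios: the concentration of a radionuclide in a plant is the soil concentration multiplied by a soil-to-plant transfer factor, and the concentration in milk follows from the cow’s daily intake of contaminated feed multiplied by a feed-to-milk coefficient. The short Python sketch below only walks through that arithmetic; every numerical value in it is an illustrative assumption, not a figure from Marei’s surveys.

```python
# Illustrative sketch of the soil -> plant -> milk -> human pathway Marei traced.
# All numbers are hypothetical placeholders, not measurements from Marei's study.

soil_cs137_bq_per_kg = 1000.0    # cesium-137 in peaty marsh soil (assumed)
soil_to_plant_tf = 0.5           # soil-to-plant transfer factor, (Bq/kg plant)/(Bq/kg soil);
                                 # mineral-poor boggy soils push this ratio up (assumed value)

pasture_bq_per_kg = soil_cs137_bq_per_kg * soil_to_plant_tf

cow_intake_kg_per_day = 40.0     # fresh pasture a cow eats per day (assumed)
feed_to_milk_coeff = 0.005       # fraction of daily cesium intake appearing in each liter of milk (assumed)

milk_bq_per_liter = pasture_bq_per_kg * cow_intake_kg_per_day * feed_to_milk_coeff

milk_consumed_l_per_day = 2.0    # daily milk consumption per adult (assumed)
person_bq_per_day = milk_bq_per_liter * milk_consumed_l_per_day

print(f"pasture: {pasture_bq_per_kg:.0f} Bq/kg")
print(f"milk:    {milk_bq_per_liter:.0f} Bq/L")
print(f"person:  {person_bq_per_day:.0f} Bq/day from milk alone")
```

The point is not the particular numbers but the multiplication of ratios: a modest enrichment at each link compounds along the chain from water and soil to milk and, finally, to people.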

Quizzing villagers, Marei discovered that swamp dwellers’ diets consisted almost exclusively of wild game, berries, mushrooms, and milk—lots of milk, two liters a day for adults. Nearly everything the villagers ate contained man-made radioactivity. Marei’s team ran a thousand people through whole-body counters. The scientists recorded levels of cesium-137 in villagers that were ten to thirty times greater than the cesium-137 measured in people in nearby Minsk and Kyiv.16 The ingested cesium-137 would still be evident, Marei wrote, in Polesians’ bodies in the year 2000. Even so, Marei concluded cheerfully, 78 nanocuries (nCi), or about 2.9 kBq, in a body was not dangerous. In the same years, American scientists measured 3,000 nCi (about 110 kBq) in Alaskan Eskimos exposed to fallout. The Americans showed no great alarm.17 Marei, likewise, concluded in his censored 1974 publication that Polesians required no protective measures. Only if cesium-137 in local soils escalated would they need protection. “And that scenario in our country,” Marei brightly projected, “is highly unlikely.”18

In 2004, Gary Hancock et al. argued that radioactive anthropogenic nuclides, especially plutonium-239 (with a half-life of 24,110 years), released in nuclear tests between 1945 and 1980, provide excellent chronological markers in ice, sediment, and coral.19 I contend that if scientists were to look into buried human bones, they would also find persistent markers of plutonium from mid-century, and with them some clues to the impact of radioactive contaminants in the food chain and air streams.
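The figures in this passage can be checked with two standard constants: one curie equals 3.7 × 10^10 becquerels, and cesium-137 has a physical half-life of roughly 30 years (plutonium-239’s 24,110 years is cited above). What follows is a minimal sketch of that arithmetic; the 75-year interval is only a stand-in for “mid-century to the present.”

```python
BQ_PER_CURIE = 3.7e10  # definition: 1 Ci = 3.7e10 Bq

def nci_to_kbq(nanocuries: float) -> float:
    """Convert nanocuries to kilobecquerels."""
    return nanocuries * 1e-9 * BQ_PER_CURIE / 1e3

print(f"78 nCi   = {nci_to_kbq(78):.1f} kBq")    # ~2.9 kBq, as in Marei's figure
print(f"3000 nCi = {nci_to_kbq(3000):.0f} kBq")  # ~111 kBq, i.e. the ~110 kBq Alaskan value

def remaining_fraction(years: float, half_life_years: float) -> float:
    """Fraction of a radionuclide surviving physical decay alone."""
    return 0.5 ** (years / half_life_years)

# Cesium-137 deposited around 1970 loses only about half its activity by 2000; with
# continued intake from the contaminated marshes, that is why Marei could expect it
# to show up in Polesians' bodies at the turn of the century.
print(f"Cs-137 after 30 years: {remaining_fraction(30, 30.2):.0%} remains")

# Plutonium-239 barely decays on human timescales, which is what makes bomb-test
# plutonium useful as a chronological marker in ice, sediment, coral, and, arguably, bone.
print(f"Pu-239 after 75 years: {remaining_fraction(75, 24110):.2%} remains")
```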

After the 1963 nuclear test ban treaty, anti-nuclear activism in the United States died down despite mounting evidence of disturbing increases in thyroid cancers and leukemias in areas downwind of US nuclear test sites.20 From the 1950s, cancer rates climbed nationally, especially among children. Despite the many possible anthropogenic causes of rising cancer rates, a belief in the self-righting properties of bodies, of contaminated ecosystems (e.g., a “thriving” Chernobyl zone), and of planetary systems in general held sway until the end of the twentieth century.21 The clearly detectable saturation of the Northern Hemisphere with radioactive fallout ironically served not as an alarm about anthropogenic change and the possible harm it was causing, but as a mollifying balm about the resiliency of earth systems.

Kate Brown is the Thomas M. Siebel Distinguished Professor in History of Science in the Program in Science, Technology and Society at MIT. Her research interests illuminate the point where history, science, technology, and bio-politics converge to create large-scale disasters and modernist wastelands.

Please cite as: Brown, K (2022) The Skeletal Remains of the Nuclear Anthropocene. In: Rosol C and Rispoli G (eds) Anthropogenic Markers: Stratigraphy and Context, Anthropocene Curriculum. Berlin: Max Planck Institute for the History of Science. DOI: 10.58049/adh6-rv07