Category Archives: U.S.

Wordcatcher Tales: Datsu-A Nyuu-Ou vs. Datsu-Bei Nyuu-A

One of the frequent catch-phrases in Japanese foreign policy discussions these days is 脱米入亜 datsu-Bei nyuu-A ‘leave America join Asia’, one of many trial balloons floated by the new DPJ-led government. This phrase (r)evokes an older formulation attributed to one of the most avid Westernizers of the Meiji era, Fukuzawa Yukichi, who must hold the world record for Sinographic neologisms. (One of the neologisms sometimes attributed to him is minshushugi [people-master-ism] ‘democracy’.) His policy prescription for Japan in the late 19th century was 脱亜入欧 datsu-A nyuu-Ou ‘leave Asia join Europe’.

How feasible for Japan is 脱米入亜 datsu-Bei nyuu-A ‘leave America join Asia’? Kyushu-based blogger Ampontan is translating and hosting a series of columns by Shimojo Masao, one of Japan’s top specialists on Korea (whose second language is Korean), who weighs in on the issue. Here is Ampontan’s translation of Shimojo’s first column, in its entirety.

The Preconditions for an East Asian Entity

There has been a change of government in Japan for the first time in half a century, and a Democratic Party of Japan administration has taken power under the leadership of Hatoyama Yukio. Among his policy initiatives, the concept of an East Asian entity or community similar to the European Union is receiving widespread attention. The alliance with the United States has been the cornerstone of international relations for Japan since the Liberal Democratic Party came to power. People are discussing whether the change of government might mean Japan has chosen to turn away from the U.S. and place a greater emphasis on Asia.

A full understanding of the distinctive historical characteristics of East Asia is required before embarking on such a course, however. While Japan, the Korean Peninsula, and China on the continent are close geographically, the history of their social systems is different. They have less in common than the members of the European Union, which had shared Christian beliefs and intermarriage of the ruling classes.

In Japan’s case, a social system that incorporated regional authority was formed after the establishment of the Kamakura Shogunate in the 12th century, and the foundation of a market economy was created. That is why Japan, with a system closely resembling capitalism, was quickly receptive to Western civilization after the Opium War of 1840.

In contrast, a system of centralized authority was maintained in China and on the Korean Peninsula despite the arrival of modernization. For many years, they had what amounted to planned economies. The history of Japan vis-à-vis China and the Korean Peninsula is that of relationships similar to the one between the United States and the Soviet Union during the Cold War.

The achievement of an East Asian entity depends on whether Prime Minister Hatoyama is possessed of the awareness of those historical differences and the insight to perceive what is necessary to overcome them.

Filed under Asia, China, Europe, Japan, Korea, U.S.

American Independence & Chinese Silver Imports

The June 2009 issue of Journal of World History has an enlightening bit of historical revisionism by Alejandra Irigoin entitled The End of a Silver Era: The Consequences of the Breakdown of the Spanish Peso Standard in China and the United States, 1780s–1850s (Project MUSE subscription required). Here are her conclusions (pp. 238-239).

This article argues for revision of traditional views of the global silver trade with China in the late eighteenth and early nineteenth centuries. Section I shows that the existing historiography tends to ignore that silver imports into China continued for longer than normally acknowledged and at increased levels up to the 1820s. New evidence shows that the structure of the silver trade changed substantially when US merchants became central intermediaries between Spanish American silver “producers” and Chinese “consumers,” when Chinese imports of silver consisted increasingly of Spanish American coins, the so-called pillar and bust dollars.

Section II explores the role of Americans as intermediaries who increased trade with Spanish America in order to obtain silver coins needed to trade with China. The timing of the flow of silver out of China to pay for opium purchases is challenged, as is opium as a cause for the desilverization of China. This article also questions received wisdom that reduction in the supply of silver owing to Spanish American independence was the root cause of silver scarcity in China in the early nineteenth century. This received wisdom ignores a fundamental fact: Spanish America itself was a significant reservoir of silver coins in the world. Thus, (relatively minor) interruptions in the production of silver—at different points in time and in distinct places—in South America during Independence were unlikely to account for supply shortages in China, and continued exports of silver into the United States confirm this view. Hence, the fall in Chinese silver imports must be a function of demand-side forces in addition to supply-side problems.

Spanish American independence presented a different problem to the global economy. The Spanish Empire broke up into a multitude of distinct states in the wake of independence, each fiscally and monetarily autonomous. In other words, the largest monetary union of the premodern world had collapsed. The resulting fragmentation of coinage and seigniorage across postindependent Spanish America terminated a silver standard that had organized international trade throughout the early modern world, East and West and in between. New republican governments, especially in regions with silver endowments, took over mint houses in the service of local and regional interests. Coins minted in various mint houses began to diverge in quality and fineness, whereupon the universal standard of the Spanish silver peso was definitively lost.

Section IV advances the central argument of this paper, namely that Chinese demand for silver, at least since the late eighteenth century, involved demand for a certified and reliable means of payment, as opposed to silver in some generic sense. “Good” colonial Spanish American coins traded at a premium over the sycee [ingot] equivalent, clearly confirming this point. Fragmentation of the Spanish monetary standard after independence had a devastating influence on Chinese demand. The impact of Spanish American independence on China’s economy operated through deterioration of coin quality, not through quantities of silver per se. By contrast, the United States used Spanish dollars as legal tender under the control of central monetary authorities, thereby succeeding in keeping new peso coins in circulation for a decade or more.

The end of the silver standard following independence in Spanish America during the 1810s and the 1820s had major consequences for development of the global economy before the gold standard. On one hand, termination of the silver era contributed to the poor economic performance of the Chinese economy. A lack of high-quality, reliable Spanish pesos between the 1820s and the 1850s, rather than insufficient silver mining, largely explains the fall in Chinese silver imports. Hence, I argue that the Chinese silver trade in these decades was demand-side rather than supply-side (mining) driven. Consequences for the internal market in China were manifold, including increased transaction costs, fragmentation of markets, and credit shortages. On the other hand, the United States reacted differently—and with a different timing—to termination of the silver standard. Immediate detrimental effects were weathered by workings of a well-integrated banking system, a quasi–monetary authority, and assay by the mint. Ultimately, this article poses an important comparative question for economic historians: in light of the US response, why did the Chinese empire never monopolize seigniorage, and why did it fail to provide reliable control of its currency system in the face of high costs for the domestic Chinese economy? Answers fall well beyond the scope of this article, of course, but the question should at least be framed in a global context.

Filed under China, economics, Latin America, opium, Spain, U.S.

Pacific Annexations, 1840-1906

From Sailors and Traders: A Maritime History of the Pacific Peoples, by Alastair Couper (U. Hawai‘i Press, 2009), pp. 140-141:

The managers of the major merchant companies based at the main entrepôts in the [Pacific] islands were often ex-sailors. Several acted as consuls for their governments and supported the companies in many ways, including evoking gunboat diplomacy. A prime example is John Bates Thurston. He served at sea in the island trades, was wrecked at Rotuma in 1865, became British consul in Fiji in 1867, was highly influential in the negotiations for the ceding of Fiji to Britain in 1874, and became governor of Fiji in 1887. The companies, the new settlers, and their sympathetic consuls pressed for annexations. The French were the first to act [but Waitangi was 1840—J.] and took Tahiti, the Marquesas, and the Tuamotus as French protectorates in 1842 and New Caledonia in 1853. These were declared colonies in 1880, and the Australs and Wallis and Futuna in 1887.

The British annexed Fiji in 1874 and established protectorates over southeast New Guinea in 1884, Gilbert and Ellice in 1892, most of the Solomons soon after, and Ocean Island in 1900. They agreed that New Zealand would exercise authority over the Kermadecs in 1887, the Tokelaus in 1889, and the Cooks and Niue in 1901. The Dutch took western New Guinea in 1848. Germany annexed northeast New Guinea in 1885, along with the Bismarck Archipelago and the northwest Solomons; took possession of most of the Carolines in 1885; and ultimately purchased Yap and other islands in the Carolines and Marianas from Spain in 1899. The Germans also acquired the Marshall Islands in 1884 and took over Nauru in 1888. Chile obtained Easter Island in 1888.

America, after its disastrous Civil War, had not recovered a significant merchant fleet and showed little inclination for acquiring Pacific territory. American guano companies had already secured legislation in 1856–1860 that allowed claims over some small Pacific islands, and the US government went on to secure others, including Baker, Jarvis, Johnson, Midway, Palmyra, and Wake. In 1893 the influential American maritime geostrategist Alfred Mahan wrote that it was “imperative to take possession, when it can be righteously done, of such maritime positions as can contribute to secure command.” In 1898, Hawai‘i was annexed (US citizenships were granted in 1900), as was eastern Samoa with Pago Pago as a main naval coaling station, while Guam was captured from Spain by the US Navy in 1898.

The Pacific was now effectively divided between several colonial powers mainly by agreements. In the final carve-up, it was confirmed that Western Samoa was a German colony separated from American Samoa in the east. In turn Germany agreed to relinquish claims for Tonga. As a result, in the closing days Tonga appeared to survive as the only independent Polynesian kingdom, although not quite. It was declared a British protectorate in 1900, and in 1905 it was decreed mandatory for the king of Tonga to take advice from the British consul on all matters of importance. Finally, in 1906 New Hebrides was divided as a condominium between Britain and France.

I’m not sure why Couper omits the 1840 Treaty of Waitangi, which made British subjects of the Maori. Maybe he considered both New Zealand and Australia to be colonial powers by the 1840s, even though both were earlier annexed by another colonial power. (Like the Americas, of course.)

Filed under Britain, Fiji, France, Germany, Hawai'i, Micronesia, Netherlands, New Zealand, Pacific, Papua New Guinea, Polynesia, Spain, U.S.

Preference for Pacific Island Seafarers

From Sailors and Traders: A Maritime History of the Pacific Peoples, by Alastair Couper (U. Hawai‘i Press, 2009), pp. 102-103, 106:

As a result of continued shortages of crew, British and American ships frequently sailed shorthanded for the Pacific. The trips involved passages that were four to five months long, via the Cape of Good Hope or Cape Horn. American ships sometimes picked up a few sailors in the Atlantic Islands but generally shipowners were not unhappy with depleted crews, which reduced labor costs during these unproductive legs of voyages. Not so for the disgruntled seafarers whose lives were endangered from shortages of experienced shipmates in bad weather and when beating around Cape Horn against strong headwinds.

Arrival in the trading and whaling areas of the Pacific entailed supplementing the crew, all the more necessary because ships would lose many of the original crew during the three to four years the men were employed in the Pacific. Most losses were due to desertion….

The cautionary note on the recruitment of Samoans as sailors reflected the persistent bad reputation of those islands, arising from the massacre of the boat’s crew of La Perouse in 1787. [I believe La Pérouse was the name of the commander, not the name of his vessel.—J.] Whalers by the 1820s were likewise returning with stories of treachery and savagery experienced in parts of Melanesia and Micronesia. Such tales led to more misgivings regarding taking crew from several of these islands. The situation was different in Tahiti and Hawai‘i, where local seamen were encouraged by chiefs to serve and showed reliability even in difficult Arctic voyaging. Several Hawaiians are recorded to have been on that coast in 1788 under Captain John Meares. The New Hazard increased her crew from twenty-four to thirty-three in 1811 for voyages to the northwest coast, additions that were simply designated as “kanakas” in logbooks and journals. The ill-fated Tonquin had a Hawaiian crew of twenty-four when it was destroyed possibly by the captain after Indians boarded on the coast, and the fur trading ship Beaver took on ten “kanakas” in 1812, together with an experienced island sailor, bosun Tom. American whalers subsequently obtained most of their crews in Hawai‘i and Tahiti and also periodically at the Marquesas, the Carolines, and New Zealand….

Captains clearly preferred Pacific seafarers, who were used to compliance toward chiefs and thus unlikely to give captains trouble by demanding seafaring customary rights on board. The islanders were useful too as interpreters and understood the Pacific ways of trade. As sailors they were skillful at handling loaded boats through heavy surf when ships had to stand off and on. On whalers they acquired reputations as good harpooners and for boldness in closing on a whale. The keen eyesight of island sailors earned them the tobacco bonuses for spotting whales, and this, along with reading the signs of the sea for sudden squalls and reefs, made them invaluable as masthead lookouts.

Swimming and diving proved other important assets. Turnbull was impressed when, on approaching Hawai‘i, he encountered people a mile offshore supported only by “a thin feather-edge slice of wood.” He refers also to Hawaiians diving from topgallant yards and swimming under the ship. This skill of deep diving was employed on pearling and bêche-de-mer ships, as well as for making underwater hull repairs and clearing fouled cables. The extent to which island men and women were at home in the sea is further alluded to in dramatic rescues. Copping describes how, when the Harriet of Sydney was totally lost near Te Puna in April 1840, “the crew would have been lost also if it had not been for the Maori women on board the ship swimming them ashore.” He relates also that when his own whaleboat broached to, and he was knocked overboard and trapped under the boat, a shark “lay hold” of his shoulder, but “my harpooner a Maori jumped overboard after me.” Similarly when James Bagley fell from the topgallant crosstrees, a Hawaiian seaman, John Mowhee, dived after him and told Bagley to hold on to his shoulder until they were rescued.

For the shipowners a more compelling reason for employing Pacific seafarers was their lower costs in wages and victualing. The whaleship owner F. Parbury, who gave evidence at the British House of Lords Select Committee on the Navigation Laws, readily attested to this and expressed preferences for New Zealand (Maori) crews.

Filed under Britain, economics, Hawai'i, labor, Pacific, Polynesia, travel, U.S.

Zhao Ziyang on the “Birdcage Economic Model”

From Prisoner of the State: The Secret Journal of Premier Zhao Ziyang, trans. by Bao Pu and Renee Chiang (Simon & Schuster, 2009), Kindle Loc. 2442-56:

Comrade Hu Yaobang was similarly unenthusiastic about the planned economy. According to my observations, he believed it was the highly concentrated top-down planning model that had limited people’s motivation and creativity and restricted self-initiative at the enterprise and local levels. He believed that building a socialist society entailed allowing people, enterprises, and local governments to act independently, while the state continued to direct and mobilize them with social campaigns.

Chen Yun and Li Xiannian, however, emphasized the importance of a planned economy, especially Chen Yun, whose views had not changed since the 1950s. He included the phrase “planned economy as primary, market adjustments as auxiliary” in every speech he gave. The tone of his speeches didn’t change even after reforms were well under way. His view was that dealing with the economy was like raising birds: you cannot hold the birds too tightly, or else they will suffocate, but nor can you let them free, since they will fly away, so the best way is to raise them in a cage. This is the basic idea behind his well-known “Birdcage Economic Model.” He not only believed that China’s first Five-Year Plan was a success, but also, until the end of the 1980s, he believed that a planned economy had transformed the Soviet Union in a few decades from an underdeveloped nation into a powerful one, second only to the United States. He saw this as proof that economic planning could be successful. He believed that the reason China had not done well under a planned economy was mainly the disruption caused by Mao’s policies, compounded by the destructive Cultural Revolution. If things had proceeded as they had in the first Five-Year Plan, the results would have been very positive. In terms of foreign affairs, Chen Yun retained a deep-rooted admiration for the Soviet Union and a distrust of the United States. His outlook was very different from that of Deng Xiaoping, and there was friction between the two.

Filed under China, economics, U.S., USSR

R.I.P. Norman Borlaug: Forgotten Benefactor

The man who sparked the Green Revolution has just died. Gregg Easterbrook profiled him in the January 1997 issue of The Atlantic Monthly. Here’s an excerpt.

AMERICA has three living winners of the Nobel Peace Prize, two universally renowned and the other so little celebrated that not one person in a hundred would be likely to pick his face out of a police lineup, or even recognize his name. The universally known recipients are Elie Wiesel, who for leading an exemplary life has been justly rewarded with honor and acclaim, and Henry Kissinger, who in the aftermath of his Nobel has realized wealth and prestige. America’s third peace-prize winner, in contrast, has been the subject of little public notice, and has passed up every opportunity to parlay his award into riches or personal distinction. And the third winner’s accomplishments, unlike Kissinger’s, are morally unambiguous. Though barely known in the country of his birth, elsewhere in the world Norman Borlaug is widely considered to be among the leading Americans of our age.

Borlaug is an eighty-two-year-old plant breeder who for most of the past five decades has lived in developing nations, teaching the techniques of high-yield agriculture. He received the Nobel in 1970, primarily for his work in reversing the food shortages that haunted India and Pakistan in the 1960s. Perhaps more than anyone else, Borlaug is responsible for the fact that throughout the postwar era, except in sub-Saharan Africa, global food production has expanded faster than the human population, averting the mass starvations that were widely predicted — for example, in the 1967 best seller Famine — 1975! The form of agriculture that Borlaug preaches may have prevented a billion deaths.

Yet although he has led one of the century’s most accomplished lives, and done so in a meritorious cause, Borlaug has never received much public recognition in the United States, where it is often said that the young lack heroes to look up to. One reason is that Borlaug’s deeds are done in nations remote from the media spotlight: the Western press covers tragedy and strife in poor countries, but has little to say about progress there. Another reason is that Borlaug’s mission — to cause the environment to produce significantly more food — has come to be seen, at least by some securely affluent commentators, as perhaps better left undone. More food sustains human population growth, which they see as antithetical to the natural world.

The Ford and Rockefeller Foundations and the World Bank, once sponsors of his work, have recently given Borlaug the cold shoulder. Funding institutions have also cut support for the International Maize and Wheat Center — located in Mexico and known by its Spanish acronym, CIMMYT — where Borlaug helped to develop the high-yield, low-pesticide dwarf wheat upon which a substantial portion of the world’s population now depends for sustenance. And though Borlaug’s achievements are arguably the greatest that Ford or Rockefeller has ever funded, both foundations have retreated from the last effort of Borlaug’s long life: the attempt to bring high-yield agriculture to Africa.

The African continent is the main place where food production has not kept pace with population growth: its potential for a Malthusian catastrophe is great. Borlaug’s initial efforts in a few African nations have yielded the same rapid increases in food production as did his initial efforts on the Indian subcontinent in the 1960s. Nevertheless, Western environmental groups have campaigned against introducing high-yield farming techniques to Africa, and have persuaded image-sensitive organizations such as the Ford Foundation and the World Bank to steer clear of Borlaug. So far the only prominent support for Borlaug’s Africa project has come from former President Jimmy Carter, a humanist and himself a farmer, and from the late mediagenic multimillionaire Japanese industrialist Ryoichi Sasakawa.

Reflecting Western priorities, the debate about whether high-yield agriculture would be good for Africa is currently phrased mostly in environmental terms, not in terms of saving lives. By producing more food from less land, Borlaug argues, high-yield farming will preserve Africa’s wild habitats, which are now being depleted by slash-and-burn subsistence agriculture. Opponents argue that inorganic fertilizers and controlled irrigation will bring a new environmental stress to the one continent where the chemical-based approach to food production has yet to catch on. In this debate the moral imperative of food for the world’s malnourished — whether they “should” have been born or not, they must eat — stands in danger of being forgotten.

THE LESSON OF THE DUST BOWL

NORMAN BORLAUG was born in Cresco, Iowa, in 1914. Ideas being tested in Iowa around the time of his boyhood would soon transform the American Midwest into “the world’s breadbasket,” not only annually increasing total production — so methodically that the increases were soon taken for granted — but annually improving yield, growing more bushels of grain from the same amount of land or less. From about 1950 until the 1980s midwestern farmers improved yields by around three percent a year, more than doubling the overall yield through the period. This feat of expansion was so spectacular that some pessimists declared it was a special case that could never be repeated. But it has been done again, since around 1970, in China.

Entering college as the Depression began, Borlaug worked for a time in the Northeastern Forestry Service, often with men from the Civilian Conservation Corps, occasionally dropping out of school to earn money to finish his degree in forest management. He passed the civil-service exam and was accepted into the Forest Service, but the job fell through. He then began to pursue a graduate degree in plant pathology. During his studies he did a research project on the movement of spores of rust, a class of fungus that plagues many crops. The project, undertaken when the existence of the jet stream was not yet known, established that rust-spore clouds move internationally in sync with harvest cycles — a surprising finding at the time. The process opened Borlaug’s eyes to the magnitude of the world beyond Iowa’s borders.

At the same time, the Midwest was becoming the Dust Bowl. Though some mythology now attributes the Dust Bowl to a conversion to technological farming methods, in Borlaug’s mind the problem was the lack of such methods. Since then American farming has become far more technological, and no Dust Bowl conditions have recurred. In the summer of 1988 the Dakotas had a drought as bad as that in the Dust Bowl, but clouds of soil were rare because few crops failed. Borlaug was horrified by the Dust Bowl and simultaneously impressed that its effects seemed least where high-yield approaches to farming were being tried. He decided that his life’s work would be to spread the benefits of high-yield farming to the many nations where crop failures as awful as those in the Dust Bowl were regular facts of life.

UPDATE: Easterbrook’s follow-up in the Wall Street Journal on 16 September is entitled The Man Who Defused the Population Bomb.

Filed under Africa, China, economics, food, science, U.S.

Baptist Becomes Buddhist U.S. Army Chaplain

In The Tennessean of 8 September 2009, Bob Smietana profiles a new type of chaplain for the U.S. Army:

When Thomas Dyer heads to Afghanistan in December, the former Marine and one-time Southern Baptist pastor won’t take a rifle with him. He won’t take a Bible, either.

Instead, Dyer, a Tennessee National Guardsman from Memphis and the first Buddhist chaplain in the history of the U.S. Army, hopes to bring serenity and calm, honed by months of intensive meditation.

That preparation, he says, will help him bring spiritual care in the midst of a war zone. “We’re going to put it to the test,” Dyer said.

Dyer’s deployment is another step in the U.S. military’s attempt to meet the diverse spiritual needs of America’s fighting forces. It’s no easy task. For one thing, the military chaplaincy is facing all the complications that have affected American religion over the past 40 years. The decline of mainline Protestants and their aging clergy. The ongoing Catholic priest shortage. The explosion of religious diversity. The emergence of people with no faith. The ease with which people move from one faith to another.

The military is trying to adapt to these changes, while trying to find ministers willing to serve in a war zone, and who can minister to American troops without offending Muslim allies.

My elder stepbrother is a chaplain in the U.S. Army—and the son of a chaplain. And one of my Southern Baptist missionary “uncles” in Japan became very interested in Japanese Buddhism, later publishing a book entitled Zen Way, Jesus Way. One of his daughters is a believer in Tibetan Buddhism. Whenever Christians ask me why I am not a believer, I usually respond, “In which religion?”

UPDATE: There were Christian chaplains in the Imperial Japanese Army, along with Buddhist and Shinto chaplains. (The pastor of the Hiroshima Baptist Church, where my parents served as missionaries, was a Christian chaplain with the Japanese Army in China.) However, there were no Buddhist or Shinto chaplains in the U.S. Army’s 442nd Regimental Combat Team, only Protestants, even for all the “Buddhaheads” from Hawai‘i.

Filed under Buddhism, Islam, military, religion, U.S., war

Wordcatcher Tales: Jerkinhead, Shreadhead

Koko Head Avenue Tudoresque, Honolulu
I’ve finally found a couple of wonderful names for those squashed ends on gable roofs that can be found on some of the Honolulu Tudor–French Norman Cottages I’ve been documenting for the WikiProject National Register of Historic Places.
Tudor-revival cottage on Kiele Avenue at Coconut Avenue
The most fetchingly archaic-sounding terms are jerkinheads or shreadheads. More prosaic (but “hipper”) names for them are clipped gables, hipped gables, half-hips or barn-hips.
Tudor-revival cottages, Waikiki
As the Wikipedia hip roof article notes, a half-hip is a hip roof on top of a gable roof, while a gablet roof is a gable on top of a hip roof, like the roof type known as irimoya in Japanese.
Side view, Hawaii Shingon Mission, Honolulu
Irimoya roofs can be seen on a lot of the newer homes built in upper Manoa Valley, whose residents are nowadays far more likely to be of East Asian than European descent.
Home with East Asian design motifs, incl. irimoya roof

Filed under England, Hawai'i, Japan, language, U.S.

Fractured Historiography of the Confederacy

In the latest issue of Civil War History (Project MUSE subscription required), University of Virginia professor Gary W. Gallagher reviews major trends in the historiography of the Confederacy. Here are a few excerpts about some of the key earlier trendsetters. Explaining defeat is always more challenging than explaining victory.

Thirty years have passed since Emory M. Thomas’s The Confederate Nation, 1861–1865 appeared on the historiographical landscape. Some of its themes had been present in his earlier The Confederacy as Revolutionary Experience, and together the two books heralded the emergence of a major figure in the field. Factors weakening the Confederacy loomed larger than evidence of Rebel persistence or strength in the scholarly literature at that time, but Thomas took seriously the idea of national sentiment in the seceding states. When defeat apparently stalked the slaveholding republic in the spring of 1862 and “their national experiment seemed almost a failure, Confederate Southerners began to respond to their circumstances by redefining themselves—or, more precisely, by defining themselves as a national people.”…

David Williams’s Bitterly Divided: The South’s Inner Civil War traverses much of the same ground as Thomas’s work, offering a convenient point of departure to consider the trajectory of recent scholarship on the Confederacy. The author or editor of four previous books dealing with various aspects of Confederate history, Williams complains that generations of historians have emphasized the war “waged with the North” rather than exploring how the “South was torn apart by a violent inner civil war, a war no less significant to the Confederacy’s fate than its more widely known struggle against the Yankees.” Resolutely focused on that “inner civil war,” Bitterly Divided creates an impression of overwhelming internal fracturing that renders the presence of U.S. armies strangely irrelevant….

Internal fissures serve as the interpretive touchstone of a rich body of older work, a brief review of which reveals that Bitterly Divided plows in deep existing furrows. As early as 1867, editor Edward A. Pollard of Richmond’s Examiner denied that northern manpower and resources had settled the issue. “The great and melancholy fact remains,” Pollard observed in The Lost Cause, “that the Confederates, with an abler Government and more resolute spirit, might have accomplished their independence.”…

In 1937, while Margaret Mitchell’s pro-Confederate epic Gone with the Wind sold in huge numbers, pioneering African American historian Charles H. Wesley challenged the Lost Cause narrative of noble Rebels struggling against impossible odds. “Historians of the Confederacy have based their works mainly upon the military subjugation of the South and the heroic actions of its defenders and have neglected the contributing social factors,” maintained Wesley in The Collapse of the Confederacy….

Twenty-eight years later, Carleton Beals reprised much of Wesley’s argument in War within a War: The Confederacy against Itself. “This book is about those people who resisted, because of their love for the Union, or civil rights, or because they believed the struggle to be a ‘rich man’s war, poor man’s fight,’” wrote Beals, who featured “mountain people,” opponents of conscription, African Americans, and others at odds with the Confederate government….

Two historiographical waves established a durable framework within which many advocates of internal failure have examined the Confederacy. Between the mid-1920s and the mid-1940s, a number of scholars joined Wesley to mount a powerful collective assault on Lost Cause mythology. Although they sometimes deployed simplistic class models to support the idea of a rich man’s war and a poor man’s fight, their findings contributed importantly to our understanding of topics such as conscription, state rights as a divisive ideology, desertion, persistent unionism, resistance among slaves (what W. E. B. Du Bois called “The General Strike”), class tensions, and corrosive guerrilla warfare. The fact that all major titles by these authors have been reprinted at least once suggests their continuing influence.

A flurry of studies in the 1970s and 1980s, spurred in part by the new social history’s emphasis on people outside the traditional power structure, expanded on the earlier literature. Some of this work can be read as a direct or indirect response to Thomas’s The Confederate Nation, 1861–1865. Authors and editors drove home the point that no one should think of the Confederacy as a society united across boundaries of region, class, race, and gender. In a category by itself was Why the South Lost the Civil War, by Richard E. Beringer, Herman Hattaway, Archer Jones, and William N. Still Jr.—a detailed and thoughtful, if not ultimately persuasive, brief for the centrality of internal causes of Confederate failure. This prize-winning study attributed defeat to the impact of southern religion, an absence of nationalism, and, despite a level of commitment that absorbed the deaths of approximately one-quarter of all military-age white males in the Confederacy [emphasis added], weak popular will….

Drew Gilpin Faust weighed in on the topic of Confederate nationalism at the end of the 1980s. Suggesting that the “creation of Confederate nationalism was the South’s effort to build a consensus at home, to secure a foundation of popular support for a new nation and what quickly became an enormously costly war,” she identifies religion as critical to a conception of nation predicated on defining Confederates as God’s chosen people. Faust also notes the centrality of slavery to the Confederate consciousness and warns against working backward from Appomattox to yoke discussions of nationalism to those about why the Rebels failed. Her conclusions, however, stress the ultimate weakness of nationalistic sentiment in the southern republic….

The more recent “cutting-edge” literature on internal dissent … has appeared at a steady rate over the past dozen years. A full discussion lies beyond the scope of this essay, but some trends are evident. It has long been a commonplace that the hill country and mountains of the Confederacy functioned as centers of antiwar and anti-Davis administration activity. An array of recent scholarship has examined the war in Appalachia, confirming deep divisions in mountainous regions but also finding evidence of strong support for the Confederacy. Works on North Carolina, Virginia, Tennessee, and Georgia create a composite picture affirming John C. Inscoe and Gordon B. McKinney’s observation that “within the southern highlands, the war played out in very different ways for western North Carolinians than it did for East Tennesseans or north Georgians or western Virginians or Eastern Kentuckians.” The authors might have added that within each of these five populations the variety of reactions to the war and its trials also defies easy characterization.

Filed under nationalism, scholarship, slavery, U.S., Virginia, war

Outlying Islands of Shrinking Dixie

The latest issue of Southeastern Geographer (Project MUSE subscription required) has an article by Shrinidhi Ambinakudige of Mississippi State University about changes in two “vernacular regions”: “the South” and “Dixie.” (Vernacular regions are those identified from popular usage.) Its abstract and its resumen (!) follow, along with a few excerpts.

Abstract: John Shelton Reed’s maps of the South and Dixie in the 1970s and 1980s indicated the shrinking boundaries of these two vernacular regions. This study revisits the South and Dixie. Using electronic telephone directories, this study collected all business names with “Southern” and “Dixie” in all the cities in the US. A univariate local indicator of spatial association (LISA) analysis was used to identify clusters of high and low normalized values of the terms. These analyses helped identify the current core regions of Dixie land and the South. The results indicate that the erosion of “the South” and “Dixie” boundaries is noticeable. The study identified a previously unnoticed island of “Dixie” in Utah. Southern and Dixie identities are stronger in non-metropolitan counties than in metropolitan and micropolitan counties. Southern and Dixie identities are eroding gradually: while the erosion of Southern identity is very slow, the erosion of Dixie identity seems to be faster. Overall, it may be more appropriate to refer to “Dixieland” as “Dixie islands” today, but the South is still the South.

Resumen [in English]: John Shelton Reed’s maps of the South and Dixie in the 1970s and ’80s indicated a reduction in the boundaries of these two vernacular regions. This analysis revisits the South and Dixie. Using electronic telephone directories, this study compiles business names containing “Southern” and “Dixie” in all the cities of the United States. A univariate LISA (local indicator of spatial association) analysis was used to identify clusters of high and low normalized values of the terms. This analysis helped identify the present-day regions of Dixie Land and the South. The results indicate that the erosion of the boundaries of the “South” and “Dixie” is noticeable. The study identified a previously unnoticed “Dixie island” in Utah. Southern and Dixie identities are stronger in non-metropolitan counties than in metropolitan and micropolitan counties. Southern and Dixie identities are deteriorating gradually; while the deterioration of Southern identity is quite slow, that of Dixie seems to be faster. In general, it would be more appropriate today to refer to “Dixieland” as the “Dixie Islands,” but the South is still the South….
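For readers unfamiliar with the method named in the abstract: a univariate LISA is conventionally computed as the local Moran’s I statistic. The sketch below is illustrative only — the values, the adjacency structure, and the row-standardized weights are invented for demonstration, not taken from the paper, whose actual inputs are county- and zip-code-level term counts.

```python
# Illustrative local Moran's I, the usual univariate LISA statistic.
# All data and weights here are hypothetical, for demonstration only.

def local_morans_i(values, neighbors):
    """Return one local Moran's I score per spatial unit.

    values: normalized term counts, one per unit.
    neighbors: dict mapping a unit's index to indices of adjacent units.
    """
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]          # deviations from the mean
    m2 = sum(d * d for d in z) / n          # population variance
    scores = []
    for i in range(n):
        nbrs = neighbors.get(i, [])
        if not nbrs or m2 == 0:
            scores.append(0.0)
            continue
        w = 1.0 / len(nbrs)                 # row-standardized weights
        lag = sum(w * z[j] for j in nbrs)   # spatial lag of deviations
        scores.append((z[i] / m2) * lag)
    return scores

# Toy landscape: units 0-2 form a high-value cluster, units 3-4 a low one.
vals = [9.0, 8.0, 9.5, 1.0, 0.5]
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
scores = local_morans_i(vals, adj)
```

Positive scores mark units resembling their neighbors — the “high-high” clusters the article maps as Southern or Dixie hot-spots; negative scores would flag spatial outliers.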

People’s sense of place creates a vernacular region. According to Jackson (1984), the sense of place is a permanent position in the social and topographical sense that gives people their identities. The sense of place can be perceived in both physical and cultural landscapes: it is embodied in folklore, personal narratives, and oral histories—but very rarely do these descriptions appear in “official” documents, so locating the boundaries of these senses of place is difficult….

In this study, the boundaries of the South and Dixie are delineated using occurrences of the terms “South,” “Southern,” “American,” and “Dixie” in business names: the relative frequency of each term was ascertained from business names containing any of them….
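The counting step described in the excerpt above can be sketched as follows. The substring matching and the use of “American” as a normalizing baseline follow the paper’s description, but the helper names and the sample business names are made up here for illustration.

```python
# Sketch of the term-frequency step: tally business names containing
# "south" (which also catches "southern"), "dixie", or "american", then
# express each regional term relative to the "american" baseline.
# The sample names below are invented for illustration.

from collections import Counter

TERMS = ("south", "dixie", "american")   # "south" also matches "southern"

def term_counts(business_names):
    counts = Counter()
    for name in business_names:
        lowered = name.lower()
        for term in TERMS:
            if term in lowered:
                counts[term] += 1
    return counts

def baseline_ratio(counts, term):
    """Regional-term frequency normalized by the 'american' count."""
    baseline = counts["american"]
    return counts[term] / baseline if baseline else 0.0

names = ["Dixie Diner", "Southern Supply Co.", "American Hardware",
         "South Main Auto", "Dixie Pest Control"]
counts = term_counts(names)
```

These per-unit ratios are the values that the LISA step then groups into the hot-spot clusters reported in the article’s figures.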

Only Southern hot-spots, which include Alabama, Arkansas, northern and central Florida, Georgia, Mississippi, most of North Carolina, South Carolina, West Virginia, Virginia, Louisiana, Tennessee, part of California, and the eastern part of Texas, are shown in Figure 1. Oklahoma also indicated a Southern identity. Southern identity is still strong in most of these traditional Southern states; however, a gradual shrinkage is apparent, especially in North Carolina and Oklahoma. As Reed (1990) observed, many people in North Carolina now identify themselves as Easterners rather than Southerners. Southern Florida remains non-Southern.

The top ten counties with the highest Southern-to-American ratios are listed in Table 3. Dixie County in Florida had the highest score, followed by Hall County in Georgia. Washington County in Utah also showed a significant Southern identity and ranked 6th among all counties in the US.

The top ten states with the highest Southern-to-American ratios are listed in Table 4; Mississippi, Alabama, Louisiana, Georgia, and South Carolina are the top five states.

The LISA analysis redefined the boundaries of “Dixie”: the zip codes with a high concentration of Dixie identity and high Dixie-to-American ratios are clustered (Figure 2). Unlike the results reflecting Southern identity, Dixie seems to be eroding into “islands.” The results also identified two interesting Dixie core areas—one on the Utah/Arizona border, the other in Ohio (Figure 2). Washington County, Utah has historically maintained Southern and Dixie identities; it is known as Utah’s Dixie. According to Cahoon and Cahoon (1996), in 1857, before the bitter fighting of the U.S. Civil War, a group of people from the South migrated to Utah. These settlers were asked to move to southern Utah because it was reportedly more fertile land for growing cotton. Another group, consisting of Robert D. Covington and 28 Southern families, joined the first group. These two groups founded Washington City and built dams to irrigate their crops. To keep their Southern identity, they decided to name their land “Dixie”; this later became “Utah’s Dixie.” The actual reason for a strong Dixie identity in Ohio is unknown; it may be related to the fact that Ohio is the birthplace of Dan Emmett, writer of the famous song “Dixie.”

Filed under language, nationalism, U.S.