
The Failed Soviet Invasion of Romania, Spring 1944

From Red Storm over the Balkans: The Failed Soviet Invasion of Romania, Spring 1944, by David M. Glantz (U. Press of Kansas, 2007), pp. 372-378:

Strategic Implications

Every officially sanctioned Soviet and, more recently, Russian history of the Soviet-German War published since war’s end categorically asserts that, immediately after the Red Army completed its successful winter campaign in the Ukraine during mid-April 1944, Stalin ordered his Stavka and General Staff to begin preparations for a series of successive strategic offensives through Belorussia and Poland during the summer of 1944. From a military and political perspective, these offensives were designed to hasten the destruction of the Wehrmacht and Hitler’s Third Reich in the shortest possible time by exploiting the most direct route into the heart of Germany. Only after completing these more important offensives, these sources argue, did Stalin finally unleash the Red Army on an invasion of Romania and the Balkan region. According to this strategic paradigm, when the Red Army actually implemented the Stavka’s plan, it began its offensive into Belorussia in late June, its offensive into southern Poland in mid-July, and its offensive into Romania in late August.

Furthermore, these same histories argue that, just as the Balkan region was a secondary strategic objective for Stalin during the Red Army’s summer-fall campaign of 1944, it remained of secondary importance when the Red Army conducted its offensives during the winter campaign of 1945. Therefore, just as the Red Army invaded Romania in late August 1944, but only after its offensives in Belorussia and eastern Poland succeeded, likewise, during its winter campaign of 1945, the Red Army captured Budapest and western Hungary and invaded Austria in February and March 1945, but only after its offensive through Poland to the Oder River succeeded.

However, the “discovery” of the Red Army’s attempt to invade Romania in mid-April and May 1944 casts serious doubts on this prevailing strategic paradigm. In short, the precise timing, immense scale, complex nature, and obvious objectives of the Red Army’s offensive into Romania during April and May 1944 now clearly indicate that Stalin and his Stavka were paying considerable attention to strategic imperatives other than those described in this prevailing strategic paradigm. Simply stated, vital military, economic, and political factors prompted Stalin to order his Red Army to mount a major offensive of immense potential strategic significance into Romania between mid-April and late May 1944….

In addition to these purely military considerations, there were also strategically vital economic and political motives for Stalin and his Stavka to mount an invasion of Romania during April and May 1944. Economically, for example, as von Senger pointed out, if successful, a full-fledged Red Army invasion of Romania could deprive the Axis of its vital oilfields in Romania, thereby seriously degrading Germany’s ability to continue the war. More important still from a political standpoint, a successful invasion of Romania would likely topple the pro-German Romanian government and drive Romania from the war, and perhaps even force Bulgaria to abandon its looser ties with Hitler’s Germany. In fact, the loss of a significant portion of Romania to the Red Army would shake if not shatter the Axis’ defenses throughout the entire Balkans, inject a sizeable Red Army presence in the region, and end all hopes by Stalin’s “Big Three” counterparts, Roosevelt and Churchill, that they could halt the spread of Soviet influence into the Balkan region.

In short, since Stalin’s Western Allies were already planning Operation Overlord to land their forces on the coast of France, the Red Army’s entry into Romania would end, once and for all, Stalin’s anxiety over his Allies establishing a “second front” in the Balkans. Ever the realist, Stalin judged that the potential political gains associated with the Red Army’s advance into Romania during April and May 1944 more than outweighed any associated military risks. Nor was it coincidental that, after his spring 1944 venture failed and the Red Army’s summer offensives to the north succeeded, Stalin unleashed the Red Army forces on a new invasion deeper into Romania and the Balkans during August 1944.

Furthermore, although it will be the subject of a future book, it is now quite clear that Stalin continued to pursue a similar “Balkan strategy” during the winter of 1945 after his Allies assured him at the Yalta Conference in early February that Berlin would be his for the taking. As a result, within hours after receiving these assurances, Stalin abruptly halted the Red Army’s advance on Berlin along the Oder River, only 30 miles from Berlin, and instead shifted its main axis of advance—first, into western Hungary and, later, into the depths of Austria—for essentially the same political reasons that had motivated him to invade Romania during April, May, and August 1944. Just as Stalin had altered his strategy for a drive on Berlin by attempting to invade Romania in April and May 1944 only to resume his advance along the Berlin axis in June, a year later the Red Army began its final drive on Berlin on 16 April 1945, the day after Vienna fell. Therefore, the Red Army’s failed offensive into Romania during April and May 1944 is remarkably consistent with Stalin’s strategic behavior during 1945.

Lesson Learned

Regardless of Stalin’s motives for authorizing the offensive into Romania, for a variety of reasons, the Red Army’s first Iasi-Kishinev offensive ended as a spectacular failure. After failing to overcome Axis defenses from the march during mid-April, Konev’s 2nd Ukrainian Front was equally unsuccessful in its better-prepared offensive against Axis forces defending in the Tirgu-Frumos and Iasi regions in May. During the same period, although Malinovsky’s 3rd Ukrainian Front was able to seize some bridgeheads across the Dnestr River in early April, its twin efforts to expand those bridgeheads later in the month achieved little more. Complicating the Stavka’s strategic plans, while Konev and Malinovsky were organizing a third effort to capture Iasi and Kishinev during mid-May, for the first time since late 1942, counterattacking German forces actually managed to inflict serious defeats on major Red Army forces defending bridgeheads across a major river….

The defending German forces had also been fighting for as prolonged a period as their Red Army counterparts and had suffered many serious and costly defeats and heavy losses in men and equipment. Furthermore, when Konev’s and Malinovsky’s forces invaded Romania, in many sectors they faced green and poorly motivated and equipped Romanian troops. Despite this fact, fighting with a determination born of desperation, the Axis forces were able to hold firmly to most of their defenses in April and early May and, thereafter, mount successful counterstrokes of their own during early May and early June.

Difficult spring weather conditions and the adverse effect of the heavy rains and flooding on the terrain also certainly exacerbated the already significant logistical problems the two fronts were experiencing as they operated at the end of overextended lines of communication, served by a rickety patchwork logistical network that was still under construction. First, the two Ukrainian fronts were conducting offensive operations in a region whose hilly, broken, and often lightly wooded terrain differed substantially from the rolling grass-covered flatlands of the Ukraine to which their troops were long accustomed.

Second, for the first time in the war, the two fronts were attempting to conduct offensive operations after warmer weather melted the icy surface they had exploited to conduct mobile military operations in previous winters. Predictably, the rasputitsa proved as formidable an obstacle to the two fronts’ advancing forces as the Germans’ resistance and, in some cases, even more formidable.

Third, compounding the problems cited above, pursuant to orders, as they conducted their fighting withdrawal, the Germans systematically destroyed everything of value both for destruction’s sake and to create obstacles to the Red Army’s forward movement. They blew up railroads, beds, tracks, and culverts alike; they cratered roads and demolished dams; and they destroyed every building or installation regardless of military value. In short, they left a vast wasteland for the Red Army to traverse in their wake.

As a result, whether attacking or defending, in addition to experiencing customary shortages of food, which made soldierly foraging an essential art, and the normal effects of prolonged combat attrition, virtually every formation and unit within the 2nd and 3rd Ukrainian Fronts suffered significant losses in weaponry and heavy equipment and experienced severe ammunition and fuel shortages. For example, archival documents indicate that, prior to its offensive along the Tirgu-Frumos axis on 2 May, the 2nd Ukrainian Front’s 2nd Tank Army was supplied with between two and five combat loads of ammunition and two to two and one-half refills of gasoline and diesel fuel, which was not excessively low to conduct such an operation. However, it would be disingenuous to offer these realities as excuses for Konev’s and Malinovsky’s offensive failures, since, as was always the case, the two front commanders, as well as their subordinate officers and soldiers alike, frequently relied on sheer ingenuity or “native wit” to resolve their logistical dilemmas.


Filed under Germany, Romania, USSR, war

Opera Boxes, Salons, and Bedrooms, 1700s

From The Opera Companion, by George Martin (John Murray, 1961), pp. 133-134:

In Italy in the eighteenth century the center of operatic activity shifted from Venice to Naples, where a school of outstanding composers arose specializing in two styles: “opera seria” and “opera buffa.” “Opera seria” was on a grand scale with historical or mythological themes; it was a close cousin to the Viennese Baroque opera except that the music was of greater importance and the scenery of less. The aria, generally sung by a castrato, was the crux of every scene, and over the years it became extremely stylized, so that for any situation there was a certain type of aria that was appropriate. The singer was expected to embellish the aria with extemporaneous runs, trills and flourishes, and this—more than the drama, scenery or composed music—was what excited the audience. Thus the composer wrote a vehicle for a particular singer rather than searching his soul in nineteenth-century romantic style to produce an immortal masterpiece. Grout, in A Short History of Opera, reports that forty leading composers in the eighteenth century wrote fifty operas each: Verdi wrote twenty-six, Wagner thirteen, and Puccini twelve. There was no repertory as today. The audience wanted new music each year, although it was perfectly willing to have the same librettos used over and over; for example, Mozart’s was the seventh setting of Metastasio’s La Clemenza di Tito. The scores of the operas were almost never published and, in any event, were extremely sketchy. Only the favorite arias might be published and, as there was no copyright, the composer was far less interested in preserving his old work for posterity than in receiving a commission for a new one, which he could complete in four to six weeks.

One result of this approach to opera, so different from today’s, was that no one really listened much; opera was still a social rather than a musical event. A Frenchman, De Brosses, writing in 1740 described what went on at Rome: “The ladies hold, as it were, at homes in their boxes, where those spectators who are of their acquaintance come to call on them. I have told you that everyone must rent a box. As they are playing at four theatres this winter, we have combined to hire four boxes, at a price of twenty sequins each for the four; and once there I can make myself perfectly at home. We quiz the house to pick out our acquaintance, and if we will, we exchange visits. The taste they have here for the play and for music is demonstrated far more by their presence than by the attention they pay. Once the first scenes are past, during which the silence is but relative, even in the pit, it becomes ill-bred to listen save in the most interesting passages. The principal boxes are handsomely furnished and lighted with chandeliers. Sometimes there is play, more often talk, seated in a complete circle as is their custom, and not as in France, where the ladies add to the show by placing themselves in a row in the front of each box; so you will see that in spite of the splendour of the house and the decoration of each box, the total effect is much less fine than with us.”

Besides visiting in the opera, the Romans also played cards and chess. In Milan the diversion was faro. Florence offered hot suppers served in the boxes. At Turin each box had a room off it with a fireplace and all the conveniences for refreshments and cards. At Venice the boxes could be closed off from the theater by a shutter.

All travelers reported that the gabble and noise were deafening except during two or three favorite arias which, greeted with wild applause, were repeated. One visitor, Lalande, estimated that the typical Milanese spent a quarter of his life at the opera. It is not surprising then that the archduke’s box in Milan had attached to it not only a private sitting room but also a bedroom.


Filed under Italy, music

Changing Court Costumes under Meiji

From “Cultural Change in Nineteenth-Century Japan,” by Marius B. Jansen, in Challenging Past and Present: The Metamorphosis of Nineteenth-Century Japanese Art, ed. by Ellen P. Conant (U. Hawai‘i Press, 2006), pp. 40-41:

The need for practicality and efficiency affected cultural policy in many ways. The early Meiji years saw the court trying to do business in the garments of antiquity. Albert Craig writes that “when the government structure was first promulgated, officials rushed out to secondhand bookstores to buy copies of the commentaries on the Taiho code (702) so they would know what the new office titles meant.” Many adopted Heian-period names, and “[e]ven the clothing worn by the councilors at certain court ceremonies was dictated by the new ethos. High-ranking samurai officials were required to dress as nobles; and all, including nobles, were required to wear swords. On one occasion the Saga samurai Eto Shinpei, late for a ceremony, dashed into the court uncapped by an eboshi—a small, black, silly-looking hat that perches forward on the head. A noble asked him, ‘Where is your hat?’ Eto retorted, ‘Where are your swords?’ Both hastened out for the proper accouterments.”

But the work of modernization could not be carried out at a costume party. In 1870 the Daigaku Nanko, ancestor of the Imperial University of Tokyo, still ruled out Western clothing, but that same year the imperial court appointed a Western-clothing specialist to its staff. By 1874 Kido Takayoshi, hero of the Restoration and powerfully influential government minister, was agonizing in his diary over the pain caused by “my shoes.” A year later Mori Arinori (1848-1888), natty in a Western suit, was bantering with the Qing statesman Li Hungzhang. Did he not find it unpleasant to wear such foreign clothes? Li asked solicitously. Had not Mori’s ancestors preferred Chinese costume? Yes, answered Mori, but he was doing as his ancestors had done by choosing the better garb. And, he went on, had Li’s ancestors worn Manchu robes like those his host had on? No, was the reluctant answer, they had not.

Before long the Meiji emperor’s Western military uniform was made court dress, and things moved so rapidly that at a birthday ball in 1885, itself remarkable, only two of the ladies did not appear in Western dress. Westerners usually thought this regrettable. In 1887 Herr von Mohl, a specialist in Western protocol hired for the court, suggested going back to Japanese dress for formal occasions but found that “Count Ito let me know that in Japan the costume question was a political issue in which the imperial household advisors had no voice; he requested that the matter should be viewed as settled and not to waste further time in discussing what is, in fact, a fait accompli.”

By the time the Meiji constitution was promulgated in 1889, Tokyo newspapers reported that Western-style tailors were being swamped with business by prefectural officials who had come crowding into the capital. Eboshi had given way to top hats, which alternated with bowlers in the uneasy combination of dress and footwear that is recorded in many Meiji photographs.


Filed under China, Europe, Japan, nationalism

Modernizing Music under Meiji

From “Cultural Change in Nineteenth-Century Japan,” by Marius B. Jansen, in Challenging Past and Present: The Metamorphosis of Nineteenth-Century Japanese Art, ed. by Ellen P. Conant (U. Hawai‘i Press, 2006), pp. 44-45:

Gagaku gained increased prominence, but at the cost of stultification. By the end of the Tokugawa period it was associated primarily with the imperial court; professionals performed at court and the larger Shinto shrines. In 1871 a Gagaku Bureau was established within the Imperial Household Office (later Imperial Household Ministry), and thereafter its representatives served on all commissions charged with musical policy. Gagaku practice became archaized and codified in the process of defining as a “tradition” what must at one time have been considerably more varied. Nagauta, which had deep roots in popular culture, flourished. It gradually became more independent from the kabuki theater, developing a concert format, and spread into commoner homes as an amateur skill. Instrumental music was freed from special restrictions. Koto had been a special art reserved for blind performers, while shakuhachi had been associated with Fuke Buddhism, which was banned in 1871. Both skills became middle-class accomplishments. Satsuma and Choshu biwa music, previously considered provincial, now acquired a popularity corollary to the political dominance of those southwestern domains in the new regime. Small wonder that former Tokugawa retainers often sneered at their Meiji successors as imo (potato) zamurai.

Western music had made its entry in Bakumatsu times, sometimes under unlikely circumstances. The captain’s clerk aboard Commodore Perry’s Saratoga wrote that Japanese guests who were treated to a band concert in 1854 courteously asked to hear the first number again, but proved to mean the tuning-up period, whose sounds they found more interesting than the marches that followed. Satsuma samurai were sufficiently impressed by the martial strains that came to shore from the British band celebrating the bombardment that had just burned Kagoshima in 1863 to want to introduce Western military music into their own forces. An English bandmaster of the marine battalion guarding the Yokohama legation was asked to instruct thirty Satsuma militiamen, and in 1871 these formed the core of the new navy band, its English bandmaster’s salary shared by the navy and the Gagaku Bureau. In 1877 the Englishman Fenton was replaced by a German, Franz Eckert. The harmonization and orchestration of “Kimi ga yo,” which came to function as the new national anthem, was the product of the combined efforts of these bandmasters.

Military songs and marches quickly became popular. “Oh My Prince!” (Miyasan! Miyasan!) was ascribed to the armies that marched against the shogun’s capital. Words could be changed to fit new themes and occasions. “Battōtai” (The Drawn Sword Unit), composed in 1885 by a French instructor about the Satsuma Rebellion, became “The Sinking of the Normanton” in 1887 for the disaster off Kii in which all the Japanese, and no foreigners, were lost, and emerged again as the “Rappa-bushi” of the Russo-Japanese War. Still other songs adapted the melodies of Stephen Collins Foster to a Japanese mode, as with “Tobe Tobe Tonbi Sora” (Fly, Kite, Fly, High in the Sky!), whose tune turns out to be a version of “Way Down upon the Sewanee River.”

Appropriately enough, some of the last strains of late-Edo chant and song were suppressed with the people’s rights movement, which adapted them to political uses. Dainamaito bushi, satirical pieces designed to be explosive, were composed, sung, and sold by street-singer activists deploring official arrogance and government tyranny in the 1880s. The victories of the state in domestic politics and foreign wars, however, speeded the production of a new and less divisive national culture, homogenized by mass education and literacy, which emerged by the end of the century.

The Ministry of Stultification (or Zombification) would certainly be an appropriate name for the Imperial Household Ministry, even today.


Filed under Britain, France, Germany, music, nationalism, U.S.

Cambodia’s Thirty Years War

From After the Killing Fields: Lessons from the Cambodian Genocide, by Craig Etcheson (Texas Tech U. Press, 2006), pp. 2-4 (footnote references omitted):

It is an extraordinary situation. Cambodia is a country where as much as a third of the population died in one of the worst genocides of modern times, and many Cambodians do not believe it happened. How can it be that so much destruction occurred so recently, yet so few are aware of this history? In order to explain how this peculiar situation came about and perhaps to help to correct it, we must start at the beginning of the Thirty Years War.

That war began in 1968, when the Communist Party of Kampuchea—popularly known as the “Khmer Rouge”—declared armed struggle against the government of Cambodian leader Prince Norodom Sihanouk. Over the course of this war, the conflict took many different forms, went through many phases, and involved a list of participants nearly as long as the roster of the membership of the United Nations. The country changed its name six times during the Thirty Years War, beginning as the Kingdom of Cambodia, changing to the Khmer Republic in 1970, Democratic Kampuchea in 1975, then the People’s Republic of Kampuchea in 1979, the State of Cambodia in 1989, and finally back to the Kingdom of Cambodia again in 1993. These contortions reflected the extraordinary violence of the underlying turmoil. Cambodia finally emerged from the Thirty Years War in 1999, with the capture of the last Khmer Rouge military leader still waging armed resistance.

The Thirty Years War wrought upon Cambodia a level of destruction that few nations have endured. At the epicenter of all this violence, from the beginning until the end, there was one constant, churning presence: the Khmer Rouge. Though they have now ceased to exist as a political or military organization, Cambodia continues to be haunted both by the influence of the individuals who constituted the Khmer Rouge and by the legacy of the tragedy they brought down on the country. The social, political, economic, and psychological devastation sown by the Khmer Rouge will take generations to heal, if indeed it ever can be healed. This epic saga of havoc is so complex and confusing that scholars do not even entirely agree on how to name all the ruin.

Many historians describe the conflicts in Southeast Asia during the second half of the twentieth century in terms of three Indochinese wars. The First Indochina War was the war of French decolonization in Vietnam, Laos, and Cambodia, beginning in 1946 and ending with the Geneva Conference of 1954. The Second Indochina War can be said to have run from 1954 to 1975; it is typically known in the United States as the “Vietnam War” and in Vietnam as the “American War,” a dichotomy that reveals much about who was centrally involved. In this war of Vietnamese unification, as the United States attempted to prevent the consolidation of communist rule over all of Vietnam, the war also spread to engulf both Laos and Cambodia. The Third Indochina War began hard on the heels of the second, when from 1975 to 1991, the issue of who would rule Cambodia and how it would be ruled drew deadly interest from virtually every country in the region and from all the world’s major powers.

From 1968 onward, it appeared to many Cambodians that these wars flowed from one into the other, as inexorably as the Mekong River flows into the sea. The 1991–1993 United Nations peacekeeping mission in Cambodia marked the end of the Third Indochina War, but the fighting in Cambodia continued for nearly another decade afterward. The outlines of the conflict in Cambodia changed with the United Nations intervention, but the basic issue underlying the war—the Khmer Rouge drive for power—was not resolved by the peace process. Combat continued between the central government and the Khmer Rouge until the government finally prevailed in 1999. Thus, what historians characterize as distinct wars with distinct protagonists appeared to many Cambodians to be simply one long war, with one central protagonist—the Khmer Rouge—driving the entire conflict.


Filed under Cambodia, China, France, Laos, Thailand, U.N., U.S., Vietnam, war

Niall Ferguson on Current Economic Prospects

Economic historian Niall Ferguson weighs in on China’s and America’s role in the current global economic crisis under the provocative headline, What “Chimerica” Hath Wrought.

The most important thing to understand about the world economy over the past decade has been the relationship between China and America. If you think of it as one economy called Chimerica, that relationship accounts for around 13 percent of the world’s land surface, a quarter of its population, about a third of its gross domestic product, and somewhere over half of the global economic growth of the past six years….

Yet commentators should hesitate before prophesying the decline and fall of the United States. It has come through disastrous financial crises before—not just the Great Depression, but also the Great Stagflation of the 1970s—and emerged with its geopolitical position enhanced. That happened in the 1940s and again in the 1980s.

Part of the reason it happened is that the United States has long offered the world’s most benign environment for technological innovation and entrepreneurship. The Depression saw a 30 percent contraction in economic output and 25 percent unemployment. But throughout the 1930s American companies continued to pioneer new ways of making and doing things: think of DuPont (nylon), Procter & Gamble (soap powder), Revlon (cosmetics), RCA (radio) and IBM (accounting machines). In the same way, the double-digit inflation of the 1970s didn’t deter Bill Gates from founding Microsoft in 1975, or Steve Jobs from founding Apple a year later….

But the most important reason why the United States bounces back from even the worst financial crises is that these crises, bad as they seem at home, always have worse effects on America’s rivals. Think of the Great Depression. Though its macroeconomic effects were roughly equal in the United States and Germany, the political consequence in the United States was the New Deal; in Germany it was the Third Reich. Germany ended up starting the world’s worst war; the United States ended up winning it. The American credit crunch is already having much worse economic effects abroad than at home. It will be no surprise if it is also more politically disruptive to America’s rivals.

Among the other developed economies, both the Eurozone and Japan are already officially in recession, ahead of the United States. The European situation is especially precarious because, contrary to popular belief, European banks are in worse shape than their American counterparts. Average bank leverage in the United States is around 12:1. In Germany the figure is 52:1. Short-term bank liabilities are equivalent to 15 percent of U.S. GDP; the British figure is 156 percent. Indeed, the United Kingdom runs a real risk of being Greater Iceland—an economy crushed by a super-sized financial sector.

Moreover, unlike the United States, Europe has no single Treasury that can implement a multibillion-dollar fiscal stimulus. Monetary policy may be uniform throughout the Eurozone, but fiscal policy is still a case of every man for himself.

Emerging markets, too, have been hammered harder by the crisis than the “decoupling” thesis promised. In the year to the end of October 2008, the U.S. stock market declined by 34 percent. But Brazil’s was down 54 percent, China’s 58 percent, India’s 64 percent and Russia’s 66 percent. When Goldman Sachs christened these four countries the BRICs, they little realized that their equity markets would one day be dropping like bricks. These figures are scarcely good advertisements for the more regulated, state-led economic models favored in Beijing and Moscow.

via A&L Daily


Filed under China, economics, Europe, Japan, U.S.

Early Research on New Guinea-area Preposed Genitives

In my dissertation on word-order change in the Austronesian languages of New Guinea, I reviewed some of the earliest published typological research in the area. Here is a glimpse of what obsessed some of the earliest European researchers.

Over most of the territory occupied by Austronesian (AN) languages, genitive (or “possessor”) nominals (= GEN) follow head (or “possessed”) nominals (= N) in noun–noun genitive constructions. However, the reverse order (GEN + N) prevails in the neighborhood of New Guinea and nearby islands of Indonesia (from Sulawesi and Flores east). The distinctive “preposition of the genitive” in the AN languages of the latter area engendered much discussion by European scientists during the earliest era of their work in the area, when hardly any other syntactic information was available.

Various explanations were offered. Kanski and Kasprusch (1931) reviewed some of these explanations and concluded that the preposed genitive most likely resulted from the influence of Papuan languages, which also have preposed genitives and which share a very similar geographical range. This still seems the most likely explanation, although it remains a mystery why the preposed genitive has a wider distribution than any of the other grammatical features attributed to Papuan influence.

Even leaving Papuan influence aside, however, the narrower and contiguous geographical distribution of the preposed genitive, when compared with the unrestricted distribution of the postposed genitive, suggests that the former is innovative and originated somewhere in “extreme eastern Indonesia” (Blust 1974:12). (Genitives may have been optionally preposed in Proto-Austronesian, as they are in many Philippine languages. This would have made it easier for preposed genitives to become the dominant pattern in New Guinea-area Austronesian languages.)

The boundary between languages with preposed genitives and those with postposed genitives forms a wide arc running to the west, north, and east of the island of New Guinea. The southwest-to-northwest portion of this arc is frequently referred to as the “Brandes line” (after Brandes 1884), and the northwest-to-southeast portion has been called the “line of Friederici” (after Friederici 1913). (See, for example, Kanski and Kasprusch 1931, Cowan 1952).

The Brandes line was first assumed to be a genetic boundary (a linguistic analog of the Wallace Line perhaps). However, there was some disagreement about which genetic units it separated. Brandes (1884) himself thought it set off two groups of Indonesian languages. Jonker (1914) argued that two such Indonesian subgroups could not be distinguished. Schmidt (1926) thought the Brandes line marked the border between Indonesian and Melanesian languages.

(Brandes and Jonker were more familiar with the languages of Indonesia and were impressed with how similar in other respects the languages with preposed genitives were to Indonesian languages in general. Schmidt was more familiar with Melanesian languages and was impressed with how similar the genitive-preposing languages were to Melanesian languages in general. See Kanski and Kasprusch 1931:884.)

Kanski and Kasprusch (1931) offered a compromise. They identified four groups of languages:

    1. Indonesian, west of the Brandes line
    2. Papuan-influenced Indonesian, east of the Brandes line
    3. Papuan-influenced Melanesian, west of Friederici’s line
    4. Melanesian, east of Friederici’s line

Like Kanski and Kasprusch (and Jonker), most scholars today would not consider genitive word order to be a valid criterion for subgrouping. Another feature of genitive constructions was put forth as a better basis for distinguishing Indonesian from Melanesian languages. In western Austronesian (“Indonesian”) languages, genitive pronouns can be suffixed (or encliticized) to all nouns. In eastern Austronesian (“Melanesian”) languages, genitive pronouns can be suffixed directly to nouns only when the possessed entity stands in an inalienable relationship to the possessor. In practice, this means that most body-part and kin terms are directly suffixed. Most other nouns are not. Instead, head nouns (denoting possessed entities) in constructions expressing an alienable relationship are preceded by genitive pronouns.

Schmidt (1926:424) and Kanski and Kasprusch (1931:889) regarded the influence of Papuan languages as responsible for the origin of the grammatical distinction between alienable and inalienable possession in eastern Austronesian languages as a whole. Under Papuan influence, they argued, the AN languages in transition from Indonesia to Melanesia began to lose their original postposed genitives and to acquire preposed ones. Nouns denoting alienables formed the vanguard of this change. Nouns denoting inalienables, such as body parts and kin terms (which involve animate—usually human—possessors, one could add), were slower to lose the original genitive pronouns because reference to an inalienable almost always requires reference to a possessor as well. The inalienables retained their postposed pronouns long enough for the latter to become an integral part of the noun itself. The general change was thus arrested, with inalienables forming a relic category.

One major weakness of this hypothesis is that it ignores the distinction between pronominal and nominal genitives. In eastern AN languages generally, it may be more common for independent genitive pronouns to precede head nominals in cases of alienable possession. (There is considerable variation.) In all but the more narrowly defined “Papuan-influenced” languages, however, genitive nominals follow head nominals (N + GEN), whether or not alienable possession is involved. This hypothesis, then, leads one to the false expectation that genitive nominals precede head nominals in all languages in which the alienable–inalienable distinction exists.

The alienable–inalienable distinction is reconstructible for Proto-Oceanic (Pawley 1973:153–169), the ancestor of most of the languages of eastern Austronesia. However, it is not unique to that group. It also occurs in many languages of eastern Indonesia that are not daughters of Proto-Oceanic (Collins 1980:39 ff., Stresemann 1927:6). So even the presence or absence of the alienable–inalienable distinction does not adequately indicate genetic affiliation.

The traditional recognition of differences between “Indonesian” and “Melanesian” languages is now generally phrased in terms of “Oceanic” and “non-Oceanic” languages. The former term denotes what is generally recognized as a genetic unit (primarily on the basis of phonological criteria). The negative term “non-Oceanic” lumps together all other AN languages without implying that they form a single genetic unit. The boundary between the two groups of languages distinguished by the new phraseology has also shifted somewhat farther to the east since the time of Brandes, Schmidt, and Friederici. The new boundary, which in an earlier era would have been called “Grace’s line” (after Grace 1955:338, 1971:31), is assumed to have a firmer genetic basis than the two earlier boundaries. Grace’s line, separating Oceanic from non-Oceanic languages, runs north-northeast to south-southwest, intersecting 140° E longitude between New Guinea and Micronesia. The scope of this thesis is restricted to the AN languages west of Friederici’s line and east of Grace’s line. These languages can be characterized as “Papuan-influenced Oceanic.” However, before restricting discussion to these languages, it will be helpful to review the various boundaries and the nature of the groups of languages they set apart.

The Brandes line marks the western boundary of a group of languages with innovative genitive word order. This group of typologically similar, but genetically not so closely related, languages is bounded on the east by Friederici’s line. Somewhere east of the Brandes line is the western boundary of a group of languages showing an innovative grammatical distinction between alienable and inalienable genitives. Most of these languages are members of the Oceanic subgroup, a genetic unit, but the westernmost languages are not. East of this boundary lies Grace’s line, the western boundary of the Oceanic subgroup. The eastern boundary of the Oceanic subgroup, and of all AN languages showing the alienable–inalienable distinction, is the eastern border of Austronesian as a whole (east of Easter Island). (I am assuming that the distinction between a and o genitives in Polynesian can be considered somewhat akin, semantically but not structurally, to the alienable–inalienable distinction in the rest of Oceanic.)

References

Blust, Robert A. 1974. Proto-Austronesian syntax: The first step. Oceanic Linguistics 13:1–15.

Brandes, Jan Lourens Andries. 1884. Bijdrage tot de vergelijkende Klankleer der westersche Afdeeling van de Maleisch-Polynesische Taalfamilie. Utrecht, P.W. van der Weijer. 184 pp.

Collins, James. 1980. The historical relationships of the languages of Central Maluku, Indonesia. Ph.D. dissertation, University of Chicago.

Cowan, H. K. J. 1951–1952. Genitief-constructie en Melanesische talen. Indonesië 5:307–313.

Friederici, Georg. 1912. Wissenschaftliche Ergebnisse einer amtlichen Forschungsreise nach dem Bismarck-Archipel im Jahre 1908, vol. 2, Beiträge zur Völker und Sprachenkunde von Deutsch-Neuguinea. Mitteilungen aus den Deutschen Schutzgebieten, Ergänzungsheft 5. Berlin, Mittler und Sohn.

Friederici, Georg. 1913. Wissenschaftliche Ergebnisse einer amtlichen Forschungsreise nach dem Bismarck-Archipel im Jahre 1908, vol. 3, Untersuchungen über eine melanesische Wanderstrasse. Mitteilungen aus den Deutschen Schutzgebieten, Ergänzungsheft 7. Berlin, Mittler und Sohn.

Grace, George W. 1955. Subgrouping Malayo-Polynesian: A report of tentative findings. American Anthropologist 57:337–339.

Grace, George W. 1971. Notes on the phonological history of the Austronesian languages of the Sarmi coast. Oceanic Linguistics 10:11–37.

Jonker, J. C. G. 1914. Kan men bij de talen van den Indischen Archipel eene westelijke en eene oostelijke afdeeling onderscheiden? Mededeelingen der Koninklijke Akademie van Wetenschappen, afdeeling Letterkunde, 4e Reeks, deel 12, pp. 314–417.

Kanski, P., and P. Kasprusch. 1931. Die indonesisch-melanesischen Übergangssprachen auf den Kleinen Molukken. Anthropos 26:883–890.

Klaffl, Johann, and Friederich Vormann. 1905. Die Sprachen des Berlinhafen-Bezirks in Deutsch-Neuguinea. Mitteilungen des Seminars für Orientalische Sprachen 8:1–138. (With additions by Wilhelm Schmidt.)

Pawley, Andrew K. 1973. Some problems in Proto-Oceanic grammar. Oceanic Linguistics 12:103–188.

Schmidt, Wilhelm. 1900, 1902. Die sprachlichen Verhältnisse von Deutsch Neuguinea. Zeitschrift für afrikanische und oceanische Sprachen 5(1900):354–384; 6(1902):1–99.

Schmidt, Wilhelm. 1926. Die Sprachfamilien und Sprachenkreise der Erde. Heidelberg, C. Winter. 595 pp.

Stresemann, Erwin. 1927. Die Lauterscheinungen in den ambonischen Sprachen. Zeitschrift für Eingeborenensprachen, Beiheft 10. Berlin, Reimer. 224 pp.


Filed under Germany, Indonesia, language, Netherlands, Papua New Guinea, scholarship

Wordcatcher Tales: Hodohodo, Czechia, Kanakysaurus

A recent article in the Wall Street Journal about a “shocking” new slacker attitude among Japanese workers referred to such workers as the hodohodo-zoku ‘so-so folks’. By itself, the word hodo (程) translates into ‘degree, limit, distance, status, amount’, and its reduplication, 程程, suggests ‘moderation’ or ‘judiciousness’. Grammatically, hodohodo behaves like an ideophone, but then ideophones in Japanese generally behave like nouns. To make it into a verb, you have to add -suru ‘do, be’, to make it into an adverbial you add the postposition ni, and so on. But I suspect hodohodo fails one test for onomatopoeic ideophones in Japanese: the ability to occur before -to ‘with’, in the equivalent of English ‘with a [plop-plop, fizz-fizz, etc.]’. I await correction from Matt of No-sword.

Last weekend, I also had the opportunity to meet a scholar visiting from the Czech Republic, who repeatedly referred to her nation as Czechia—a most sensible formulation which I subsequently found to have had official sanction since 1993 (along with Česko, the Czech equivalent), but which seems to be very slow to spread among English speakers, who perhaps still feel guilty about agreeing to carve up Czechoslovakia in 1938 and want to compensate by resisting any attempt to shorten the fuller form of its current name. However, feeling no guilt on that score despite my English heritage, I henceforth resolve to refer to that glorious center of historic dissidence as Czechia, plain and simple. In fact, I’ve just added Czechia to my list of country categories for this blog. I had already added Bohemia before, but that does no justice to Moravia, which has, if anything, an even greater tradition of religious dissidence.

Finally, I see that the latest issue of Pacific Science (vol. 63, no. 1, 2009, but already online at BioOne) reports the discovery of a new species in a viviparous skink genus indigenous to New Caledonia, a genus with the wonderfully appropriate name Kanakysaurus.


Filed under Czechia, Japan, language, nationalism, Pacific, scholarship, science

Rebranding British Royalty, 1914-1917

From Indian Summer: The Secret History of the End of an Emperor, by Alex von Tunzelmann (Picador, 2008), pp. 43-45:

ON 28 JUNE 1914, AN AUSTRIAN ARCHDUKE AND HIS WIFE were shot in Sarajevo by a nineteen-year-old terrorist. Assassinations were not unusual at the time. Victims in recent years had included the presidents of Mexico, France and the United States, the empresses of Korea and Austria, a Persian shah and the kings of Italy, Greece and Serbia. Portugal had two kings assassinated on the same day in 1908. But the murder of Archduke Franz Ferdinand would swiftly assume its legendary status as the trigger for the Great War. Swift to feel its tremors was the fourteen-year-old great-grandson of Queen Victoria, His Serene Highness Prince Louis of Battenberg….

Four months to the day after Franz Ferdinand’s death, the elder Prince Louis of Battenberg was removed from his position as First Sea Lord. Prince Louis had been British since 1868 and had served in the Royal Navy since he was fourteen years old. But by October 1914 Britain was at war with Germany, and there were far too many Germans visible in high places. For King George V, of the house of Saxe-Coburg-Gotha, the public tide of anti-German feeling was alarming. He was largely German; his wife, the former Princess May of Teck, was wholly German; his recently deceased father, King Edward VII, had even spoken English with a strong German accent. It was uncomfortably obvious where all this might lead, and a high-profile sacrifice was required to satisfy the public. Prince Louis was at the top of the list.

And so the king and his First Lord of the Admiralty, Winston Churchill, agreed to throw one of their most senior military experts onto the pyre at the beginning of the war, because his name was foreign….

But the humiliation of the Battenbergs was not complete. On 17 July 1917, a mass rebranding of royalty was ordered by George V. The king led by example this time, dropping Saxe-Coburg-Gotha (which was, in any case, a title — nobody knew what his surname was, though they suspected without enthusiasm that it might be Wettin or Wipper), and adopting the British-sounding Windsor. Much against their will, the rest of the in-laws were de-Germanized. Prince Alexander of Battenberg became the Marquess of Carisbrooke; Prince Alexander of Teck became the Earl of Athlone; Adolphus, Duke of Teck, became the Marquess of Cambridge. The unfortunate princesses of Schleswig-Holstein were demoted, in the king’s words, to “Helena Victoria and Marie Louise of Nothing.” And the unemployed Prince Louis of Battenberg would be Louis Mountbatten, Marquess of Milford Haven…. Henceforth, Prince Louis would be a marquess, and Battenberg a cake.


Filed under Britain, Europe, Germany, language, nationalism, war

TGA on Criminalizing Memory

In last Thursday’s Guardian, Timothy Garton Ash warns, “The freedom of historical debate is under attack by the memory police: Well-intentioned laws that prescribe how we remember terrible events are foolish, unworkable and counter-productive”:

Among the ways in which freedom is being chipped away in Europe, one of the less obvious is the legislation of memory. More and more countries have laws saying you must remember and describe this or that historical event in a certain way, sometimes on pain of criminal prosecution if you give the wrong answer. What the wrong answer is depends on where you are. In Switzerland, you get prosecuted for saying that the terrible thing that happened to the Armenians in the last years of the Ottoman empire was not a genocide. In Turkey, you get prosecuted for saying it was. What is state-ordained truth in the Alps is state-ordained falsehood in Anatolia.

This week a group of historians and writers, of whom I am one, has pushed back against this dangerous nonsense. In what is being called the “Appel de Blois”, published in Le Monde last weekend, we maintain that in a free country “it is not the business of any political authority to define historical truth and to restrict the liberty of the historian by penal sanctions”. And we argue against the accumulation of so-called “memory laws”. First signatories include historians such as Eric Hobsbawm, Jacques Le Goff and Heinrich August Winkler. It’s no accident that this appeal originated in France, which has the most intense and tortuous recent experience with memory laws and prosecutions. It began uncontroversially in 1990, when denial of the Nazi Holocaust of the European Jews, along with other crimes against humanity defined by the 1945 Nuremberg tribunal, was made punishable by law in France – as it is in several other European countries. In 1995, the historian Bernard Lewis was convicted by a French court for arguing that, on the available evidence, what happened to the Armenians might not correctly be described as genocide according to the definition in international law.

People who indulge in this kind of high-minded overreach by criminalizing particular memories, policies, and thoughts they consider beyond the pale seem to have forgotten the lessons of Stalinism, Maoism, and religious wars of all ages. (I don’t mean to let off the Nazis, who criminalized irredeemable status offenses—being Jewish, Gypsy, Slav, homosexual, genetically disabled, etc.—for which there was no possibility of reeducation, only eventual extermination.)


Filed under Europe, nationalism, religion, scholarship, Turkey