Chapter 17: Bombs

Of all aircraft weapons of World War II, bombs were the most widely used in air-to-ground operations. Bomb design, unlike design of antiaircraft equipment and weapons for air-to-air combat, was little affected by the rate at which planes of the 1940s could travel. Though the speed of the modern bomber made accurate aiming difficult, the swiftness of the ship’s flight concerned Ordnance ballisticians only in preparing appropriate bombing tables. Before the end of the war, progress in aeronautics did introduce changes in the development program when bombers appeared that were capable of carrying 10,000-pound bombs and bigger, and did present new problems of bomb ballistics when stratosphere flights became feasible. But through most of World War II the Ordnance Department’s major difficulty in bomb development was in adjusting to the frequent changes of doctrine of air warfare. Yet no change in strategic or tactical planning lessened the importance of bombs; in relation to weight and cost, bombs had a higher destructive potential than any other one weapon. They required no complicated, heavy, launching devices as did air cannon, and, under favorable conditions of weather, could be dropped on the target from heights from which rocket fire became inaccurate. And before the war ended, the Allied air forces found that small bombs used to strafe in tactical support of ground troops were more effective than machine gun fire.

Developments to 1940

The bomb is the oldest aircraft weapon. The first ever dropped in combat from an airplane fell among a group of Arabs in a Tripolitan oasis on 1 November 1911. How the Arabs felt about it we do not know, but the bombardier, Lieutenant Gavotti of the Italian Army, reported that from his altitude of about 2,300 feet he saw a cloud of black dust and running men. The bomb he dropped was a round grenade, a little larger than an orange, filled with potassium picrate. Holding it between his knees, Lieutenant Gavotti fuzed it, armed it, and dropped it over the side with one hand, while he guided the plane with his other hand.1 Military men everywhere were quick to perceive the promise of this new kind of warfare. Targets unreachable by other means now came within range.

World War I bombers carried grenades and small-caliber shells and before the war was over dropped bombs of 1,000 kilograms. In the United States three kinds of bombs were developed. The largest, weighing from 50 to 1,600 pounds, was the demolition bomb, a light steel case made
of sections welded together and filled with TNT. Its purpose was to demolish buildings by blast, that is, the pressure or shock waves sent out by the explosion. The second type was the fragmentation bomb of about twenty pounds, consisting chiefly of rejected artillery shells, while the third type was the incendiary bomb, loaded with oil emulsion, thermite, or metallic sodium. The three types differed in kind and amount of filler and in thickness of case but had operational features in common. Fins designed to provide stability in flight usually extended almost half the length of the case. Shortly after the Armistice, Americans began work on a safety device, used by the French and British during the war, a mechanism that allowed the bomb to be dropped unarmed and to arm itself in flight. When the bomb left the airplane, a small wind vane in the fuze began to turn and after a certain number of revolutions armed the bomb. In the airplane the vane was restrained by an arming wire threaded through it; the wire was withdrawn as the bomb was released. The arming wire could be left on the bomb if it became necessary to unload over friendly territory.2

In 1921 the War Department convened a Bomb Board to conduct an extensive program for testing bombs against various kinds of structures and surfaces. The tests, running over a period of two years, provided data that guided the Ordnance Department and the Air Corps through the 1930s. Ordnance engineers strengthened demolition bomb cases by forging them as nearly as possible in one piece, with a minimum of welding, and substituted for the long fins of World War I short box fins that gave greater stability in flight.3 Uniformity of fragment size of the fragmentation bomb was achieved by encasing the body in rings cut from steel tubing or in wound steel coil. For low-level bombing, experiments with means of delaying the action of the fragmentation bomb sufficiently to permit the airplane to get to a safe distance before the bomb detonated produced a parachute attachment in place of fins. The parachute slowed descent and caused the bomb to strike the ground with its axis nearly vertical so that the fragments tended to be scattered above ground instead of being buried. Collaboration with the Chemical Warfare Service developed bombs that could be filled either with a fire-producing substance or with gas or smoke. The filling was the responsibility of the Chemical Warfare Service, the case of the Ordnance Department. The case had thin walls like the demolition bomb but had a burster tube running down its center. Shortly before the United States entered World War II, development of the incendiary bomb became entirely the responsibility of the Chemical Warfare Service.4

The filling for demolition and fragmentation bombs was trinitrotoluene. TNT was eminently stable, capable of being stored for long periods of time without deterioration, and, as it was relatively insensitive to blows and friction, it could be safely handled and shipped. It was easily melted for casting into bombs. Another virtue was its ready susceptibility to detonation by tetryl or the other highly sensitive explosives used in boosters. At the beginning of World War II, shortage of TNT for a time necessitated substitution
in large bombs of amatol, a mixture of TNT and ammonium nitrate. Amatol had slightly less shattering power—brisance—than TNT, and somewhat less sensitivity to detonation. Later, increased production permitted the use of straight TNT.5

Blast Versus Fragmentation, 1940–41

As World War II approached, the War Department, realizing the need for speeding the manufacture of demolition bombs and for making interchange of bombs possible between the Army and the Navy, called together a committee made up of representatives from the Navy Bureau of Ordnance, the Army Ordnance Department, and the Air Corps. On the committee’s recommendation, the Army’s 600-pound and 1,100-pound demolition bombs were discarded, and new 500-pound and 1,000-pound bombs that could be carried on aircraft of either service were standardized.6 In determining basic policies, a further and more far-reaching step was taken early in 1941 with the creation of a subcommittee of the Joint Aircraft Committee. The latter had been established in September 1940 to insure systematic and equitable allocation of aircraft between the British and the US Air Corps. The function of the special subcommittee, composed of members of the Ordnance Department, the Air Corps, the Navy, and the Royal Air Force, was to recommend standard types of aircraft bombs and test programs for developing them.7 From this committee’s discussions first emerged the arguments pro and con on whether blast effects of bombing were more destructive than fragmentation effects. Long after the committee ceased to be active, some controversy over this matter endured and, indeed, majority opinion swung back and forth several times during the war.

The findings of the subcommittee were initially influenced by the experiences of the British in the Battle of Britain. The British member, Group Captain Charles Crawford, thought American bombs too fragile and therefore likely to fracture on impact, becoming duds of a low order of detonation. Largely because of this opinion and Group Captain Crawford’s report that fragment damage was in many instances greater than the blast damage inflicted by German bombs dropped on England, the subcommittee recommended that a series of bombs with walls thicker than those of the old demolition bomb be developed by the Ordnance Department in 250, 500, 1,000, and 2,000-pound sizes. They would be about 30 percent high explosive by weight, in contrast to the demolition bomb’s 55 percent. Following British nomenclature, they were called general purpose bombs, since the burst of their thick cases into damaging fragments would make them effective against a variety of targets. For targets that general purpose bombs could not penetrate, such as concrete fortifications and the decks of most ships, the subcommittee recommended that the Ordnance Department develop 500-pound and 1,000-pound semi-armor-piercing bombs, and that the Navy develop armor-piercing bombs for use against the very heavily armored decks of capital ships and depth bombs to be used against submarines. Two types of special purpose bombs already in existence were recommended for standardization, the 20-pound
fragmentation bomb as the M41, and the 100-pound gasoline incendiary bomb as the M47. Early in March 1941 the subcommittee’s final recommendations on the standardization of aircraft bombs, “Case 217,” were approved by the Joint Aircraft Committee.8

The general purpose bombs, modeled on the British thick-walled bombs of low-explosive content, had been accepted with some reservations by the American members of the subcommittee. Tests of United States prewar demolition bombs, which had a 55 percent explosive charge, had not indicated any great degree of malfunction on impact.9 Moreover, reports coming from London and Berlin indicated in the spring of 1941 that German bombs, with an explosive charge of about 50 percent and a case no stronger than that of the American demolition bomb, accomplished more by blast than did the British bomb by fragmentation. An observer in London reported: “The British have learned by bitter experience that the havoc caused by blast is far more destructive in towns and cities than the localized splinter effect and relatively little blast effect of small, thicker-walled bombs.”10 Finally, with a shift of earlier attitudes, the British themselves supported the case for blast. On 17 April 1941 the RAF dropped on Berlin the famous 4,000-pound blockbuster. Since it had a thin case, it could carry 2,990 pounds of high explosives and the blast effect was unlike anything seen before. The bomber crew reported that the shock could be felt in the aircraft as high as 14,000 feet and that flame, debris, and smoke were seen to spread over large areas.11

Impressed by the reports of the destruction inflicted by this tremendous bomb, General Arnold, Deputy Chief of Staff for Air, requested the Chief of Ordnance to develop a 4,000-pound light-case bomb not later than 1 July 1941. The bomb that resulted, developed on schedule and tested between 2 and 5 July, weighed 4,166 pounds of which 3,221 pounds was the weight of the explosive charge. Its performance was very satisfactory, but the Air Corps retreated from its position, stating that it would rather carry two 2,000-pound GP bombs than one 4,000-pound blast bomb. General Barnes and General Somers, then chief of the Technical Staff, also doubted whether the 4,000-pounder would be as effective as two 2,000-pound bombs. Nevertheless, Robert A. Lovett, Assistant Secretary of War for Air, sensed the importance of the larger bomb and, on a directive from the War Department, development continued. In August the 4,000-pound light-case bomb was standardized as the M56.12

A more immediately productive result of reports from abroad in 1941 was the
development of general purpose bombs containing a larger charge of explosive. The case strength of their thick-walled British prototype that carried only 30 percent explosive was partly a virtue of necessity: limited British forge capacity necessitated fabricating the bomb by casting. American manufacturers could spin and forge steel bombs equal to the English in case strength yet able to take explosive charges of 50 percent. Essentially a modification of the old demolition bomb with a stronger case, general purpose bombs in 250-pound, 500-pound, 1,000-pound, and 2,000-pound sizes were standardized in the fall of 1941.13 The Army estimated that this series would fill about 90 percent of its requirements and could be used effectively against all land targets except those with armor or heavy concrete protection. For such highly resistant targets, the JAC subcommittee recommended a 1,500-pound solid-nosed bomb similar to the Navy’s 1,600-pound armor-piercing bomb and containing about 15 percent explosive filler. As supply of this type could not possibly meet the combined demands of the Army, the Navy, and the British, the 500-pound and 1,000-pound thick-walled general purpose bombs originally authorized by the Joint Aircraft Committee, containing 30 percent explosive filler, were tested to see whether they would fill the requirement. They were found satisfactory and, provided with steel nose plugs instead of nose fuzes and with tail fuzes like those in general purpose bombs, they were standardized as SAP, that is, semi-armor-piercing bombs.14

Chemical Bombs

At the opposite extreme in case strength from the semi-armor-piercing bombs were the thin-walled chemical bombs. Except for photoflash and target-identification bombs for which the Ordnance Department had sole responsibility, development for chemical fillers was assigned to the Chemical Warfare Service. One bomb at first sufficed for incendiary, gas, and smoke purposes. Made of light sheet metal, it was about 70 percent filler. Its blunt rounded nose distinguished it from explosive bombs. Later somewhat modified, it was manufactured in large quantities. It was employed with excellent effect as an incendiary but as a gas bomb was unsatisfactory because its welded construction made it subject to leakage. This did not matter in the case of the incendiaries, which were shipped empty and filled in the field with a gasoline and rubber solution. But gas bombs had to be shipped loaded and then stored for long periods. An attempt to develop a leakproof, gas-filled bomb resulted in a slightly larger and heavier model made of steel tubing, somewhat longer than the 100-pound general purpose bomb but of much the same construction.15

The trend toward larger chemical bombs with thicker cases continued. At the request of the Chemical Warfare Service, Ordnance later developed 500-pound and 1,000-pound sizes for different types of fillers, incendiary as well as gas. After experiments with several thin-case models, Ordnance engineers in the end simply converted general purpose bombs to chemical by welding in longitudinal burster walls and by making holes in the base plugs through which the cases could be filled. The designers took great care to avoid even the most minute crevices, especially at the filling hole, which was closed by a special plug and gasket. The burster, consisting of a waterproof fiber tube filled with about two and a half pounds of tetrytol, was given a tight seal. With these changes, general purpose bombs taken from existing production lines became chemical bombs. They proved strong enough to withstand shipment and rough handling and had much better ballistic characteristics than the first chemical bombs.16

Lt. M. S. Crissy with the first bomb to be dropped from the air, San Francisco, California, January 1911. The Wright airplane pilot is Philip O. Parmalee

Bombs developed by the Ordnance Department

Fuzes

The fuzes for all bombs were agreed upon by the Joint Aircraft Committee. General purpose bombs had both nose and tail fuzes. The nose fuze functioned when a striker head plunged a firing pin into an explosive train consisting of primer, detonator, and booster. The tail fuze, the purpose of which was to detonate the bomb if the nose fuze failed to function, was activated by a plunger operated by inertia and was the same for all bombs except for its arming vane shaft. The vane shaft varied in length according to the size of the bomb, so that the vane would be positioned sufficiently to the rear of the bomb body to be in the airstream. The tail fuze reduced the percentage of duds to about 0.1 percent. Another use for this fuze became

increasingly important. In dive bombing, the bomb had to become armed in a hurry.17 Designers reduced the revolutions of the arming vane from the 675 required in the earlier tail fuzes, to 175 in a new series, and then to only 18 revolutions. In addition, Ordnance developed for skip bombing or “masthead” bombing of ships a series of very sensitive fuzes that could delay explosion either from 4 to 5 seconds, from 8 to 11 seconds, or, later, 8 to 15 seconds, depending on the primer detonator used. These highly sensitive fuzes were produced in great quantities in 1943 and were popular in the theatres. Used with the 500-pound bomb they were credited with helping to sink the Japanese fleet. As the rapidity with which they became armed made them dangerous to use on carrier-based aircraft, at Navy request the Ordnance Department developed a series in which the arming time was increased from 18 to 150 revolutions.18

Early in 1943 an urgent requirement arose for bombs fuzed for a delay of from one hour to twenty-four hours. Maj. Gen. James H. Doolittle wanted them to hamper and restrain workers unloading ships at enemy dock installations in North Africa; Lt. Gen. Simon B. Buckner, Jr., needed them to keep the Japanese confined to their dugouts in the Aleutians.19 At the time, the Ordnance Department was developing two types of long-delay
fuzes. One with an adjustable clockwork mechanism proved unsatisfactory because the enemy could stop clock mechanisms with magnets or by injecting acid into them. The other, a chemical fuze, was still in the experimental stage when the sudden demand from the theatres made quick action necessary. A copy of the British No. 37 Mark IV pistol was hastily tested and standardized in three lengths. A celluloid disc restrained its cocked striker. When the fuze became armed an ampoule of acetone broke and dissolved the celluloid. The delay time could be varied from 1 to 144 hours by changing the concentration of the acetone or the thickness of the disc, or both. To prevent the enemy from withdrawing the fuze, it was booby trapped. This introduced a factor of unreliability and danger. The bomb could explode in midair if the downward flight unscrewed the anti-withdrawal feature and caused it to function. Modifications aimed at greater sturdiness and safety followed, but the Air Forces still raised objections, especially to the anti-withdrawal feature, which prevented the bombs from being defused if a bombing mission were canceled.20

The Ordnance Department also worked out countless variations of fuzes to meet the needs of the using services and to keep pace with bomb and aircraft development. In 1944 new high-level “stratosphere” bombing presented problems just as the early low-level bombing had done. For example, the B-29 bomber released many bombs simultaneously from a great height, so that with quick-arming nose fuzes they sometimes bumped into each other and detonated just below the ship. Increasing the number of threads on the fuze arming screw and striker gave a longer arming time. New delay elements were incorporated both in this nose fuze and in the standard tail fuze to adapt them to use with new types of bombs such as the VB-1 (Azon-1). For glide bombs, which approached the target at a flat impact angle and sometimes had shrouded noses and special tails, Ordnance engineers designed a fuze in which the arming screw, connected with an anemometer vane mounted on the side of the bomb, entered the striker at 90 degrees instead of the top. As the war ended work was proceeding on special fuzes with long arming distances for robot bombs.21

Of the specialized fuzes, one of the most significant was the diaphragm fuze that operated by blast to explode the bomb above ground. First considered for use in the 20-pound fragmentation model, it was copied from a British fuze that utilized the air blast from a preceding bomb or an air cushion effect from the ground. A very light firing pin was secured to a flexible, slightly convex metal diaphragm. When pressure snapped the diaphragm to a concave position, the fuze functioned. A number were tested in 20-pound fragmentation bombs, some dropped singly and some in pairs, but the results were so disappointing that further development was abandoned. On the other hand, the British, who early in the war had had the notion that a bomb burst in the air would be especially effective because of less shielding by buildings, began experiments with the blast-operated fuze in general purpose bombs and aroused the interest of US Air officers. Consequently, at the request of the Army Air Forces, the Ordnance Department modified its earlier models for trial in general purpose bombs. The new type, incorporating features that made for greater safety and even more sensitivity, was tested with 100-pound, 500-pound, and larger GP bombs. So fuzed, 500-pound bombs dropped in threes would detonate at approximately 25 feet, vertically, from each other. The fuze was standardized late in 1944 as the M149 and, at the urging of the Army Air Forces, the Ordnance Department gave it high priority.22

Except for fuzing, World War II saw few changes in the general purpose and semi-armor-piercing bombs that had been standardized in the fall of 1941. In order to make the 500-pound, 1,000-pound, and 2,000-pound sizes more effective against water targets, the Navy’s Mark 30 hydrostatic fuze, which functioned by water pressure, was adopted for both Army and Navy use as the AN-MK 30. To accommodate it, the bombs were given a larger tail fuze cavity and assigned new designations.23 Another minor change followed an AAF discovery that the anti-withdrawal feature of the long-delay tail fuzes could be circumvented by removing the base plate or adapter-booster; Ordnance engineers added locks for these parts.24 In the first year of the war the Air Forces used in most operations the 500-pound and 1,000-pound general purpose bombs, which would destroy such vital targets as concrete docks, steel bridges, and light cruisers. The 500-pound was the bomb that General Arnold credited with sinking about 39 Japanese ships in the Makassar Strait.25

The New Role of Fragmentation Bombs

When the United States forces began to move forward the need for a bomb to be used against troops became clear, and the theatres began to demand the 20-pound fragmentation bomb. Standardized in 1940 and adopted by the Joint Aircraft Committee early in 1942 as the AN-M41,26 this small bomb weighed not more than 23 pounds, even with a parachute attachment. It was not dropped singly, but by sixes in a cluster that would fit in an airplane’s 100-pound bomb station. Clustering was made possible by the use of an adapter consisting of a hollow rod to which the bombs were wired. When the adapter was released from the aircraft an arming wire was pulled, activating a cartridge with a steel slug that cut the wires holding the bombs. The bombs fell free, arming themselves with their own arming vanes. Later design modification eliminated the cartridge and substituted clamped straps
for the wires.27 “Wicked little weapons,” according to Brig. Gen. George C. Kenney, they proved their value in the battle for New Guinea and became increasingly popular. If accurately placed, they could harass front-line infantry and disrupt lines of communication far more completely than could machine gun fire. They were especially effective against parked aircraft, airdromes, supply trains, and encampments. By the spring of 1943 the effectiveness of fragmentation bombs was so well established that the Army Air Forces requested the development of new types, and before the war was over even fighter craft were supplied with them.28

The parafrag bomb in action. Old Namlea Airdrome, Boeroe Island, Netherlands Indies, is shown during a low-level bombing attack by the U.S. Fifth Air Force.

The first new antipersonnel bomb was a 4-pounder copied from the German 2-kilogram “Butterfly” bomb. It got its name from two curved sections of its case that opened on release and formed wings that rotated in the air and slowed descent. It could be fuzed to detonate in the air, on impact, at any delay up to thirty minutes, or upon being disturbed. Delivered before an attack, it could deny the enemy use of his airdromes, antiaircraft installations, and supply areas. Butterfly bombs were dropped in clusters, either in a cluster of 24 bombs that fitted into a 100-pound station or in one carrying 90 that fitted in a 500-pound station. Engineers at Aberdeen made numerous tests to find the right timing and altitude for the release of the cluster from the aircraft and the opening of the cluster, in order to forestall excessive wind drift or damage to the butterfly mechanism. At the end of a year of tests, both bomb and cluster were standardized but, as the timing of the cluster opening was still not wholly satisfactory, testing continued into the summer of 1945. Fuze failures and the tendency of the cluster to open too soon after release sometimes made experience with the bomb in the field discouraging.29

Two fragmentation bombs developed in 1943 were like the 20-pound bomb but were five and ten times its size. The demand for such bombs came from the Mediterranean theatre, and urgently from the Southwest Pacific Area where Allied officers could testify to the effectiveness of Japanese 60-kilogram fragmentation bombs. As a counterweapon, General Kenney converted general purpose bombs into fragmentation bombs by wrapping them with heavy wire and used them effectively around Japanese airdromes and bivouac areas. In the summer of 1943 the Ordnance Department designed two sizes, a 90-pound model to be used in a cluster of six in a bomber’s 500-pound station and a 260-pound model to fit in the 100-pound station. Both the lighter bomb, standardized as the M82, and the 260-pound, the M81, were similar to the 20-pound bomb, except for their fins. The cluster adapter for the M82 resembled that for the 20-pound except in size. Because one purpose of large fragmentation bombs was to reach
targets enclosed in revetments, the AAF wanted an above-ground airburst, but until proximity and diaphragm fuzes became available late in 1944, the standard nose fuze that functioned on impact had to serve.30 Comparison tests of the single 260-pound M81 with the 20-pound M41 cluster proved that the large fragmentation bomb was better for destroying highly resistant and concentrated targets such as armored vehicles, parked aircraft, and PT boats, but that against unprotected troops and lightly armored vehicles and aircraft the cluster of 20-pound bombs was more effective. In the field, the AAF considered the 20-pound more versatile and useful than either the 260 or 90, so that a proposal of early 1944 to develop fragmentation bombs of 500 and 1,000-pound sizes was not pursued.31

Use of New Explosives

In the war-long argument over the relative merits of blast and fragmentation, the pendulum now began to swing back toward blast, partly because by 1944 the AAF would have aircraft capable of delivering larger and heavier loads. The heaviest bombers available in early 1943, the B-17 and the B-24, could carry bombs up to 4,000 pounds, but only on the wings. The B-29, expected to be ready early in 1944, was designed to carry 10,000-pounders in its interior. These considerations led the Army Air Forces officially to reverse the position taken in 1941 that no bomb larger than 2,000 pounds was required. Achieving greater blast meant renewed emphasis on large thin-case bombs of high explosive content. Reports from England in the fall of 1942 had stressed the effectiveness of such bombs. Hundreds of 4,000-pounders had been dropped over German cities and towns with such satisfactory results that the British were preparing bombs weighing as much as 12,000 pounds. Furthermore, the Air Forces had discovered from combat reports that 2,000-pound bombs had practically no effect against buildings adequately protected by sandbags. In spite of the earlier verdict, the Ordnance Department had already done some work on very large bombs. Mainly because of British interest,32 the AAF had requested a limited number of a 4,000-pound light-case type standardized in August 1941, and early information from Wright Field about B-29 capacity had inspired a request for a 10,000-pound bomb. About 600 of an experimental model 10,000-pounder were actually manufactured before the project was canceled. When Air Forces interest in large bombs revived, tests of the 4,000-pound bombs at Aberdeen indicated that loading with new and more powerful explosives was the simplest means of increasing blast effect within size limitations.33

The earliest departure from TNT for bomb fillings was cyclonite or RDX, an explosive long known for its great power and brisance but generally considered too sensitive. The British had developed a method of desensitizing it by mixing it with beeswax and had used it with “terrible” effect34 in the 4,000-pound bomb the RAF dropped on Berlin in April 1941. The following summer Air Marshal Arthur T. Harris had pressed for large-scale production of RDX in America. The United States Navy was also interested in the explosive because of its effectiveness underwater, especially in a mixture with TNT and aluminum called torpex. But the Ordnance Department, while willing to start production for the British and the Navy, held back until May 1943 on the use of RDX in its own bombs, and then adopted only a less sensitive mixture with TNT, known as RDX Composition B. This first significant change in bomb loading came about as a result of AAF insistence that the large fragmentation bombs developed in 1943 would need the greater power of RDX Composition B to burst their thick walls with the greatest effect. The loading, with TNT surrounds for greater safety, was authorized.35 Though the Army Air Forces liked it and the Joint British-American Committee on Aircraft Ordnance and Armament approved it for all Army-Navy standard munitions, Composition B was used in only about 40 percent of the general purpose bombs. The reasons were two: first, the short supply caused by competition between it and high octane gasoline and synthetic rubber for production facilities and, second, the serious doubts of a number of Ordnance officers about the advisability of using it.36

Throughout World War II the Ordnance Department, believing itself in a better position to evaluate bomb fillings than were the using services, was “extremely cautious in its recommendations for any so-called improved explosive. ...”37 Much of this caution concerning RDX mixtures was justified. The most important weakness of Composition B was its tendency to detonate high-order without fuze action under the shock of impact. This made it undesirable for skip bombing. It was also more prone than TNT prematurely to deflagrate—decompose without detonating—when employed in delayed-action bombs dropped from high altitudes. Sensitivity to shock was not a consideration in the case of fragmentation bombs because they were not intended for delay fuze action on hard impact; but it was obviously a factor in the case of general purpose bombs. And the sensitivity of torpex-loaded depth bombs cost the Navy several serious accidents.38 The problem might have been solved by the new American explosive ednatol, which was used in the blast tests of 1943–44, but by the time
it was in production in any quantity, the war had ended.39

After 1943, of far greater interest than either RDX Composition B or ednatol were the new aluminized fillings. Until World War II the use of aluminum in explosives had not been extensive, and tests in England in 1941 had failed to indicate any significant difference between aluminized explosives and amatol or Composition B. In 1943 the discovery that German bombs containing aluminum were extremely effective spurred research and led to the development of minol, a mixture of aluminum with amatol, and tritonal, a mixture with TNT. For their 4,000-pound bomb the British favored Minol 2, a mixture of 20 percent aluminum, 40 percent TNT, and 40 percent ammonium nitrate, and they requested that it be used in their 4,000-pound bombs being loaded in the United States. The British had learned, by using new methods of blast measurement and interpretation, that Minol 2 produced an area of demolition approximately 80 percent greater than the area obtained with a TNT filler. Ordnance technicians had independently arrived at a similarly high opinion of the blast effect contributed by aluminum by comparing the performance of 2,000-pound and 4,000-pound bombs loaded with minol, TNT, ednatol, and the RDX mixtures. As between minol and tritonal, they preferred tritonal, which contained no ammonium nitrate, because when even the slightest degree of moisture was present in the air, aluminum acted on ammonium nitrate and produced “spewing”—the evolution of hydrogen gas—and even explosions. Tritonal was much safer, and the British were won over to it.40

When reports on the successful loading of 4,000-pound British bombs with tritonal at the Nebraska Ordnance Works came to the attention of the Army Air Forces, a request followed for further testing of the new explosive, especially with a view to using it in large, light-case bombs for jungle warfare. Ordnance engineers, comparing tritonal with Composition B and TNT, found it almost equal to Composition B in peak pressure value, yet as insensitive as TNT, and hence safe to load and use. After these tests, the Ordnance Committee recommended that tritonal supplant TNT as a loading in all general purpose and light-case bombs.41 Other bombs were filled with explosives suited to their particular purposes. For fragmentation bombs, RDX Composition B continued to be the preferred filling because it had more brisance than tritonal. The 2,000-pound semi-armor-piercing bombs developed early in 1944 were loaded with picratol, a mixture of TNT and ammonium picrate, or with Explosive D, which was of all explosives the least sensitive to shock and friction and was therefore the best to mix with TNT in a bomb that had to withstand severe shock and stress before detonating.42

Use of Air Bursts

Loading large, light-case bombs with the new aluminized explosives was one way to increase blast effect. Another way was air-burst fuzing. The idea that a bomb
would be more effective if it were exploded in the air rather than on the ground grew out of abstract mathematical work carried on by NDRC on the theory of the interaction of shock waves: a special kind of nonacoustic reflection from the ground, known as the Mach effect, redistributed the energy of the explosive and widened the area affected by it. The theory was later supported by reports of observers in London who witnessed the great destruction wrought by German V-1 bombs that had struck the tops of trees and exploded above ground. Proximity fuzes promised to provide means of exploding bombs at roof-top level or above. As the VT fuze project got under way43 Ordnance officers consulting with scientists of NDRC concluded that a tail proximity fuze for the large, light-case bombs could be developed in a fairly short time. This plan was soon shelved in favor of an effective nose proximity fuze that would either produce air burst of itself or activate a new, supersensitive tail fuze of the cocked-firing-pin type.44

From NDRC studies begun in 1941 several experimental types of nose proximity fuzes evolved, of which the most promising were the T50 and T51. They provided a burst height of 60 feet over water when released from 10,000 feet or less, and of 18 to 42 feet when released over ground, depending on reflectivity and terrain. Following NDRC’s basic research, the Signal Corps, in coordination with the Ordnance Department, carried on work on these fuzes until late 1944, when Ordnance was given responsibility for the fuzes.45 Development was slow, both because of the very nature of the device and because it had to be adapted to bombs of various kinds and sizes and to use in new high-altitude, high-speed aircraft. One of the most difficult problems was to allow for the correct timing between drop and arming, that is, the “minimum safe air travel” for use in various tactics such as high-level, low-level, and naval bombing. One answer was a new air-arming mechanism that was given a considerable range of safe air travel by making a simple adjustment at the factory or in the field. Without waiting for entirely satisfactory solutions to this and other problems, because of intense interest in the theatres, the War Department authorized limited procurement of the T50 in late 1943. Combat tests were postponed by a decision of the Joint Chiefs of Staff forbidding employment of the fuze over land until October 1944, and then AAF distrust of the fuze caused further delay. It was not employed until February 1945 when the Seventh Air Force dropped proximity-fuzed fragmentation, general purpose, and chemical bombs at Iwo Jima. Soon afterwards the T51 was also tested in combat.46

Of the two, the Ordnance Department preferred the T51, believing it more reliable than the T50 in producing the right height of burst and better functioning because less sensitive. It was also more versatile. Whereas the T50 was limited to 500-pound bombs, the T51 model would provide air burst on all bombs that normally took the AN-M103 nose fuze, up to the light-case 4,000-pounder. Designated the T51E1 after minor modifications, the fuze was standardized in June 1945 as the M166.47 The largest bomb for which it was adopted was the 2,000-pound general purpose AN-M66. It was tested with the 4,000-pound bomb but caused too high a burst.48

The Search for More Powerful Bombs

By September 1944 some of the 4,000-pound light-case bombs were loaded and in the theatres, but the Air Forces had made little use of them. No use whatever had been made of the 10,000-pound. With the appearance of the B-29, which could carry in its interior bombs 125 inches long and 50 inches in diameter, however, the Army Air Forces decided that very large bombs would be desirable. A request for 10,000-pound bombs of the light-case, general purpose, and semi-armor-piercing types to fit the B-29 bomb bay was followed by one for a 4,000-pound general purpose bomb to be used to penetrate the very thick bombproof structures that the Japanese were expected to erect to protect their main positions. Ordnance designers, having anticipated the need of a 4,000-pound general purpose bomb of this type, had a model ready by the end of the year. Concerning the 10,000-pound bomb of the general purpose type containing a 50 percent explosive filler, they had serious reservations, based on belief that the length limitation of 125 inches prohibited a true semi-armor-piercing bomb of more than 5,500 pounds, and that a heavier one would lack the flight or penetration characteristics to be expected of its weight. The same consideration applied to the general purpose 10,000-pound bomb. Ordnance Research and Development Service was willing to undertake the development of both types but pointed out to the AAF that neither would have characteristics anywhere near ideal. The truth was, the B-29 could not carry in its bomb bay an effective 10,000-pound bomb. This fact was admitted within the Air Staff itself, and for the time being the development of very large bombs was necessarily stalemated.49

Lacking “super-super” blockbusters and the airplanes to carry them, the Air Forces had to depend on the bombs already on hand to meet the tremendously increased requirements during and after the invasion of Europe. Improvisation had to serve. To increase payloads of all aircraft, as well as to fill efficiently the huge racks of the B-29, the Ordnance Department designed adapter clusters that would hold two or three bombs and fit in a station designed
for one bomb. One model held three 100-pound GP bombs, another two of the 250-pound size, and another two 500-pounders. In this way, bomb loads were increased from 50 to 200 percent.50 For low-level bombing the Ordnance Department supplied general purpose bombs with anti-ricochet devices—parachute assemblies and a prong or nose spike that stuck in the ground and kept the bomb from bouncing. But for the penetration of heavily fortified German defenses, such as concrete structures with roofs from 10 to 20 feet thick, something more powerful was needed than any bomb or rocket then in use. In this exigency Ordnance engineers pushed forward modifications of 2,000-pound GP bombs to incorporate the “shaped charge” or Munroe principle.51

Also called the “hollow charge” principle, it had been applied to small fragmentation bombs as far back as 1941. Ordnance designers, modifying the 20-pound M41 in this way for use against tanks, had obtained an intense forward jet along the longitudinal axis of the bomb and had succeeded in penetrating 3.5-inch armor plate. But difficulty with the fuze made the model unacceptable.52 Late in 1942 the Army Air Forces, having learned that the British had used the Munroe principle in a “CS” bomb designed to defeat capital ships, had asked Ordnance to develop a large shaped-charge bomb. But Air Forces interest in the project was short lived, and it was canceled less than a year after it had begun. In the interim Ordnance designers had produced two bombs that corresponded in size to 2,000-pound and 4,000-pound demolition bombs. Because of the difficulty of loading shaped-charge bombs to conform to those weights, they were designated not by pounds but by inches in diameter. The smaller was the 23-inch T1, the larger, the 34-inch T1. Four models of each, shipped to Aberdeen after the project was canceled, were there in May 1944 when the AAF asked for the reactivation of the 23-inch T1. Tests indicated need of further development. While that work was going on, the Navy Bureau of Ordnance asked the Army and NDRC to participate in a project to develop shaped-charge general purpose bombs. Development began on shaped-charge 100-pound, 500-pound, 1,000-pound, and 2,000-pound models, and, as the Army Air Forces also wanted these bombs, they were given an “emergency urgent” rating. Nevertheless, they did not get into combat. Testing continued into the summer of 1945.53

The Role of Pyrotechnics

In the last year of the war bomb development was affected not only by the need to overcome strong fortifications but also by changes in Air Forces doctrine. One example was the increased use of incendiaries. By August 1944 the AAF Board had come to the conclusion that “where there is vulnerability to fire, the damage by fire is greater than by demolition,”54 a conclusion, to be sure, that an Ordnance observer had reached during the London blitz of 1940 but which the Ordnance Department had not acted upon. As primary responsibility was shifted to Chemical Warfare Service in November 1940, the decision in 1944 to increase the incendiary bomb program under the highest priority affected Ordnance very little. More important for the Ordnance Department was the change in AAF doctrine that initiated 24-hour bombing operations. Night bombing, always favored by the Royal Air Force but hitherto opposed by the United States, gave new importance to pyrotechnics. Flares and signals to be released or fired from aircraft or projected from the ground had been an Ordnance Department responsibility since 1920. The signals were cases filled with different kinds of compositions that would produce colored smoke or fireworks effects. Especially important were hand signals for downed fliers. Aircraft flares came closer to the usual bomb design. Aircraft flare AN-M26, designed to provide illumination for night bombardment, contained its illuminant in a round-nosed, finned-tail cylinder and developed 800,000 candlepower for a period of about three minutes. It had a drag sleeve that slowed its descent and a mechanical time fuze that functioned the illuminant at a predetermined time after release. A very much smaller parachute flare was employed for reconnaissance, and a tow-target flare towed by an airplane provided a practice target for antiaircraft gunners. The case designed for the M26 flare was versatile. It was modified at different times to drop “chaff” or “window”—metal straw for jamming enemy radar and thus protecting a bomber from flak—and propaganda leaflets, although for the latter the closed adapter clusters used for butterfly bombs eventually proved preferable.55

The most important developments in pyrotechnics concerned photoflash bombs for high-altitude night photography and markers to identify targets at night. The prewar M46 photoflash bomb had the round-nosed shape of a chemical bomb, weighed about 50 pounds, of which half was the flashlight powder, and was functioned by a mechanical time fuze. It gave a light of 500 million candlepower. The powder consisted of an oxidant, potassium perchlorate or barium nitrate, combined with a fuel mixture composed of magnesium and aluminum. The Army Air Forces wanted a photoflash bomb that would give more light at high altitudes and that would be less susceptible to detonation by flak. The Ordnance Department attempted to meet the first requirement by furnishing two experimental large-sized models, one containing 50 pounds of flashlight powder,
and the other containing 100. To provide greater safety from flak, pyrotechnics experts tried two methods, a less sensitive powder and a bomb case with thicker walls. Loading a “safe photographic powder,” developed by the British, into both the M46 case and the 250-pound general purpose bomb case failed to provide the answer, as the British powder gave less light, pound for pound, than the American. Hence the Ordnance Department concluded that solution of the problem lay in the heavy-walled case. Finally, NDRC was called in and established the relationship between case strength and charge weight and composition. Picatinny Arsenal experimented with different combinations of case, filler, and initiating system. The result was a photoflash bomb that produced approximately three times as much light as the M46. Because of urgent need for it in the theatres, its development was given an “A” priority late in December 1944, but further research was required and continued into the postwar period.56

Target identification bombs grew out of a technique evolved by the British to improve the accuracy of their night bombing. A “Pathfinder Force” equipped with special navigational aids flew over a target in advance of the attacking force and dropped various kinds of candles and flares, some to illuminate the general area and others to mark with color the special target. One munition designed specifically for this work was a stabilized bomb that ejected sixty-one pyrotechnic candles at a predetermined altitude. In the United States the earliest research on target identification markers produced five bombs of this kind. All were a modification of the 250-pound general purpose bomb and differed one from another only in the type of candle they contained. A mechanical time nose fuze caused the bomb to eject its candles at the moment when a mechanical time flare fuze ignited them. The candles were small flares, about a foot long and one and one half inches in diameter, that burned with either red, green, or yellow light for about three minutes. Each target identification bomb carried sixty-one of these signal candles, which together made a pattern of colored light approximately 100 yards in diameter around or on a target and were designed to be visible from altitudes as high as 35,000 feet. To keep the candles from being disturbed while they were on the ground, one type of candle had in its case a small cast-iron cylinder containing black powder that would ignite at the end of the burning time of the candle, that time being from one to two minutes. These sporadic explosions were intended to keep the enemy from disturbing the candles as they lay on the ground.57 As the AAF extended its night operations, especially low-level bombing and strafing of illuminated targets by fighters and light attack bombers, need arose for ground-burning flares that would produce a minimum amount of smoke and thus leave the targets as clear as possible. For this purpose the Ordnance Department developed flare
bombs loaded with smokeless units.58 At the end of the war Army and Navy experts agreed that future developments must be aimed at greatly increasing the candlepower, burning time, and visibility of all pyrotechnics, especially the photoflash bomb.59

Problems of High-Altitude Bombing

Meanwhile, the increased heights at which new types of aircraft could operate introduced a new problem in bomb design. “Stratosphere” bombing tests conducted at Muroc Army Air Base in the summer of 1944 profoundly affected the future of all air-to-ground munitions, bombs and pyrotechnics alike. Ordnance ballisticians found that bombs dropped from 35,000 feet, the ceiling of the B-17’s used in the test, behaved quite differently from those dropped at lower altitudes. The fin structures did not stand up well, and the bombs, especially the 1,000 and 2,000-pounders, were unstable in flight. This discovery led to the development of heavier fins as well as parts for strengthening the fins on bombs already in the theatres. A result more significant for the future was the decision of the Chief of Air Staff to enlarge and elaborate the stratosphere bomb-testing program in 1945. With the B-29’s then available, the Ordnance Department was able for the first time to prepare bombing tables for altitudes above 35,000 feet. In the unusually clear air at Muroc the ballistic camera provided accurate data not only for extreme altitudes but for plane speeds faster than any previously known. The future design of both bombs and fuzes would have to be adapted to altitudes up to 60,000 feet, to plane speeds of 600 miles an hour, and to temperatures as low as −65° F.60
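The scale of the ballistic problem can be suggested with a simple, hedged calculation. The Python sketch below is illustrative only and is not the Ordnance Department’s method: the actual bombing tables were built from measured drop data, with corrections for drag, bomb shape, and atmosphere. It merely estimates, for a bomb falling in a vacuum, the time of fall and the forward travel at release altitudes and aircraft speeds of the kind mentioned above.

```python
# Illustrative vacuum-trajectory estimate (no air resistance).
# Not the method behind actual World War II bombing tables, which
# rested on measured drop data and drag corrections.
import math

G = 32.174  # acceleration of gravity, ft/s^2


def vacuum_drop(release_altitude_ft: float, true_airspeed_mph: float):
    """Return (time of fall in seconds, forward travel in miles), ignoring drag."""
    fall_time = math.sqrt(2.0 * release_altitude_ft / G)
    speed_fps = true_airspeed_mph * 5280.0 / 3600.0
    forward_travel_miles = speed_fps * fall_time / 5280.0
    return fall_time, forward_travel_miles


if __name__ == "__main__":
    for altitude, speed in [(35_000, 300), (60_000, 600)]:
        t, x = vacuum_drop(altitude, speed)
        print(f"Release at {altitude:,} ft, {speed} mph: "
              f"falls ~{t:.0f} s, travels ~{x:.1f} miles forward")
```

Even this crude estimate, roughly 47 seconds of fall and nearly four miles of forward travel from 35,000 feet at 300 miles an hour, growing to about ten miles at the 60,000-foot, 600-mile-an-hour conditions cited above, suggests why trajectory data for the new altitudes and speeds had to be measured afresh.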

The Development Program, 1945

With these adaptations in mind, the Ordnance Department directed its long-range program for fuze development toward more versatile fuzes, with the ultimate goal a single fuze that would serve every purpose by adjustment of arming times, delay times, sensitivity, and the like. The trend in bomb development generally was toward fewer but more effective and more accurate types for use in future airplanes of higher ceilings, faster speeds, and greater carrying capacity. Specifically, it was toward larger and more powerful bombs.61

In Lancaster heavy bombers, modified for the purpose, the Royal Air Force was, by June 1944, using a 12,000-pound bomb with tremendous effect. The blast bomb of this size, with approximately 75 percent explosive content, caused entire buildings
to disintegrate and collapse into rubble.62 In addition to this giant, the British were preparing early in 1945 a new 12,000-pound bomb called “Tallboy,” and a 22,000-pound bomb, the “Grand Slam,” to destroy heavily fortified targets such as U-boat pens and underground factories. The 12,000-pounder and the 22,000-pounder were relatively heavy-walled bombs with approximately 43 percent explosive and were fuzed in the base only. The Army Air Forces saw the possibilities of huge earth-penetrating bombs of these sizes, envisaging them as large general purpose bombs to be employed for both blast and fragmentation effect, as well as to cause cave-ins and earth shock and to reach vital underground installations.63 After a study of the so-called long bombs, the Air Forces Board asked the Ordnance Department, which already had drawings of the British bombs and fuzes, to hasten engineering studies on medium and large “pressure vessels,” the 12,000-pound and 22,000-pound general purpose bombs. The models that resulted differed from their British prototypes in being made of steel forgings welded to rolled steel plate, instead of cast steel. The smaller, 21 feet long, carried about 5,600 pounds of explosive; the larger, 25 feet long, carried about 10,000 pounds. The main difference in the design of these bombs and that of the conventional aircraft bombs was in the tail-and-fin assembly, a slender, hollow cone that took up almost half the total length of the bomb and carried four radial airfoil fins. There were three tail fuzes, but no nose fuze. To save time, the test models were equipped with British fuzes and detonators, though the absence of an air-arming feature in the British fuze was a disadvantage. On V-J Day the bombs were still in the testing stage.64

The 22,000-pound British Grand Slam, termed “the most destructive missile in the history of warfare until the invention of the atom bomb,”65 was the largest explosive bomb employed in World War II. By the end of the war the United States had a model nearly twice its size. It weighed about 44,000 pounds, of which 17,600 was high explosive. Though the ratio of explosive charge to weight was only about 41 percent, the Ordnance Department placed this colossus in the general purpose bomb category. It was loaded with tritonal. In design it resembled the Tallboy and Grand Slam, with a tail assembly that took up 122 inches of its total 322. By V-J Day several samples of the experimental model were ready for testing whenever the B-36 bomber became available.66

In the meantime Ordnance engineers, studying ways to correct the unsatisfactory features that had been of necessity copied from the British Tallboys, designed an air-arming fuze, and new fin assemblies made of steel instead of aluminum. As the extreme length of the British fins presented a problem of stowage as well as ballistics, the Air Forces suggested collapsible fins. The Ordnance Department objected on the grounds that they were not only liable to failure but would, by the necessary delay in opening, increase the range and deflection errors. On the whole, a long bomb tail was not economical from the standpoint of weight of explosives carried. Yet long fins gave the stability needed to place the bomb on the target. In efforts to solve the ballistics problem, Aberdeen engineers ran supersonic wind-tunnel tests of scaled-down Tallboy models to determine just what ballistic gain was present to offset the loss of space. The answer was not found before the end of the war. By that time the Ordnance Department had initiated a long-range project of research on very large bombs. Significantly for all bomb development, the project covered research on the best size and shape of bomb to fit in the bomb bays of the future. Henceforth there would presumably be a closer relationship between the weapon and its carrier.67

Testimony of the World War II Record

The largest bomb dropped by the AAF in World War II, the 4,000-pound blast bomb, would not fit in the bomb bay of the B-17 or the B-24, the heavy bombers employed in European operations, but had to be carried under the wings. The B-29 could carry the 4,000-pounder comfortably and in quantity, but as that ship did not get into combat until the late spring of 1944, it was pre-eminently a Pacific bomber. By the time the B-29 was operating in large numbers, incendiaries formed the greatest part of its load. The terrible effectiveness of incendiaries had been stressed in interim reports of the United States Strategic Bombing Survey (USSBS), a group of specialists who had been evaluating bomb damage in Europe since shortly after the invasion. Their findings showed that the M47 incendiary of about 100 pounds was twelve times as effective, bomb for bomb, as the 500-pound general purpose bomb against targets classified as readily inflammable, and one and a half times as effective against targets classified as fire-resistant. Another important conclusion of USSBS was that precision bombing of the “pin-point” or “pickle-barrel” type was a myth. Only about 20 percent of the bombs aimed at precision targets fell within the target area—a circle of 1,000-foot radius around the aiming point. The causes were various: weather conditions and enemy opposition, time limitations on training combat crews, and irregularities in equipment. The greatest promise for improvement in accuracy was the guided bomb, for whose development the AAF was responsible. Except for a brief and very successful experience with Azon bombs in Burma, Allied guided bombs had no influence in World War II.68

The performance of the bombs developed by the Ordnance Department was difficult to evaluate, especially since bombers carried mixed loads that contained incendiaries as well as high explosives; but the record permits several conclusions. The semi-armor-piercing bombs encountered targets that defeated them. Fragmentation and general purpose bombs, on the other hand, generally possessed the four salient characteristics required: the ability to be carried by and launched from aircraft; proper flight characteristics under conditions of use; suitability of the design for mass production, handling, and storage; and, most important of all, adequate terminal ballistic effect at the target, that is, power to destroy the objective. Of these the first was largely a matter of aircraft design and thus an Air Forces responsibility. Destructive power also depended in part on the capacity of the bomber to carry a sufficiently big bomb. On the question of flight stability, Ordnance engineers admitted that several of the standard bombs were probably no more than marginally stable and needed more fin area. Production engineering problems were largely solved, especially for the general purpose bombs, the cases of which could be produced in quantity and modified to many uses. For example, the 250-pound size was adapted for pyrotechnics, the 500 and 1,000-pound cases were readily converted to chemical bombs, and the standard 1,000-pound general purpose bomb became an Azon bomb when fitted with a special tail. Toward the end of World War II the problem of fitting the proximity fuze into existing bombs led scientists of NDRC to urge that all bombs and fuzes be designed as an entity. For logistic reasons the Chief of Ordnance did not agree, and the newly established project was canceled in spite of NDRC’s belief that it would lead to new weapons of significantly greater effectiveness. For the rest, the application of new principles of design, such as the shaped-charge bomb and the very large bomb, was delayed by the AAF’s lack of interest early in the war. Most of the war was fought with the bombs standardized in 1941.69

Thus, bomb developments from Pearl Harbor onward suffered from want of a sound over-all scheme of employment determined in advance. Yet no nation at peace could establish any proved plan. World War I offered neither Air Forces nor Ordnance Department guidance in a development program for World War II, if only because the aircraft of the earlier period bore scant resemblance to the planes of the 1940s. Between wars no opportunity existed to appraise accurately the relative merits of blast and fragmentation under various circumstances, of blockbusters and showers of small bombs, of incendiaries and high explosives, of semi-armor-piercing types and shaped charges. Tests at Aberdeen during the 1920s and 1930s were at best simulations of combat, so that conclusions derived from that evidence were of necessity subject to frequent change when actual fighting and unanticipated tactical conditions showed earlier assumptions faulty. Hence, during the war, shifts in Air Forces doctrine canceled development projects
halfway completed and substituted new ones that might in turn be quickly abandoned. Experience revealed that many of the numerous types of bombs on hand were not well suited to the purpose for which they had to be used, a purpose quite different from that for which they had been originally designed. Partial divorce of case design from fuze design, and, far worse, disregard in aircraft design of the shape and size of the bombs the ship might have to carry, tended to create confusion that could only be resolved by last-minute recourse to makeshifts. In fact, in the last year of the war improvisation came to be virtually the order of the day.

Still, however short of ideal as munitions, bombs were far and away the most important weapons of the Allied air forces throughout the war. Questioning of the ultimate value of bombarding cities behind the battle lines dropped out of sight. Not only did strategic bombing missions over Europe and, in the last months of the war, over Japan loom large in Allied operations, but tactical bombing and strafing also played an increasingly big part. In air-to-ground attack machine guns and cannon dwindled in importance as the war progressed, while rockets, though coming into ever-wider use, were still too new to rival bombs. And when the atomic bombs were dropped, most of the world concluded that bombs would henceforward be the single most valuable weapon a belligerent could employ.