The Descent Propulsion System: The Only Throttleable Engine in Apollo
How TRW built a hypergolic engine that could throttle from 10% to full thrust—the engine that slowed the Lunar Module from orbital velocity to a gentle hover above the Moon
Every other engine in the Apollo stack operated at one setting: full thrust. The F-1 engines on the Saturn V first stage, the J-2 engines on the upper stages, the Service Propulsion System engine on the Command/Service Module, the ascent engine on the Lunar Module—all of them ignited, burned at their rated thrust, and shut down. The descent engine on the Lunar Module was different. It could throttle. It could run at 10% power or 100% power or anywhere in between, continuously adjustable, because landing on the Moon demanded something no fixed-thrust engine could provide: a controlled deceleration from orbital velocity to zero, ending in a hover at 50 feet above the surface.
The Descent Propulsion System, built by TRW’s Space Technology Laboratories in Redondo Beach, California, was one of the most technically demanding engine developments in the Apollo program. Throttling a liquid rocket engine is fundamentally difficult—the combustion process that works beautifully at one flow rate can become unstable, inefficient, or catastrophic at another. TRW had to build an engine that operated reliably across a 10:1 thrust range, in a vacuum, with no possibility of maintenance, on a vehicle where engine failure meant death.
Why Throttling Mattered
The powered descent trajectory was not a constant-thrust affair. The descent began with the engine at full thrust—approximately 9,870 pounds-force—to decelerate the LM from its orbital velocity of about 5,560 feet per second. This braking phase consumed the majority of the propellant and required maximum thrust to minimize burn time and gravity losses. As the LM approached the landing site, the guidance computer throttled the engine down progressively, reducing thrust as the vehicle slowed and as the descent became more vertical.
During the final approach and hover phases, the engine operated at roughly 25-40% of maximum thrust—just enough to counteract lunar gravity (which required about 2,500 pounds-force for the LM’s weight at that point in the descent) while allowing small thrust adjustments for maneuvering. A fixed-thrust engine at full power would have required the LM to pulse the engine on and off, creating a rough, oscillating descent path that would have made precision landing difficult or impossible.
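The hover-thrust figure can be sanity-checked with a back-of-envelope calculation. The vehicle mass used below is an assumed round number for the late-descent LM (most of the descent propellant burned), not a flight value:

```python
# Back-of-envelope check of the thrust needed to hover near touchdown.
# The ~15,000 lbm vehicle mass is an illustrative assumption.
G_MOON = 5.32        # lunar surface gravity, ft/s^2 (about 1/6 of Earth's)
G_EARTH = 32.17      # standard gravity, ft/s^2

lm_mass_lbm = 15_000  # assumed LM mass late in the descent, lbm

# Weight on the Moon in pounds-force: W = m * (g_moon / g_earth)
hover_thrust_lbf = lm_mass_lbm * G_MOON / G_EARTH
print(f"Thrust to hover: {hover_thrust_lbf:.0f} lbf")

# As a fraction of the engine's ~9,870 lbf maximum thrust:
print(f"Throttle setting: {hover_thrust_lbf / 9_870:.0%}")
```

The result, roughly 2,500 pounds-force at about a 25% throttle setting, matches the low end of the hover range described above.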
The Apollo Guidance Computer commanded the throttle setting continuously during P63 (braking phase) and P64 (approach phase). During P66 (manual landing), the AGC maintained a commanded rate of descent by automatically adjusting the throttle while the commander controlled attitude. The throttle response had to be fast enough to track the guidance commands without lag, and smooth enough not to introduce attitude disturbances into the vehicle.
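The rate-of-descent hold in P66 can be sketched as a simple feedback loop. This is not the AGC's actual algorithm; the gain, the throttle limits, and the mass figure are illustrative assumptions that only capture the idea of automatically adjusting thrust to hold a commanded descent rate:

```python
# Minimal sketch of a rate-of-descent (ROD) hold loop in the spirit of
# P66. NOT the AGC algorithm: gain, limits, and mass are assumptions.
G_MOON = 5.32                   # lunar gravity, ft/s^2
MAX_THRUST = 9_870.0            # full thrust, lbf
MASS_SLUGS = 15_000 / 32.17     # assumed vehicle mass, slugs

def throttle_command(commanded_rod, measured_rod, gain=0.2):
    """Return a throttle fraction that nudges the measured rate of
    descent (ft/s, positive down) toward the commanded rate."""
    hover_thrust = MASS_SLUGS * G_MOON          # thrust to cancel weight
    error = measured_rod - commanded_rod        # descending too fast > 0
    thrust = hover_thrust + gain * MASS_SLUGS * error
    # Keep the command in the low operating range, clear of the gap:
    return min(max(thrust / MAX_THRUST, 0.10), 0.60)
```

Descending faster than commanded raises the throttle above the hover setting; descending slower lowers it, while the clamp keeps the command inside the engine's low-thrust operating range.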
Propellants: Aerozine 50 and Nitrogen Tetroxide
The DPS used hypergolic propellants—chemicals that ignite spontaneously on contact, requiring no ignition system. The fuel was Aerozine 50, a 50/50 mixture by weight of hydrazine (N2H4) and unsymmetrical dimethylhydrazine (UDMH, (CH3)2NNH2). The oxidizer was nitrogen tetroxide (N2O4).
Hypergolic propellants were chosen for one reason above all others: reliability. An engine using hypergolics doesn’t need spark plugs, torch igniters, or any other ignition mechanism that could fail. Open the valves, the propellants meet, they burn. Close the valves, they stop. This simplicity was critical for the descent engine, which had to start reliably after days of cold-soak in space and couldn’t afford a misfire.
The downside of hypergolics was their toxicity and corrosiveness. Aerozine 50 was toxic on contact and by inhalation. Nitrogen tetroxide was a powerful oxidizer that could cause chemical burns and respiratory damage. Handling these propellants during ground operations required hazmat procedures, and any leak in the spacecraft’s propellant system was a serious hazard. The crews trained to recognize the symptoms of propellant exposure and had emergency procedures for cabin contamination.
The propellant tanks were housed in the descent stage, arranged symmetrically around the engine. Two fuel tanks and two oxidizer tanks, each pair on opposite sides of the vehicle to maintain balance as propellant was consumed. Total propellant load was approximately 18,000 pounds—about 8,200 kg. The tanks used a helium pressurant system that forced the propellants toward the engine, since there was no gravity during the early parts of the descent to naturally feed the propellants downward.
The supercritical helium pressurant was stored in a single spherical tank at the center of the descent stage, cooled to cryogenic temperatures. As the helium warmed and expanded during the burn, it pressurized the propellant tanks to maintain consistent feed pressure to the engine regardless of how much propellant remained. The pressurization system had to be precisely calibrated—too much pressure could overstress the lightweight tanks, too little could starve the engine of propellant.
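A first-order sense of the helium demand comes from the ideal gas law: the pressurant must fill the ullage volume left by expelled propellant at the regulated feed pressure. Supercritical helium at cryogenic temperature deviates from ideal behavior, and every number below is an illustrative assumption rather than a flight value:

```python
# First-order estimate of helium needed to pressurize the emptied tank
# volume. Ideal-gas behavior is assumed for simplicity; all figures
# below are illustrative, not DPS flight values.
R_HE = 2077.0   # helium specific gas constant, J/(kg*K)

def helium_mass_needed(ullage_volume_m3, feed_pressure_pa, gas_temp_k):
    """Ideal-gas helium mass to hold the ullage at feed pressure:
    m = P * V / (R * T)."""
    return feed_pressure_pa * ullage_volume_m3 / (R_HE * gas_temp_k)

# Suppose ~6 m^3 of propellant has been expelled, the regulator holds
# ~1.7 MPa in the tanks, and the helium has warmed to ~250 K by the
# end of the burn (all assumed figures):
m = helium_mass_needed(6.0, 1.7e6, 250.0)
print(f"Helium required: {m:.1f} kg")
```

Even this rough estimate lands in the tens of kilograms, which is why storing the helium dense and supercritical, rather than as a warm gas, saved so much tank mass.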
Engine Design: The Ablative Chamber
The DPS engine used an ablative combustion chamber rather than a regeneratively cooled one. In a regeneratively cooled engine (like the F-1 or J-2), the fuel is routed through channels in the chamber wall before entering the combustion zone, carrying away heat and preventing the chamber from melting. This approach works well at a fixed flow rate but becomes problematic when throttling—at low flow rates, the reduced fuel flow through the cooling channels may not remove enough heat, leading to burn-through.
The ablative approach avoided this problem entirely. The combustion chamber was lined with a material that charred and eroded under the heat of combustion, carrying thermal energy away as the ablative material was consumed. The liner didn’t need propellant cooling at any thrust level—it simply ablated faster at higher thrust and slower at lower thrust, naturally accommodating the full throttle range.
The ablative material was a silica-phenolic composite. The chamber wall was built up in layers: the silica-phenolic ablative liner, a structural wrap of glass-phenolic composite, and an external case of titanium for structural support. The nozzle extension was made of columbium (niobium) alloy, which could withstand the exhaust temperatures without active cooling at the nozzle’s expansion ratio.
The ablative design meant the engine was a consumable item—each firing eroded the chamber liner. But the total burn time for a lunar landing mission was only about 12 minutes (the powered descent plus the DOI burn), well within the chamber’s rated life. The ablative liner was sized with generous margin, and post-flight inspections of test engines showed uniform, predictable erosion patterns.
Throttle Mechanism: Flow Control Valves
Throttling was achieved by mechanically varying the propellant flow rate into the combustion chamber. The engine used a set of cavitating venturi flow control valves—variable-area orifices that regulated the fuel and oxidizer flow independently.
A cavitating venturi is a converging-diverging passage through which liquid flows. When the flow velocity at the throat reaches a critical value, the liquid cavitates—tiny vapor bubbles form—and the flow rate becomes independent of downstream pressure. This made the flow rate a function of only the upstream pressure and the throat area, providing precise, predictable flow control that was insensitive to combustion chamber pressure variations.
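The standard relation for a cavitating venturi makes this concrete: once the throat cavitates, mass flow depends only on upstream pressure and throat area. The discharge coefficient, throat size, and pressures below are assumed illustrative values, not DPS hardware figures:

```python
# Cavitating venturi mass flow: once the throat cavitates, flow is set
# by upstream pressure and throat area alone, independent of chamber
# pressure downstream:
#     mdot = Cd * A_throat * sqrt(2 * rho * (P_upstream - P_vapor))
# All numbers below are illustrative assumptions, not DPS values.
from math import sqrt, pi

def cavitating_flow(cd, throat_dia_m, rho, p_up_pa, p_vap_pa):
    """Mass flow (kg/s) through a cavitating venturi throat."""
    area = pi * (throat_dia_m / 2) ** 2
    return cd * area * sqrt(2.0 * rho * (p_up_pa - p_vap_pa))

# Example: 12 mm effective throat, Aerozine-50-like density ~900 kg/m^3,
# 1.6 MPa upstream, small vapor pressure (all assumed figures):
mdot = cavitating_flow(0.95, 0.012, 900.0, 1.6e6, 5e3)
print(f"Fuel flow: {mdot:.2f} kg/s")
```

Note what is absent from the formula: chamber pressure. That insensitivity to downstream conditions is exactly why the cavitating venturi gave predictable flow control across the throttle range.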
The throat area was varied by a pintle mechanism—a movable plug that extended into the venturi throat, changing the effective flow area. The pintle position was controlled by electric actuators commanded by the AGC through the throttle control circuitry. The fuel and oxidizer pintles moved together, maintaining the correct mixture ratio across the throttle range.
The mixture ratio—the ratio of oxidizer mass flow to fuel mass flow—had to remain within tight bounds at every throttle setting. Too fuel-rich and the engine lost performance and could deposit unburned fuel on the chamber walls. Too oxidizer-rich and the combustion temperature spiked, accelerating ablative erosion and potentially damaging the chamber. The flow control system maintained the mixture ratio at approximately 1.6:1 (oxidizer to fuel by mass) across the full throttle range.
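The flow split at a fixed mixture ratio follows directly from the thrust equation. The thrust and specific impulse figures below are assumptions for illustration, not quoted DPS performance numbers:

```python
# Splitting total propellant flow at a fixed O/F mixture ratio.
# Thrust and Isp figures are illustrative assumptions.
G0 = 9.80665   # standard gravity, m/s^2

def flow_split(thrust_n, isp_s, mixture_ratio):
    """Return (oxidizer, fuel) mass flows in kg/s for a given thrust,
    specific impulse, and oxidizer-to-fuel mixture ratio."""
    mdot_total = thrust_n / (isp_s * G0)          # mdot = F / (Isp * g0)
    mdot_fuel = mdot_total / (1.0 + mixture_ratio)
    mdot_ox = mdot_total - mdot_fuel
    return mdot_ox, mdot_fuel

# At roughly full thrust (~44 kN) and an assumed vacuum Isp of ~305 s:
ox, fuel = flow_split(44_000.0, 305.0, 1.6)
print(f"oxidizer {ox:.2f} kg/s, fuel {fuel:.2f} kg/s, ratio {ox/fuel:.2f}")
```

Throttling to 10% scales both flows down by the same factor, which is why the fuel and oxidizer pintles had to track each other so precisely across the range.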
The Throttle Gap: 65% to 92%
The engine was designed to operate in two ranges: from about 10% to approximately 65% of maximum thrust, and at full (100%) thrust. There was a gap—a range between about 65% and 92%—where the engine was not qualified to operate continuously. This “throttle gap” existed because combustion instability (rough, oscillating combustion that could damage the chamber) had been observed in testing at certain intermediate thrust levels.
The instability at mid-range thrust was a manifestation of the fundamental challenge of throttleable combustion. The injection pattern—the way propellants were sprayed into the chamber—was optimized for full thrust. At lower flow rates, the spray pattern changed, the combustion zone geometry shifted, and acoustic resonance modes within the chamber could couple with the combustion process to produce oscillations. TRW worked to suppress these instabilities, but the intermediate range remained a region of concern.
The guidance software respected this constraint. During the powered descent, the AGC commanded full throttle during the initial braking phase, then commanded the engine down through the gap into the sub-65% range for the approach and landing phases. The transition through the gap was rapid—the engine passed through the restricted range quickly rather than dwelling in it. The guidance algorithm was designed to avoid commanding a sustained thrust level within the gap.
In practice, the throttle gap was a constraint that the guidance engineers incorporated into their trajectory planning: the descent profile was shaped so that the throttle-down from full thrust occurred at a specific point in the trajectory, and the engine spent essentially zero time in the gap region. The crew was aware of the constraint but never had to manage it manually.
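The constraint itself can be expressed as a simple guard on the commanded throttle. This mirrors the rule the guidance design had to respect, not the AGC's actual throttle logic; the band edges are the approximate figures from the text:

```python
# Illustrative guard keeping a commanded throttle out of the restricted
# band. This expresses the *constraint*, not the actual AGC logic; the
# band edges are the approximate figures quoted in the text.
GAP_LOW, GAP_HIGH = 0.65, 0.92   # engine not qualified for sustained
                                 # operation between these fractions

def legal_throttle(commanded):
    """Snap a commanded throttle fraction out of the forbidden band.
    A command inside the gap becomes full thrust, reflecting the two
    permitted regimes: full thrust, or the sub-65% range."""
    if GAP_LOW < commanded < GAP_HIGH:
        return 1.0
    return commanded

assert legal_throttle(0.80) == 1.0    # mid-band command -> full thrust
assert legal_throttle(0.40) == 0.40   # low-range command passes through
```

Transient passage through the band during throttle-down is a different matter from sustained operation there, which is why the guard only has to prevent dwelling, not crossing.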
Gimbal System: Pointing the Thrust
The DPS engine was mounted on a two-axis gimbal that allowed the thrust vector to be steered approximately ±6 degrees in both pitch and yaw. The gimbal was necessary because the LM’s center of gravity shifted during the descent as propellant was consumed from the four tanks—any imbalance in consumption rates moved the CG off the vehicle’s geometric centerline, and the engine had to be pointed through the moving CG to avoid creating unwanted torques.
The gimbal actuators were electric motors controlled by the Digital Autopilot. The DAP computed the required gimbal angles every control cycle, based on the estimated CG location and the commanded thrust direction from the guidance software. The gimbal adjustment was continuous and automatic—the crew never commanded the gimbal directly.
The gimbal also provided a degree of thrust vector control during the descent. By pointing the engine slightly off-axis, the DAP could use the engine’s thrust to assist with attitude control, reducing the demand on the RCS jets and conserving RCS propellant. During the braking phase at full thrust, the engine’s gimbal authority was substantial—6 degrees of deflection at 9,870 pounds of thrust produced a significant torque.
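The magnitude of that torque is easy to estimate. The moment arm from the gimbal pivot to the vehicle CG is an assumed illustrative value here, not an LM dimension:

```python
# Rough magnitude of the control torque from gimballing the engine.
# The 5 ft moment arm is an assumed illustrative value.
from math import sin, radians

THRUST_LBF = 9_870.0   # full thrust, from the text
MOMENT_ARM_FT = 5.0    # assumed gimbal-pivot-to-CG distance, ft

def gimbal_torque(deflection_deg):
    """Torque (ft-lbf) from deflecting the thrust vector off the line
    through the center of gravity."""
    return THRUST_LBF * sin(radians(deflection_deg)) * MOMENT_ARM_FT

print(f"Torque at 6 deg: {gimbal_torque(6.0):,.0f} ft-lbf")
```

At full thrust and full deflection this comes to several thousand foot-pounds, orders of magnitude more than the small RCS jets could produce, which is why offloading attitude control onto the gimballed main engine saved so much RCS propellant.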
Testing: You Can’t Test a Moon Landing on Earth
TRW tested the DPS engine extensively on Earth—over 3,000 seconds of test firing across the development and qualification program. The engine was tested at White Sands Test Facility in New Mexico and at TRW’s own test facilities in San Juan Capistrano, California. Test conditions included altitude simulation (vacuum chambers), thermal conditioning (hot and cold soak before firing), and endurance runs exceeding the expected flight duty cycle.
The throttle profile testing was particularly critical. Test firings replicated the expected descent trajectory—starting at full thrust, transitioning through the gap, and operating at partial thrust for extended periods. The engine was subjected to throttle transients faster than any guidance command would produce, testing the stability of the combustion process during rapid throttle changes.
What couldn’t be tested on Earth was the engine’s behavior in the actual one-sixth gravity environment of the Moon, with the specific thermal conditions of the lunar vacuum, after the specific cold-soak and vibration exposure of a translunar flight. The engine that fired during the powered descent had been in space for three to four days, exposed to temperature cycles, radiation, and zero gravity. The propellants had been sitting in their tanks, the helium pressurant had been slowly warming, the ablative liner had been cold-soaked to whatever temperature the descent stage’s passive thermal design produced. No ground test could replicate all of these conditions simultaneously.
The engineering response was margin. The engine was qualified for twice the expected duty cycle. The ablative liner was thicker than the minimum required. The flow control system was tested across wider pressure and temperature ranges than flight would produce. The margins were the buffer between what testing could verify and what the Moon would demand.
Twelve Minutes of Controlled Descent
Six times, the Descent Propulsion System fired for approximately twelve minutes and lowered a Lunar Module from orbit to the surface of the Moon. Six times, it ignited after days of dormancy in space, throttled from full thrust to a gentle hover, and shut down on command when the contact probes touched the regolith. Six times, it operated across its full throttle range, through the gap, and down to the low-thrust regime where the final seconds of each landing played out.
The engine also performed the critical Descent Orbit Insertion burn on every landing mission—a shorter firing that lowered the LM’s orbit to set up the powered descent. And on Apollo 13, the DPS fired twice more: once for the free-return trajectory correction that aimed the crippled spacecraft back at Earth, and once for the PC+2 burn that adjusted the return trajectory to target the Pacific recovery zone. The engine that was designed to land on the Moon ended up saving three lives on the way home.
No DPS engine ever failed in flight. No combustion instability was ever detected during a lunar descent. No throttle transient produced an anomaly. The engine did exactly what TRW designed it to do—decelerate a spacecraft from 5,560 feet per second to zero, smoothly, controllably, with the thrust authority to follow whatever trajectory the guidance computer commanded and the human pilot refined.