National Aeronautics and Space Administration (NASA) has had a long and successful history of contributions toward the nation’s goal of improved aircraft fuel efficiency. Specifically, the NASA Glenn Research Center (GRC) has played a direct role in commercial aircraft propulsion system improvements through concept development, component testing, analysis, and model development for aircraft engine inlets, fans, compressors, turbines, and nozzles. This chapter will focus primarily on aerothermodynamic improvements to aircraft engine systems that were enabled through NASA research efforts.
There are multiple motivations for improving aircraft fuel efficiency and thereby reducing fuel consumption. There is an economic motivation in that fuel consumption is a large factor in the cost of operation of aircraft, which directly impacts the profitability of airlines, aircraft and engine manufacturers, and associated industries. The public benefits from improved fuel efficiency through more affordable travel. Reduced fuel consumption is also related to increased U.S. energy security and lower reliance on foreign oil. Finally, in recent years environmental concerns over global warming and air quality have increased the motivation to reduce fuel burn and the associated carbon dioxide and other emissions. As the nation’s civil aeronautics research agency, NASA has a large stake in ensuring improvements in fuel efficiency of the aviation sector.
Aircraft fuel burn per seat-mile has decreased dramatically over the last 50+ years (Fig. 3.1) (Rutherford, 2012). This improvement can be traced to many aircraft improvements including those in aircraft aerodynamics, vehicle weight, and aircraft engine efficiency. A large fraction of this improvement can be traced to aircraft engine fuel efficiency improvements enabled by increases in engine bypass ratio (BPR), cycle pressure ratio, turbine inlet temperature, and component efficiencies over the past 70 years. Many of these improvements were enabled by research efforts at GRC working in collaboration with NASA Langley Research Center, NASA Ames Research Center, universities, aircraft suppliers, and aircraft engine industry partners.
Figure 3.1 . Average fuel burn for new jet aircraft, 1960 to 2010. (From Rutherford, D. 2012. Overturning Conventional Wisdom on Aircraft Efficiency Trends. The International Council on Clean Transportation, http://www.theicct.org/blogs/staff/overturning-conventional-wisdom-aircraft-efficiency-trends. Creative Commons CC BY-SA 3.0 license, https://creativecommons.org/licenses/by-sa/3.0/legalcode (accessed August 31, 2016).)
Before discussing details of the history of GRC contributions to improved fuel efficiency for aircraft engines, it is important to understand the underlying physics motivating these research and development efforts. The following discussion will walk through the mathematical equations describing aeropropulsion fuel efficiency before the remainder of the paper delves into specific contributions of GRC in this area.
The high-level starting point for any discussion of aircraft fuel efficiency is the well-known Breguet range equation (Eq. (3.1)). In this equation, increased aircraft range can be viewed as a surrogate for reduced aircraft fuel burn for a fixed mission. The Breguet range equation shows that propulsion system contributions to improved aircraft range (and reduced fuel burn) come primarily through reduction in the propulsion system thrust-specific fuel consumption (TSFC, sometimes denoted as SFC) and secondarily from reductions in engine weight:
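In its standard form, and consistent with the variable definitions in the following paragraph, Equation (3.1) reads:

```latex
\mathrm{Range} = \frac{\mathrm{Velocity}}{\mathrm{TSFC}}
\cdot \frac{\mathrm{Lift}}{\mathrm{Drag}}
\cdot \ln\!\left(\frac{W_0 + W_\mathrm{PL} + W_\mathrm{fuel}}{W_0 + W_\mathrm{PL}}\right)
\tag{3.1}
```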
where TSFC represents engine thrust-specific fuel consumption, W fuel is fuel weight, W PL is payload weight, and W 0 is aircraft empty weight. Velocity is the flight speed of the aircraft, and Lift and Drag represent the aerodynamic quantities of aircraft performance. As will be explained later in Section 3.6, engine architectural changes can have a dramatic impact on engine TSFC. Figure 3.2 plots the TSFC benefits resulting from some of these architectural changes, such as the major trend from turbojet to low- and high-bypass turbofan engines that began in the 1960s and has continued to the present day.
Figure 3.2 . State-of-the-art thrust-specific fuel consumption (TSFC) trends with subsonic engine architecture.
TSFC can be further decomposed as shown in Equation (3.2). For a given aircraft flight velocity and fuel energy per unit mass, it can be seen that TSFC is inversely proportional to the overall efficiency η o:
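Writing the fuel energy per unit mass as h and the net thrust as F (symbols assumed here), Equation (3.2) takes its standard form:

```latex
\mathrm{TSFC} = \frac{\dot{m}_\mathrm{fuel}}{F}
= \frac{\mathrm{Velocity}}{h\,\eta_o}
\tag{3.2}
```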
Overall efficiency η o is primarily a function of the propulsive η pr and thermal η th efficiencies (Eq. (3.3)), as the transmission efficiency η tr is generally close to 1.0:
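Equation (3.3) is then:

```latex
\eta_o = \eta_\mathrm{th}\,\eta_\mathrm{pr}\,\eta_\mathrm{tr}
\tag{3.3}
```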
Figure 3.3 shows a plot of thermal and propulsive efficiency trends spanning the history of jet engine development and highlights the mutual contributions of thermal and propulsive efficiency to the overall efficiency. It is also worth noting that even modern aircraft/engine combinations such as the Boeing 777/GE90 and more recently developed fuel-efficient aircraft including the Boeing 787 still leave room for future gains in both thermal and propulsive efficiency. It is also clear from the figure that the gains in overall efficiency are increased most by simultaneous improvements in the core engine and the propulsor. Section 3.6 will discuss some of the future concepts toward capturing those potential gains.
Figure 3.3 . Comparison of historical engine thermal (η th ) and effective propulsive (η p ) efficiency improvements. BPR, bypass ratio; LTO, landing and takeoff; η tr , transmission efficiency. (From Epstein, A. H. 2014. Aeropropulsion for commercial aviation in the twenty-first century and research directions needed. AIAA J. 52:901-911. Reproduced by permission of United Technologies Corporation, Pratt & Whitney.)
The gas turbine engine Brayton cycle ideal thermal efficiency η B is set by the pressure ratio of the cycle (PR, also known as the overall pressure ratio (OPR)):
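In its standard form (equation numbering assumed to continue the sequence above), this ideal efficiency is:

```latex
\eta_B = 1 - \frac{1}{\mathrm{PR}^{(\gamma-1)/\gamma}}
       = 1 - \frac{1}{\mathrm{TR}},
\qquad \mathrm{TR} = \mathrm{PR}^{(\gamma-1)/\gamma}
```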
where TR is the temperature ratio and γ is the ratio of specific heats. This ideal thermal efficiency, however, assumes that the compression system components have no aerodynamic loss.
The actual thermal efficiency of the cycle will depend on the component efficiencies in both the compressor and turbine. As will be seen later in Section 3.6.4 of this chapter, much work at GRC has supported improvements in these component efficiencies. In addition, this ideal thermal efficiency requires increased turbine inlet temperatures to fully realize the thermal efficiency potential. This is illustrated in Figure 3.4, which shows trends of ideal thermal efficiency and specific power for various cycle pressure ratios and turbine inlet temperatures. This figure illustrates that there is a synergistic relationship between cycle pressure ratio and turbine inlet temperature. Higher pressure ratios, and the accompanying advantages in thermal efficiency, must be coupled with complementary increases in turbine inlet temperature, or the pressure ratio advantage is lost. Additionally, Figure 3.4(b) demonstrates that an increased turbine inlet temperature results in a higher power density, higher thrust-to-weight engine regardless of pressure ratio. This explains the especially strong emphasis in military engines on increased turbine inlet temperature. OPRs have continued to rise for both aircraft and power turbine applications, reflective of the direct impact of this parameter on fuel burn reduction.
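The ideal Brayton efficiency relation above can be sketched numerically. The OPR values below are illustrative round numbers, not data taken from Figure 3.4:

```python
# Ideal Brayton-cycle thermal efficiency versus overall pressure ratio (OPR),
# eta_B = 1 - 1/OPR**((gamma - 1)/gamma). Illustrative sketch only; real
# cycle efficiency is lower due to component losses, as noted in the text.
GAMMA = 1.4  # ratio of specific heats for air

def ideal_brayton_efficiency(opr: float, gamma: float = GAMMA) -> float:
    """Ideal thermal efficiency of a Brayton cycle at the given pressure ratio."""
    return 1.0 - opr ** (-(gamma - 1.0) / gamma)

if __name__ == "__main__":
    # Roughly spans early turbojets (OPR ~ 10) to modern turbofans (OPR ~ 50).
    for opr in (10, 20, 30, 40, 50):
        print(f"OPR {opr:2d}: ideal eta_B = {ideal_brayton_efficiency(opr):.3f}")
```

The diminishing returns at high OPR, and the fact that the ideal curve depends on OPR alone, are why the accompanying turbine inlet temperature increases in Figure 3.4 matter for specific power rather than for ideal efficiency.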
Figure 3.4 . Brayton cycle thermal efficiency and specific power trends.
Figure 3.5 shows a historical progression of increased turbine inlet temperatures enabled by advanced cooling strategies as well as advanced materials. Figure 3.6 demonstrates these materials improvements, and includes the more recent application of thermal barrier coatings to further increase turbine inlet temperature capability. It should be noted that thermal barrier coatings are a technology that work synergistically with turbine internal cooling to reduce the underlying turbine metal temperature. Beginning with uncooled metals before 1960, turbine inlet temperatures have progressively increased due to the introduction of increasingly more sophisticated cooling designs and advanced materials, including thermal barrier coatings. It can be seen from Figures 3.5 and 3.6 that approximately two-thirds of the historical increase in turbine inlet temperature has been enabled by improved turbine cooling schemes and about one-third by improved turbine materials. As discussed in Chapter 5, the introduction of ceramic-based turbine base materials will offer a step-change in turbine inlet temperatures in the future.
Figure 3.5 . Turbine inlet temperature trends with technology improvements. (From Ballal, D., and J. Zelina. 2003. Progress in aero-engine technology, 1939-2003. AIAA 2003-4412.2.)
Figure 3.6 . Turbine component material temperature capability improvements, showing increase in operational temperature of turbine components. Y-PSZ, yttrium partially stabilized zirconia. (Adapted from Schultze, U., C. Leyens, K. Fritscher, et al., 2003. Some recent trends in research and technology of advanced thermal barrier coatings. Aerosp. Sci. Technol. 7:73-80. Copyright 2003, published by Elsevier Masson SAS. All rights reserved.)
Figure 3.7 highlights the strong benefit of increased turbine inlet temperature enabled by this cooling and materials development in the increased core specific power and resultant thrust-to-weight of the engine. In addition to enabling the benefits of higher engine cycle OPR and thermal efficiency, raising turbine inlet temperature (T 4) increases the thrust-to-weight of the engine, which has dramatic benefits at the aircraft system level, particularly for military and high-speed flight. Consider also that as engine thermal efficiency and overall pressure ratio increase, the compressor exit temperature (combustor inlet temperature), T 3, increases due to increased compressive heating. For a fixed T 4, the amount of allowable energy addition in the combustor decreases and the thrust of the engine must decrease for a given engine core flow rate. Therefore, increasing allowable T 4 enables engines having acceptable thrust-to-weight and core power density. This also becomes important as the overall size of the engine is limited by airframe mounting considerations; for a given thrust, higher T 4 can improve integration of the engine with the airframe. This becomes particularly important with the rise of very high-BPR engines and the resulting large fans used to provide thrust. Similar to the OPR trend discussed earlier, the trend toward higher turbine inlet temperatures for both aviation and industrial ground power applications of gas turbines has continued to the present day because of the dramatic fuel burn benefits.
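The squeeze on combustor energy addition described above can be sketched with ideal (isentropic) compression; the inlet and turbine inlet temperatures below are assumed illustrative values, not data from any particular engine:

```python
# For a fixed turbine inlet temperature T4, a higher OPR raises the compressor
# exit (combustor inlet) temperature T3 and shrinks the allowable combustor
# temperature rise T4 - T3. Ideal isentropic compression assumed.
GAMMA = 1.4  # ratio of specific heats for air

def compressor_exit_temp(t2_k: float, opr: float, gamma: float = GAMMA) -> float:
    """Ideal compressor exit temperature T3 (K) for inlet temperature T2 and OPR."""
    return t2_k * opr ** ((gamma - 1.0) / gamma)

if __name__ == "__main__":
    T2, T4 = 288.0, 1800.0  # K; assumed sea-level inlet and turbine inlet temps
    for opr in (20, 40, 60):
        t3 = compressor_exit_temp(T2, opr)
        print(f"OPR {opr}: T3 = {t3:6.0f} K, combustor delta-T = {T4 - t3:5.0f} K")
```

As OPR rises with T 4 held fixed, the printed combustor temperature rise shrinks, which is the system-level argument for raising allowable T 4 along with OPR.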
Figure 3.7 . Engine-specific power and thrust increase with turbine rotor inlet temperature, T 4. HP is power, m is mass flow rate, γ is the ratio of specific heats, R is the ideal gas constant, T is gas temperature, and T 2 is compressor inlet temperature. (From Koff, B. L. 1991. Spanning the globe with jet propulsion. AIAA 91-2187.)
Another technology which can potentially reduce aircraft fuel burn is the idea of engine boundary layer ingestion (BLI). It can be shown by control volume analysis that the ingestion of airframe boundary layer fluid into the engines can result in a net aircraft fuel efficiency benefit if the detrimental effects of the resulting non-uniform velocity profile entering the engine can be mitigated. For aircraft architectures which are proposed to benefit from BLI, a smaller, higher power density core can enable a greater percentage of BLI along with a higher BPR, both of which can further reduce fuel burn.
The propulsive efficiency of any thrust-producing device is given by
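In its standard (Froude) form, consistent with the variable definitions that follow, this is:

```latex
\eta_\mathrm{pr} = \frac{2}{1 + c/v}
```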
where v is the flight speed of the vehicle and c is the velocity of the air ejected by the thrust- producing device. This equation demonstrates that it is more propulsively efficient to eject a large quantity of low velocity air rather than a smaller quantity of high velocity air for a given thrust requirement. Engine propulsive efficiency is strongly dependent on engine architecture and flight speed with high-bypass turbofan engines being the architecture of choice for cruise speeds typical of the large commercial aviation market.
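The architectural point above can be sketched with the Froude relation; the flight and jet velocities below are assumed round numbers for illustration, not measured engine data:

```python
# Froude propulsive efficiency: eta_pr = 2 / (1 + c/v). A lower jet velocity c
# (closer to the flight speed v), as in a high-bypass turbofan, yields higher
# propulsive efficiency than the high jet velocity of a turbojet.
def propulsive_efficiency(v: float, c: float) -> float:
    """Propulsive efficiency for flight speed v and exhaust jet velocity c."""
    return 2.0 / (1.0 + c / v)

V = 250.0  # m/s, roughly Mach 0.8 cruise (assumed)
print(f"turbojet-like (c = 600 m/s):    eta_pr = {propulsive_efficiency(V, 600.0):.2f}")
print(f"high-bypass-like (c = 320 m/s): eta_pr = {propulsive_efficiency(V, 320.0):.2f}")
```

In the limit c → v the efficiency approaches 1, but thrust per unit mass flow vanishes, which is why high propulsive efficiency demands moving a large mass of air slowly, i.e., a high bypass ratio.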
Improving aircraft fuel efficiency has been an economic consideration since the earliest days of aviation, beginning with the development of improved engines based on the reciprocating engine and propeller combination. Great strides were made in the early development of these propulsion systems, but it was the impetus given by World War II that accelerated the need for higher performance aircraft and initiated the formation of NASA Glenn Research Center. From its genesis in 1943 until its absorption into the new NASA Agency in 1958, GRC was instrumental in the development of turbojet and early turbofan engine technology. When Frank Whittle and Hans von Ohain independently began to develop the gas turbine jet engine, or turbojet engine, for flight application in the late 1930s, it sparked a revolutionary new means of aircraft propulsion, offering advantages for higher-speed flight over the conventional reciprocating engine and propeller combination. Both men realized that the jet engine was uniquely suited to providing power for flight because it was compatible with the flow of air through the engine, unlike a reciprocating engine (Conner, 2001).
In the early stages of turbojet engine development, the military was keenly aware of the obvious advantages of its high-speed flight capability and heavily supported development in Germany, Great Britain, and then in the United States. Because of fears of a German invasion, the British entered into an agreement to send plans for the Whittle engine to the United States in 1941. General Electric (GE) was chosen to develop the engine for production, owing to their expertise in superchargers. In March 1942, the General Electric I-A Whittle-derived engine ran on the test stand, and it flew in a two-engine arrangement on the Bell XP-59A Airacomet aircraft in October 1942. Although GRC had initially focused solely on air- and liquid-cooled reciprocating engines, the dawn of the jet engine age became a reality at the center with the delivery of the GE I-A for testing in the new Static Test Laboratory in 1943 (Dawson, 1991).
The GRC Altitude Wind Tunnel was completed in 1944 and was the first wind tunnel designed to test aircraft engines at simulated altitude conditions. The facility was large enough to mount both propellers and engines and was initially conceived for piston engine research, but was quickly converted to test turbojet and turboprop engines upon their introduction. Between 1944 and its conversion to a vacuum facility for rockets under the new NASA space directive in 1958, a great number of engine performance tests were conducted in the facility, leading to dramatic improvements in turbojet and turboprop engine fuel efficiency. Among the success stories attributable to work in this important facility were the solution of cooling problems for the R-3350 engine in 1944, GE TG-180 and TG-190 (also known as the J-47) engine and afterburner performance tests from 1945 to 1950, Westinghouse 24C-7 and 24C-8 engine and afterburner performance and cooling tests from 1950 to 1952, and Allison J-71 and T-38 engine tests from 1952 to 1955 (Dawson, 1991).
One of the primary design choices in the development of early turbojet engines was between axial and centrifugal compressors. The centrifugal compressor of the Whittle and von Ohain concepts was simpler and more reliable, but the multistage axial compressor offered potential advantages in efficiency and pressure ratio if the complex aerodynamics and mechanical design issues could be mastered. The axial compressor quickly became the compression system of choice in the United States, but not without initial troubles refining early designs. These early challenges with multistage axial compressors highlighted the need for component research to improve efficiencies and enable the higher pressure ratios promised by the architecture. GRC took a leading role in component development in its Compressor and Turbine Division during the 1940s and 1950s. Many single-stage and multistage compressor tests were conducted at the Center in the Engine Research Building (ERB), providing essential data to the industry to validate both industry- and NASA-developed models. The ERB was completed in 1942, again predominantly for piston engine component research, but was upgraded in 1944 to enable testing of compressors and turbines for jet engines. This unique facility is still in use today for testing of turbomachinery, combustion, heat transfer, and other engine components. Among the turbomachinery component tests conducted in ERB in the late 1940s and 1950s were numerous experiments related to the Wright J-65 and GE J-47 engine series. In fact, testing in the Altitude Wind Tunnel and ERB enabled the success of the GE J-47 turbojet (General Electric company designation TG-190) as the first axial-flow turbojet approved for civil use in the United States in 1949. It was used in many types of military aircraft, and more than 30,000 were manufactured before production ceased in 1956. It saw continued service in the U.S. military until 1978.
A culmination of this early period of compressor testing at GRC was published as a series of classified reports in 1956 and eventually declassified and republished in 1965 as “Aerodynamic Design of Axial-Flow Compressors,” NASA SP-36 (Bullock and Johnsen, 1965). This NASA publication has provided great value to the axial compressor design community for many years and is still considered the authoritative publication on multistage axial compressor design theory and practice.
A similar research trajectory was playing out in the area of fundamental turbine heat transfer and cooling research during the period 1943 to 1957. A key figure in this effort was Ernst Eckert. Eckert had joined von Ohain and other German scientists in the United States after World War II and worked initially at the U.S. Air Force’s Wright Field in Dayton, Ohio. In 1949 he came to GRC and led the Center’s efforts at improving turbine cooling methods. Referring to Figure 3.5, the work conducted at GRC in this time period was instrumental in the industry acceptance of increasingly complex turbine cooling methods, beginning with internally cooled hollow turbine blades and continuing toward more exotic cooling schemes such as film cooling and transpiration cooling.
In 1957, the Compressor and Turbine Division was disbanded as GRC moved toward nuclear and space research in response to the Soviet launch of Sputnik. Although turbojet and early turbofan engine development continued in the aviation industry during this period, it was not until 1966 that the Center turned its attention back to aeronautics research. By 1966, the commercial aviation industry had grown to the point that issues of capacity, congestion, noise, and pollution associated with airports had become a major issue. GRC was called upon to reinitiate aeronautics research and to help solve some of these growing issues.
In the early 1960s, turbofan engines began to emerge, an engine architecture that would dramatically reduce fuel burn over the next several decades. As compressor pressure ratios were increased to enable improved thermal efficiency for turbojet engines, designers began to incorporate dual-spool compressor concepts to allow for better efficiency through optimal design speed of each spool. Initially, this design change was not intended to create a low-pressure spool capable of providing an appreciable bypass flow and thrust, but gradually the additional benefits of increasing low-spool bypass flow and of incorporating a fan stage for thrust were introduced into production engines. Figure 3.8 shows cross-sectional schematics of the low-bypass and high-bypass turbofan engine architectures, the latter of which has become the predominant large commercial aircraft propulsion architecture today.
Figure 3.8 . Turbofan engine architectures. (a) Low-bypass turbofan. (b) High-bypass turbofan. T 2 is compressor inlet temperature and T 4 is turbine inlet temperature.
When NASA reinitiated aeronautics research in 1966, turbofan engine development became a large part of the research focus. The Quiet Engine Program looked at engine noise benefits that would be enabled along with the fuel burn reduction benefits of higher bypass turbofans. Meanwhile, high bypass turbofan engines were being developed by the industry. General Electric was developing the CF6 engine, often considered the first high bypass commercial turbofan engine. The GE CF6 was developed out of their military TF-39 engine, and both the high-bypass fan used in the CF6 engine and the installation technology for high-bypass turbofan engines in general were based on developments made by NASA and military programs (U.S. Government, 1991). Figure 3.2 demonstrates that a step change in engine thrust-specific fuel consumption was enabled by the introduction of the JT9D engine in 1970 and the CF6 engine in 1971 as well as subsequent high-bypass-ratio turbofan engines.
Interest in aircraft fuel efficiency increased dramatically in the 1970s because of the sharp rise in jet fuel prices and their effect on the airline industry. Oil (and jet fuel) prices remained relatively stable for a long period from 1945 through the early 1970s, followed by spikes that increased fuel prices by a factor of 4 in the mid-1970s and a factor of 8 by 1980 (Shetty and Hansman, 2012). This dramatic increase in fuel prices put the airlines under deep financial pressure, driving Pan American World Airlines to the brink of bankruptcy in 1974 and ultimately to bankruptcy filing in 1991 (Bowles, 2010).
In response to this rapid increase in fuel prices, NASA established the Aircraft Energy Efficiency (ACEE) Program in 1975. The goal of this program was to accelerate the development of various aeronautical technologies that would make future transport aircraft up to 50% more fuel efficient. The baseline engines used for this goal were the Pratt & Whitney JT9D-7A and General Electric CF6-50C. The ACEE Program was composed of six projects, three of which were related to engine technology and were led by NASA Glenn Research Center (Bowles, 2010).
The first of these projects was the Engine Component Improvement (ECI) Project. The goal of this project was to increase aircraft engine fuel efficiency by 5% through redesign of specific engine components. The second project was the Energy Efficient Engine (E3) Project. The E3 Project had more aggressive goals than the ECI Project in that the goal was to design a new engine rather than simply improve existing components. A goal of 12% fuel reduction (installed thrust-specific fuel consumption) compared to the GE CF6-50C was established along with improvements in direct operating cost, noise, emissions, and performance retention. An often-overlooked goal of the E3 Project was a 50% reduction in the rate of performance deterioration compared to the CF6-50C. Since aircraft engines have a long lifespan in the commercial aviation fleet, the rate of performance deterioration can have a dramatic impact on overall fleet fuel consumption. This aspect is missed if one only compares performance of new engines. The third GRC project under the ACEE Program was the Advanced Turboprop Project. This project proposed to take the dramatic step of incorporating large, unducted propellers as the main propulsor for high-subsonic (Mach number M approximately 0.8) commercial aircraft. It is well known that at lower flight speeds propellers offer lower SFC because of their low pressure ratio and high effective bypass ratio. However, at M = 0.8 these benefits typically diminish dramatically because of high relative Mach numbers, and acoustic issues become problematic. But with fuel prices spiking, the promise of reduced fuel burn engine concepts such as the advanced turboprop was very enticing to the aviation industry.
The E3 Project from 1975 to 1984 developed many engine core technologies that were introduced into engine products into the 1990s and beyond. Specifically, GE’s large GE90 engine (Fig. 3.9), which powers the Boeing 777 aircraft, benefited greatly from the E3 Project efforts. To summarize, the E3 Project goals were to (1) reduce SFC by 12%, (2) reduce SFC performance deterioration by 50%, (3) reduce direct operating costs by 5%, (4) meet Federal Aviation Administration noise regulations, and (5) meet EPA then-proposed emissions standards (Ciepluch et al., 1987). The E3 Project achieved higher propulsive efficiency by using a low-pressure-ratio fan and higher thermal efficiency by using higher overall pressure ratio, higher turbine inlet temperatures, and improved component efficiencies. These are common themes in the effort to reduce SFC, and continue to be the main drivers for such efforts even today under NASA projects such as Environmentally Responsible Aviation (ERA) and the Subsonic Fixed Wing (SFW) Project.
Figure 3.9 . General Electric GE90 engine cross section.
Some of the features of the GE E3 effort included a 10-stage, 23:1 pressure ratio compressor (note that the compressor pressure ratio is only a part of the cycle OPR; one must include the fan and low-spool pressure ratios to arrive at the OPR), a highly efficient two-stage high-pressure turbine (HPT) and five-stage low-pressure turbine (LPT), and component efficiencies above the previous state of the art. Along with increased cycle temperatures, reduced turbine cooling flows were achieved through a combination of materials development and cooling concept improvement (Davis and Stearns, 1985).
The Advanced Turboprop Project under ACEE had a vision for a 20% to 30% fuel consumption reduction relative to then-current engines. Major challenges existed in making such an architecture viable for large civilian aircraft. Like propellers, turboprops (or “propfans”) were most efficient at lower flight speeds; at high subsonic speeds, the high relative tip Mach numbers associated with such large-diameter propulsors degrade performance. The challenge was to enable highly efficient turboprop operation at Mach 0.8 flight speeds and higher altitude flight as well as to mitigate the noise issues inherent in unducted configurations, which have no nacelle to shield and absorb radiated noise. The technical solution to both the noise and high-speed efficiency problems was to use swept blades more representative of fan blades than typical propeller blade shapes, hence the commonly used term “propfan.” The swept blade geometry would result in a lower tip Mach number for a given flight speed and would potentially be able to offset the noise disadvantage of propfans (Bowles, 2010).
Although much progress was made on the development of a viable propfan through both the NASA/Allison/Pratt & Whitney/Hamilton Standard single-rotation concept and the later counterrotating GE “unducted fan” (UDF) concept (Fig. 3.10), various factors kept these concepts from coming to fruition in the market. First, potential negative public perception of propeller-like engine architectures made the airframers reluctant to deviate from their established commitment to turbofan engines, despite the large benefits in fuel burn reduction. Perhaps more importantly, fuel prices by 1986 had retreated back to nearly pre-1970 values in inflation-adjusted terms. This greatly reduced the urgency for the airline industry to adopt a radical change in engine architecture and ended heavy NASA investment in unducted configurations by the late 1980s. The idea would, however, return in the mid-2000s with the spike in fuel prices.
Figure 3.10 . General Electric unducted fan engine. (From Domke, B. 2007. The GE36 Unducted Fan (UDF) used two contra-rotating propellers with eight blades each. http://www.b-domke.de/AviationImages/Rarebird/0809.html (accessed April 28, 2017). Copyright 2007. With permission.)
Throughout NASA Glenn Research Center’s history, the use of the Center’s unique experimental capabilities for compressor and turbine testing and the emphasis on providing return to the nation on its taxpayer-funded research have resulted in the production of open experimental datasets. In the 1970s and 1980s GRC produced a number of compressor datasets that have been used by the turbomachinery community as a basis for the validation and development of turbomachinery analysis tools, including the growing field of computational fluid dynamics (CFD) codes. Laser Doppler velocimetry (LDV) was customized to measure the axial and tangential velocity inside the rotating passages of transonic compressors. The transonic fan NASA Rotor 67 was the first major dataset acquired with a single-channel LDV, which captured the shock and wake structure in an isolated transonic fan (Hathaway et al., 1986; Strazisar, 1985, 1989; Wood et al., 1987). Subsequently, NASA Stage 67 (Rotor 67 + Stator 67) was the first dataset that captured the unsteady fan rotor/stator blade row interactions with the same single-channel LDV system (Hathaway et al., 1987; Suder et al., 1987). A two-channel laser anemometer system was later developed and utilized to measure both axial and tangential velocity components simultaneously in NASA Rotor 37 (Reid and Moore, 1978). NASA Rotor 37 is perhaps the most widely referenced compressor geometry for such datasets, having been the basis for the American Society of Mechanical Engineers’ (ASME’s) International Gas Turbine Institute CFD blind test case. NASA Rotor 37 has an extensive set of LDV data across the rotor operating range from maximum flow to near-stall conditions at 70% speed (fully subsonic), 80% and 90% speed (transonic), and 100% design rotor speed (fully supersonic in the rotor frame of reference).
The data are best summarized in Suder (1996), and an example of the measurement detail is provided in Figure 3.11, which shows the shock boundary layer interaction at 70% span and shock/tip leakage vortex interaction at 95% span for a 0.5% span rotor tip clearance. The ASME blind test case results, shown in Figure 3.12, compare the NASA Rotor 37 experimental and CFD results of overall performance at 100% design speed as well as the radial distribution of pressure ratio, temperature ratio, and efficiency. The eight CFD codes in Figure 3.12 represented the state-of-the-art (SOA) prediction tools from around the globe in 1994. Note the discrepancies not only in the level of the performance parameter but also the shape of the radial distribution, which indicated the codes were not accurately predicting the flow physics of this compressor rotor in isolation. The Advisory Group for Aerospace Research and Development also used the NASA Rotor 37 benchmark data set to compare results from a large number of Navier-Stokes CFD codes (Dunham, 1998). These test case activities highlighted the large range of results produced by the various codes, some of which is attributable to how the codes were employed in addition to the underlying code algorithms and methods. These discrepancies between the CFD and experimental results have led to significant improvements in CFD mesh generation, turbulence model implementation, and tip clearance modeling.
Figure 3.11 . NASA Rotor 37 Laser Doppler velocimetry data. (From Suder, K. L. 1996. Experimental investigation of the flow field in a transonic, axial flow compressor with respect to the development of blockage and loss. Ph.D. Thesis (NASA TM-107310), Case Western Reserve University, Cleveland, OH.)
Figure 3.12 . NASA Rotor 37 American Society of Mechanical Engineers blind test case results (1994). CFD is computational fluid dynamics, m is the mass flow rate and m choke is the choking mass flow rate. (From Suder, K. L. 1996. Experimental investigation of the flow field in a transonic, axial flow compressor with respect to the development of blockage and loss. Ph.D. Thesis (NASA TM-107310), Case Western Reserve University, Cleveland, OH.)
Additional experimental test cases produced by GRC include the NASA Stage 35 (Van Zante et al., 2002), which incorporates a full compressor stage versus the rotor-only approach of the Rotor 37 test case. In addition, NASA built a 5-foot diameter (5 ft = 1.524 m) centrifugal compressor to make detailed measurements for code validation; the results are summarized in Hathaway et al. (1993). Centrifugal compressor scaling studies (Skoch and Moore, 1987) and code validation datasets (Skoch et al., 1997) were used to improve centrifugal compressor CFD codes and the resulting designs. In the turbine area, an example of one of the widely employed test cases is the NASA Transonic Cascade Heat Transfer dataset (Giel et al., 1999), which has been used to validate turbine heat transfer tools across the community (Fig. 3.13). For example, these endwall heat transfer data were instrumental in the development and assessment of the v2-f and Spalart-Allmaras turbulence models (Durbin and Reif, 2001).
Figure 3.13 . NASA Transonic Cascade Heat Transfer data.
NASA has also directly contributed to CFD analysis improvement through development of NASA in-house turbomachinery codes that have contributed to the body of knowledge in the field. A prime example of this contribution is the APNASA code (Adamczyk, 1984). This Navier-Stokes code offers the ability to accurately model the deterministic impact of blade rows throughout a multistage turbomachine without the massive time and expense that would be required to resolve the unsteady full-wheel flowfield for all stages. This is particularly important for multistage compressors, where such an unsteady calculation would be prohibitive, even with today’s computers. The APNASA code has been distributed to the U.S. aircraft and industrial gas turbine industry and is in common use today. Other NASA-sponsored Navier-Stokes CFD codes that have made a substantial impact on the turbomachinery analysis field include Glenn-HT, TURBO, H3D, ADPAC, and SWIFT. The Glenn-HT code development has focused on turbine cooling and heat transfer applications. It has incorporated the ability to resolve the complicated turbine cooling passages and film cooling holes that were discussed earlier in this chapter as methods to increase turbine inlet temperatures (Fig. 3.14). Several first-of-their-kind demonstrations of turbine heat transfer analyses have been carried out using the Glenn-HT code, including internal passage heat transfer, film-cooled external heat transfer, and turbine tip clearance heat transfer. The TURBO code was developed under GRC funding and enables full unsteady Navier-Stokes simulations of multistage compressors and turbines. This kind of unsteady analysis capability has found excellent application in studying the impact of distorted inlet flows on downstream fan aerodynamic performance.
Figure 3.14 . Turbine tip flow structures predicted with modern computational fluid dynamics (CFD).
APNASA, TURBO, Glenn-HT, H3D, and SWIFT were all recently validated against the NASA Rotor 37 and NASA Stage 35 test cases as part of a NASA turbomachinery code assessment activity. The results were reported at the 2009 AIAA Aerospace Sciences Meeting (Ameri, 2009; Celestina and Mulac, 2009; Chima, 2009; Hah, 2009; Herrick et al., 2009). The results indicated strong agreement among the codes for compressor speedline and stall prediction. Some of the codes captured detailed flow phenomena such as leakage flows, resulting in better exit flow profile prediction through advanced modeling techniques such as unsteady Reynolds-averaged Navier-Stokes equations, large eddy simulation, and detailed spatial resolution of small geometric and flow features. NASA CFD developments and applications to turbomachinery problems have contributed significantly to turbomachinery flow physics insight from synergistic computational and experimental investigations. Turbomachinery flow physics features such as shock structure, tip leakage flows, turbine cooling flows, blade row interaction, stall inception, and flow control have been studied and better understood through GRC efforts.
Recent NASA system studies conducted under the Subsonic Fixed Wing and Environmentally Responsible Aviation Projects indicate that the propulsion system plays a large role in the predicted improvement in aircraft fuel burn for the N+2 timeframe (engine technology readiness level (TRL) 6 by 2020 with potential entry into service (EIS) by 2025) (Fig. 3.15), as well as for the N+3 timeframe (about 5 years beyond N+2). In Figure 3.15, advanced engine technologies of all kinds, including both core and propulsor improvements, are included in the large bar representing engines. A smaller bar of 3.3% represents the potential benefit of boundary layer ingestion for the “accelerated technology development” configuration. Airframe technologies represented include large contributions from hybrid laminar flow control (a way to reduce airframe drag by reducing turbulent boundary layer shear), Pultruded Rod Stitched Efficient Unitized Structure—an advanced composite structure that may enable the hybrid wing-body (HWB) concept—and the large effect of the HWB concept itself as a fuel burn reduction technology owing to its improved lift-to-drag ratio relative to the traditional tube-and-wing configuration. Note that the reduction in aircraft size, drag, and weight due to engine fuel burn reduction is categorized under aircraft improvements for bookkeeping purposes. Therefore, engine technology plays a more significant role in providing fuel-efficient aircraft than Figure 3.15 portrays. This fact is evidenced by the latest trend to re-engine commercial aircraft such as the Airbus 320 and Boeing 737 as opposed to developing a new aircraft.
Figure 3.15 . NASA fuel burn reduction estimates for future aircraft. BLI, boundary layer ingestion; EIS, entry into service; HLFC, hybrid laminar flow control; HWB, hybrid wing-body; LFC, laminar flow control; PRSEUS, Pultruded Rod Stitched Efficient Unitized Structure; TE, trailing edge; TRL, technology readiness level.
The Subsonic Fixed Wing (SFW) Project’s N+3 studies have focused primarily on advanced aircraft configurations, which can serve as “collectors” for technologies that may apply to multiple long-term aircraft concepts. Among the key propulsion technologies identified by these N+3 studies are more compact, high-efficiency gas generators; higher bypass ratios enabled by various methods of distributed propulsion; boundary-layer-ingesting engines; and hybrid turboelectric engines using either battery or fuel cell energy sources, which have potential for significant reduction in emissions, fuel burn, and noise.
Much of the “leveling-off” of aircraft fuel burn reductions seen in Figure 3.1 from 1990 onward is attributable to the relatively stable price of jet fuel from 1985 to the early 2000s. However, increased energy prices tend to place a renewed emphasis on both alternative aircraft and engine architectures as well as more aggressive engine core technology development leading to higher overall pressure ratio and turbine inlet temperature (T4) cycles. In addition, global warming fears have risen during this past decade, a factor that places additional emphasis on reducing aircraft fuel burn and resultant carbon dioxide emissions. The following sections describe recent NASA and industry efforts to meet this need for the aviation sector.
Propulsion systems incorporating open rotors have the potential for game-changing reductions in fuel burn because of their low fan pressure ratio and thus increased propulsive efficiency. To reduce aircraft fuel burn, open rotors (or propfans or unducted fans, as they were known then) were studied in the late 1980s under the NASA Advanced Turboprop Project as a result of the aforementioned oil price spikes of the previous decade. The UDF, or GE36 engine, was one example of this development effort. The UDF was installed on the MD-80 aircraft as a flight demonstration of the technology. Because of limitations of the design and modeling methodology, it was necessary to compromise the GE36 aerodynamics so the engine could meet noise goals. When oil prices dropped in the 1990s, technology development in the area of high-speed propellers ended. Recent uncertainty in oil prices, in combination with climate change concerns and the desire for reduced emissions, has resulted in a renewed interest in open-rotor systems.
NASA has been collaborating with General Electric Aviation and the Federal Aviation Administration to explore the design space for lower noise while maintaining the high propulsive efficiency of a counter-rotating open-rotor system. Candidate technologies for lower noise were investigated, as well as installation effects such as pylon and fuselage integration. Advances in computational fluid dynamics over the last 20 years enable three-dimensional (3D) tailoring of blade shapes to minimize noise while still maintaining efficiency. These modeling advances increase the possibility of meeting both noise and efficiency goals simultaneously for the new generation of open-rotor designs. Figure 3.16 shows an open-rotor model recently tested at NASA Glenn Research Center.
Figure 3.16 . NASA-General Electric open-rotor testing configuration.
During the test campaign, six different blade sets, or unique combinations of fore and aft blades, were evaluated for their aerodynamic performance and acoustic characteristics. One of the blade sets, the Historical Baseline blade set, is representative of 1990s blade design. Aerodynamic and acoustic measurements of the Historical Baseline blade set were used as a benchmark dataset to improve modeling and simulation capabilities for open rotors. The other five blade sets represent modern designs that incorporate various 3D design features and other strategies to reduce the acoustic signature but maintain performance. The open-rotor test campaign is documented in Van Zante (2013) and Van Zante et al. (2014), and the following paragraphs provide a brief synopsis of the activity.
The open-rotor test program consists of three phases: (1) takeoff and approach aerodynamics and acoustics, (2) diagnostics, and (3) cruise performance. For phases 1 and 2 the Open Rotor Propulsion Rig (ORPR) is installed in the 9- by 15-Foot Low-Speed Wind Tunnel (9x15 LSWT) at GRC. The ORPR was completely refurbished for the current test entry and also underwent significant upgrades, such as a new digital telemetry system for rotor force and strain gage monitoring. For the third phase of testing the rig was installed in the 8- by 6-Foot Supersonic Wind Tunnel (8x6 SWT) for cruise performance testing.
NASA acquired a substantial amount of aerodynamic and acoustic data on a variety of blade geometries for an isolated configuration during the phase 1 testing. Figure 3.17 (Suder et al., 2013) compares the fuel burn and noise levels of the GE36 (1980s open rotor) and turbofan engines to a modern open-rotor design. It is clear from Figure 3.17 that the modern open-rotor designs provide significant improvements in both fuel burn and noise relative to the 1980s GE36 UDF design, thereby making them a viable propulsor concept for the next generation of fuel-efficient aircraft.
Figure 3.17 . Modern open rotor designs provide greater than 25% reduction in fuel burn and about 15 EPNdB noise margin to the International Civil Aviation Organization Chapter 4 standard (ICAO, 2008). (From Suder, K. L., J. Delaat, C. Hughes, D. Arend, et al., 2013. NASA environmentally responsible aviation project’s propulsion technology phase I overview and highlights of accomplishments. AIAA 2013-0414. Work of the U.S. Government.)
The diagnostics program acquired a comprehensive, detailed data set, which is useful not only for modeling these systems but also for understanding how future progress is possible. Measurements were acquired in an isolated configuration as well as with the generic pylon installed upstream of the rotors. The pylon-installed data will be needed to assess the aerodynamic and acoustic penalties associated with an aircraft installation. Four different measurement techniques were applied during the diagnostics testing, each with a specific objective. The acoustic phased-array technique identified noise source locations on the blades as well as on the trailing edge of the pylon. Farfield acoustic data were acquired with the pylon installed to determine the acoustic “adder” that must be applied to account for a realistic installation on the aircraft. Pressure-sensitive paint was used to quantify the magnitude and infer information about the time history of static pressure fluctuations on the forward and aft rotor airfoils as well as the trailing edge of the generic pylon. Stereo particle image velocimetry, the fourth measurement technique, was used to quantify the velocity characteristics and trajectory of the forward rotor wakes and tip vortex in support of tone noise predictions. In addition, second-order statistics (turbulence intensities) were determined from the measurements in support of broadband noise predictions.
Phase 3 of the test campaign determined the rotor aerodynamic performance at a cruise Mach number of approximately 0.78 in the GRC 8x6 SWT. In addition, unsteady pressure field measurements were acquired near the rotor tips from a linear array of pressure transducers mounted in a translating plate. This type of data is useful in analyzing the rotor pressure field interaction with the aircraft fuselage.
The data gathered and understanding obtained from the testing will be instrumental in solving some of the challenges in making open-rotor systems viable. The future design intent is to use improved aero and acoustic tools to mitigate the installation effects. In order to perform a direct comparison of an open-rotor system to a high-BPR ducted propulsor, NASA designed a common aircraft platform to compare the tradeoff between fuel burn and noise reduction (see Hendricks et al., 2013). The NASA notional aircraft design was a modern 162-passenger airplane with rear fuselage-mounted engines, a cruising Mach number of 0.78 at 35,000 ft, and a mission range of 3250 nautical miles. A comparison of the fuel burn and noise for the open-rotor and ducted high-bypass propulsors is shown in Figure 3.18. The aircraft with the open-rotor propulsor provided an additional 9% reduction in fuel burn despite the increased weight of the engine, and at the expense of an increase of 7 dB cum in noise relative to the ducted propulsor, for this notional aircraft size and mission.
Figure 3.18 . Comparison of advanced turbofan and open rotor on common aircraft platform. BPR, bypass ratio; TF, turbofan; UHB, ultrahigh-bypass. (From Hendricks, E. S., J. J. Berton, W. J. Haller, et al., 2013. Updated assessments of an open-rotor airplane using advanced blade designs. AIAA 2013-3628. Work of the U.S. Government.)
In summary, the modern open-rotor designs provide significant margin to Stage 4 noise requirements and offer substantial reductions in fuel burn. However, installation effects and certification issues must still be addressed before open-rotor propulsion systems are installed on commercial aircraft. Also, it is unlikely that open-rotor systems will be able to match the acoustic margin of ducted systems because open-rotor systems by definition have no duct (and acoustic liner) and, as a result, have greater flow and acoustic interactions with the airframe. The next section discusses the development of the ultrahigh-bypass (UHB) ducted propulsor, where the question to consider is, “Will the modern geared-turbofan engine, once optimized, provide comparable fuel burn reductions as an open-rotor system?”
NASA’s aggressive noise and fuel burn reduction goals are driving aircraft engine designs to higher bypass ratios and larger fan diameters. Aircraft engine noise and fuel burn reduction are directly correlated to fan size, fan pressure ratio, and fan bypass ratio. As the fan size increases, there is a corresponding drop in fan pressure ratio and an increase in fan BPR. As fan size continues to increase, however, a minimum is reached: the larger, heavier nacelle produces more drag during flight, eventually overcoming the advantages of a larger fan. Hence, a technology paradigm shift, produced by introducing advanced fan and core technology, is needed to lower this minimum point. A shift of this type was produced by Pratt & Whitney (P&W) with their geared-turbofan (GTF) UHB engine design. UHB engines are defined as engines with a fan BPR equal to or greater than 12. NASA, in cooperation with P&W, has been investigating UHB technology over the last 20 years, but the GTF is the first generation of UHB engines that will see EIS with an aircraft manufacturer. The paradigm shift produced by the GTF is achieved by operating the fan and core in such a way as to optimize the performance of both. Direct-drive turbofans necessarily operate the fan and low-pressure turbine (LPT) at the same speed. At low fan speeds, the LPT operates at far off-design conditions, and its efficiency drops, increasing fuel burn. P&W introduced a gearbox into their GTF engine design that allows the fan and LPT to operate at different, more optimal, higher efficiency speeds, and so reduces fuel burn. As BPR increases, the ratio of the fan radius to the LPT mean radius increases. Consequently, if the fan is to rotate at its optimum blade speed, the LPT will spin slowly, so that additional LPT stages will be required to extract sufficient energy to drive the fan.
Introducing a planetary reduction gearbox with a suitable gear ratio between the low-pressure shaft and the fan enables both the fan and LPT to operate at their optimum speeds. A geared turbofan uses a larger fan that moves more air at a lower speed, allowing the same thrust as its nongeared counterpart, but with less energy expended.
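The speed mismatch that the gearbox resolves can be illustrated with a rough sketch. The fan radius, LPT mean radius, and preferred blade speeds below are illustrative assumptions only, not actual P&W design values:

```python
# Sketch: why a reduction gearbox lets the fan and LPT each run at its
# preferred speed. All numbers are assumed for illustration only.
import math

fan_radius_m = 1.0           # large UHB fan radius (assumed)
lpt_mean_radius_m = 0.35     # LPT mean radius (assumed)
fan_tip_speed_mps = 320.0    # efficient, low-noise fan tip speed (assumed)
lpt_blade_speed_mps = 420.0  # blade speed for efficient LPT work extraction (assumed)

# Shaft speed (rpm) each component would prefer in isolation
fan_rpm = (fan_tip_speed_mps / fan_radius_m) * 60 / (2 * math.pi)
lpt_rpm = (lpt_blade_speed_mps / lpt_mean_radius_m) * 60 / (2 * math.pi)

# A direct-drive engine forces both onto one shaft; the gearbox decouples them.
gear_ratio = lpt_rpm / fan_rpm
print(f"fan prefers {fan_rpm:.0f} rpm, LPT prefers {lpt_rpm:.0f} rpm")
print(f"required reduction gear ratio = {gear_ratio:.2f}:1")
```

With these assumed numbers the LPT can spin 3.75 times faster than the fan, avoiding the additional LPT stages a direct-drive design would need.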
Fan propulsive efficiency increases with decreasing fan pressure ratio, but direct-drive turbofans are limited in their ability to operate at very low fan pressure ratios. The GTF architecture can enable further reductions in fan pressure ratio compared with direct-drive turbofans, thereby increasing propulsive efficiency and reducing fuel burn. The fan pressure ratio curve for the first-generation GTF is between 1.2 and 1.5, but as the fan BPR increases the fan pressure ratio decreases. So the next-generation GTF will be required to operate at the lower end of the fan pressure ratio curve, and at a significant increase in fan BPR, to achieve the second paradigm shift necessary to reduce the fuel burn minimum point even further.
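The trend described above follows from the ideal (Froude) propulsive efficiency, eta_p = 2 / (1 + Vj/V0), where Vj is the jet velocity and V0 the flight speed. A minimal sketch with an assumed cruise speed and representative jet velocities (a lower fan pressure ratio implies a lower Vj):

```python
# Ideal propulsive efficiency: eta_p = 2 / (1 + Vj / V0).
# Lower fan pressure ratio -> lower jet velocity Vj -> higher eta_p.
# Flight speed and jet velocities are assumed for illustration only.
V0 = 230.0  # cruise flight speed, m/s (assumed)

for Vj in (500.0, 350.0, 280.0):  # decreasing Vj as fan pressure ratio drops
    eta_p = 2.0 / (1.0 + Vj / V0)
    print(f"Vj = {Vj:.0f} m/s -> eta_p = {eta_p:.2f}")
```

For these assumed values the propulsive efficiency climbs from roughly 0.63 toward 0.90 as the jet velocity approaches the flight speed, which is exactly the incentive for very low fan pressure ratios.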
The UHB engine technology associated with the first-generation P&W GTF was close to reaching NASA’s N+1 noise and fuel burn reduction goals, but additional technologies are needed to achieve the N+2 goals. Figure 3.19 illustrates the technology roadmap NASA is following and the additional technologies that will be needed not only for the propulsion system but for the aircraft system as well. Whereas the first-generation GTF operated at a BPR of around 9 to 12, the second-generation GTF will necessarily need to operate at a BPR from 15 to 18, possibly as high as 20, with correspondingly lower fan pressure ratios between 1.2 and 1.4. As a result, NASA and P&W have again teamed to develop propulsion and noise reduction technology for the next-generation GTF.
Figure 3.19 . Ultrahigh-bypass (UHB) propulsion technology roadmap. BPR, bypass ratio; EIS, entry into service; TRL, technology readiness level. (From Suder, K. L., J. Delaat, C. Hughes, et al., 2013. NASA environmentally responsible aviation project’s propulsion technology phase I overview and highlights of accomplishments. AIAA 2013-0414. Work of the U.S. Government.)
NASA and P&W have been collaboratively designing a scale model of the GTF Gen 2, with a 22-in. (56-cm) fan diameter, for testing in the GRC 9- by 15-Foot Low-Speed Wind Tunnel. This test will investigate new three-dimensional fan geometries and advanced inlet designs to increase propulsive efficiency and lower nacelle weight. At the same time, new variable-area nozzle (VAN) technologies are being investigated. Because of the wide range of flight conditions over which the UHB propulsion cycle must operate, the fan nozzle area is required to vary by as much as 50% to achieve the proper fan operating conditions. However, traditional VAN designs are heavy, and so NASA is investigating advanced, lighter weight designs using shape memory alloy technology. Investigation of advanced noise-reduction technologies is also in NASA’s plans to meet the aggressive noise goals. The next generation of over-the-rotor acoustic treatment (OTR) and acoustically treated soft vanes (SVs) is focusing on achieving 3 to 4 dB of noise reduction with a minimal impact on aerodynamic performance, optimally less than 0.5% in fan efficiency, including testing of the advanced OTR/SV designs using existing 22-in.-scale-model turbofan test hardware.
Embedded engines with boundary layer ingestion offer an additional fuel burn benefit of up to 5% to 10% because of their reacceleration of fluid slowed by the viscous drag of the vehicle. This technology benefits the propulsive efficiency of the vehicle as described in Equation (3.5) by reducing the jet velocity (c) compared to a podded engine and reducing the vehicle wake deficit (see Fig. 3.20). The potential benefit depends upon the percentage of the vehicle boundary layer ingested into the engines, so some concepts attempt to capture a larger percentage of this boundary layer by using distributed propulsors across the upper surface of the vehicle. Blended-wing-body vehicles offer an attractive platform for boundary-layer-ingesting engines because of their larger surface area, which results in a larger boundary layer and more flexibility in engine mounting on the upper surface of the lifting body. Figure 3.21 shows a concept for ingesting the boundary layer on the NASA-Boeing blended-wing-body aircraft. Figures 3.22 and 3.23 show additional BLI-related concepts, including the NASA-Massachusetts Institute of Technology-Pratt & Whitney “double-bubble” configuration and the NASA in-house Turboelectric Distributed Propulsion concept, respectively.
Figure 3.20 . Propulsion benefits of boundary layer ingestion (BLI), in terms of blade tip speed U relative to station 0, upstream of engine. (a) Conventional (jet) propulsion. (b) BLI propulsion.
Figure 3.21 . NASA-Boeing blended wing-body concept.
Figure 3.22 . NASA, Massachusetts Institute of Technology, and Pratt & Whitney double bubble aircraft concept.
Figure 3.23 . NASA turboelectric distributed propulsion concept.
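The wake-filling benefit illustrated in Figure 3.20 can be approximated with simple actuator-disk bookkeeping: for the same thrust per unit mass flow, ideal power scales with (Vjet^2 - Vin^2), so ingesting slowed boundary-layer air reduces the power required. The velocities, specific thrust, and boundary layer deficit below are assumed for illustration; the result is an ideal bound, not a measured benefit:

```python
# Simplified BLI power comparison (ideal actuator-disk bookkeeping).
# Thrust per unit mass flow: F = Vjet - Vin; ideal power per unit mass
# flow: P = 0.5 * (Vjet**2 - Vin**2). All numbers are assumed.
V0 = 230.0          # freestream / flight speed, m/s (assumed)
F_per_mdot = 70.0   # required specific thrust, m/s (assumed)

def power_per_mdot(v_in):
    v_jet = v_in + F_per_mdot          # exit velocity for the required thrust
    return 0.5 * (v_jet**2 - v_in**2)  # ideal power per unit mass flow

p_podded = power_per_mdot(V0)        # podded engine ingests freestream air
p_bli = power_per_mdot(0.85 * V0)    # BLI engine ingests slowed wake air
                                     # (15% velocity deficit assumed)

saving = 100 * (1 - p_bli / p_podded)
print(f"ideal BLI power saving = {saving:.1f}%")
```

The ideal saving for these assumptions (roughly 13%) exceeds the 5% to 10% quoted above because it ignores the fan-efficiency penalties of distorted inflow.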
One of the challenges for BLI engines, however, is the potential loss in fan efficiency and degradation of life due to the periodic distortion experienced by the rotating fan. NASA and United Technologies Research Center (UTRC) are jointly investigating fan designs that can mitigate this problem and flow control technologies that can make the fan inflow more uniform. The goal is to demonstrate an embedded integrated inlet and distortion-tolerant fan system that provides the identified aircraft benefits by achieving less than a 2% loss in fan efficiency while maintaining ample stability margin. The study used an existing NASA Research Announcement (NRA) sponsored blended-wing-body design, such as is depicted in Figure 3.21, to define the design constraints for the inlet boundary layer and the requirements for a relevant embedded engine configuration. NASA partnered with UTRC, Pratt & Whitney Aircraft Engines, and Virginia Polytechnic Institute and State University (Virginia Tech) through the NRA to explore the optimal design space and to design and build an integrated inlet and fan embedded system. A sampling of the relevant publications supporting this activity, covering the simulated aircraft boundary layer, the embedded inlet and distortion-tolerant fan design, and the aeromechanics analysis, includes Arend et al. (2012); Florea et al. (2012); Bakhle et al. (2012, 2014); Tilman et al. (2011); Ferrar et al. (2009); and Florea et al. (2009).
NASA is testing a distortion-tolerant fan with a relevant boundary layer inflow field in the 8- by 6-Foot Supersonic Wind Tunnel at GRC. The arrangement of this embedded propulsor experiment is shown in Figure 3.24, where a false floor was inserted in the tunnel to mount the inlet/fan hardware. Note the rods located far upstream of the embedded fan inlet to provide a thick inlet boundary layer. Downstream of the rods and upstream of the inlet, the false floor contains a porous section to provide bleed control to adjust the incoming fan/inlet boundary layer to simulate that of a HWB vehicle such as the one shown in Figure 3.21. The main objective of the test is to assess the ability of the fan to sustain high performance with minimal loss and to maintain a sufficient stability margin. The test is in progress as of this writing and is expected to finish before summer 2017. Through this effort, distortion-tolerant fan technology and system-level benefits will be validated, along with the design and analysis tools required to model the relevant physics.
Figure 3.24 . Boundary layer ingestion (BLI) fan test rig installed in NASA Glenn 8- by 6-Foot Supersonic Wind Tunnel (8×6 SWT). (a) Bars upstream of fan are used to thicken boundary layer, and downstream bleed plates are used to customize boundary layer upstream of fan inlet. (b) Close-up of the integrated inlet and fan installation in the 8x6 SWT.
The previous sections on open-rotor propulsors, ultrahigh-bypass engines, and boundary-layer-ingesting engines addressed improvements in propulsive efficiency. Returning to Figure 3.3, recall that it is imperative to make improvements in both propulsive efficiency and thermal efficiency in order to make the biggest impact on overall engine efficiency and the resulting fuel burn reductions. In this section the areas of NASA research and development to improve thermal efficiency are presented.
In the core turbomachinery area, the emphasis is on increasing the overall pressure ratio (OPR) of the compression system while maintaining or improving aerodynamic efficiency and increasing the turbine inlet temperature (T4) while reducing nitrogen oxide (NOx) emissions from the combustor. These are challenging goals, because in both cases these are competing constraints. Another challenge is related to the need for more compact, high-OPR, high-bypass-ratio engines. These competing demands require ever-smaller rear compressor stage blade heights along with increased combustor inlet temperature (T3) values. One potential solution to these demands is an axi-centrifugal compressor, whereby the rear axial stages of the multistage compressor are replaced by a centrifugal rear stage that would be able to operate at a higher efficiency for the small corrected mass-flow values required of such cycles. Higher temperature materials and/or innovative cooling schemes would potentially be required to enable this concept. NASA is currently studying this and other potential solutions to this challenging problem. NASA has also recently funded a set of NASA Research Announcement awards focusing on better understanding and mitigating turbine and compressor tip clearance flows, which can enable reduced aerodynamic loss and increased pressure ratio cycle engines. The awards are also producing experimental data for use in computational fluid dynamics validation efforts across the turbomachinery community, in the ongoing spirit of NASA-led development of turbomachinery experimental databases. Refer to Reid and Key (2015), Volino (2017), and Katz (2017).
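The coupling between overall pressure ratio and combustor inlet temperature T3 can be seen from the isentropic compression relation T3 = T2 * OPR^((gamma-1)/gamma). A minimal sketch with an assumed compressor-face temperature (real compressors, being less than 100% efficient, run hotter than this ideal bound):

```python
# Ideal (isentropic) compressor exit temperature vs. overall pressure ratio:
# T3 = T2 * OPR**((gamma - 1) / gamma). Inlet temperature is assumed.
gamma = 1.4   # ratio of specific heats for air
T2 = 288.0    # compressor-face total temperature, K (sea-level static, assumed)

for opr in (30, 45, 60):
    T3 = T2 * opr ** ((gamma - 1) / gamma)
    print(f"OPR {opr}: ideal T3 = {T3:.0f} K")
```

Even this ideal lower bound shows why higher-OPR cores push rear-stage materials and motivate the cooling schemes and centrifugal rear stages discussed above.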
Increasing compressor OPR drives the design toward either more stages or higher stage loading, in which case the overall efficiency of the compression system tends to suffer because of either increased wetted area and drag losses or increased boundary layer separation and mixing loss, respectively. Overall engine size and weight constraints, engine operability, and rotor dynamics issues can also limit the use of additional compressor stages, so often the solution to increased OPR is higher stage loading. The emphasis within the industry and in the NASA research programs is to push the component efficiency-loading curve higher such that either a higher efficiency can be attained at a given loading or a higher loading can be achieved for a given component efficiency.
The NASA Environmentally Responsible Aviation (ERA) Project focused on the compressor technologies to enable high-efficiency and high overall pressure ratio core engines. Specifically, the goal of the ERA highly loaded compressor activity was to increase efficiency and to increase pressure rise by 30% relative to the ERA baseline engine (GE90 engine on the 777-200) to achieve a 2.5% reduction in engine specific fuel consumption. Refer to Suder et al. (2013) and Van Zante and Suder (2015) for background on the NASA ERA propulsion activities. Two test and analysis campaigns explored the design space to improve the compressor OPR (blade loading) and efficiency without negatively impacting weight, length, diameter, and operability. The first test campaign (NASA ERA Phase 1) investigated the front two stages of a legacy high-pressure-ratio six-stage core compressor to determine what limits blade loading. The second test campaign (NASA ERA Phase 2) focused on two builds of the front stages of a new compressor design. A pictorial view of the design space explored is found in Figure 3.25. The dashed line represents the state of the art for blade loading (represented as the change in enthalpy divided by the square of the rotor tip rotational speed) and efficiency. As shown, the higher the blade loading, the more difficult it is to achieve high efficiency. Any compressor with a design point above the dashed line would represent a design that was better than the SOA.
Figure 3.25 . Compressor design space for Environmentally Responsible Aviation Phase 1 and Phase 2 relative to the state-of-the-art best current practices as indicated by the dashed line, representing the change in enthalpy dH_ave divided by the square of the rotor tip rotational speed U_tip^2. (From Van Zante, D. E., and K. L. Suder. 2015. Environmentally responsible aviation: propulsion research to enable fuel burn, noise, and emissions reduction. ISABE 2015-20209. Work of the U.S. Government.)
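The loading parameter on the vertical axis of Figure 3.25 can be evaluated as in the sketch below. The stage temperature rise, tip radius, and shaft speed are assumed values for illustration only:

```python
# Stage loading coefficient as plotted in Figure 3.25: psi = dH_ave / U_tip**2.
# All input numbers are assumed for illustration only.
import math

cp = 1004.5        # specific heat of air, J/(kg*K) (assumed constant)
dT_stage = 45.0    # stage total-temperature rise, K (assumed)
tip_radius = 0.25  # rotor tip radius, m (assumed)
rpm = 16000.0      # shaft speed, rev/min (assumed)

U_tip = tip_radius * rpm * 2 * math.pi / 60  # rotor tip speed, m/s
dH = cp * dT_stage                           # stage enthalpy rise, J/kg
psi = dH / U_tip**2                          # nondimensional loading
print(f"U_tip = {U_tip:.0f} m/s, loading psi = {psi:.3f}")
```

Raising psi at a given efficiency means extracting more pressure rise per stage; a design point above the state-of-the-art dashed line in Figure 3.25 is better than current practice.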
In ERA Phase 1, a legacy high-OPR compressor design that fell short of the efficiency design goals was investigated. This design pushed the SOA design space to higher blade-loading levels (pressure rise per stage) with increased efficiency relative to the best current designs. Unfortunately, the efficiency goals were not attained at this high blade loading (refer to Fig. 3.25).
The high losses were attributed to the front two stages of this highly loaded six-stage compressor design. The front two stages are transonic across the span, and therefore their performance is very sensitive to variations in the effective flow area, which can affect the location and strength of the passage shocks and, in turn, aggravate flow separations and low-momentum loss regions caused by shock and blade row interactions. Therefore, the goals in ERA Phase 1 were to isolate, analyze, and test the first two stages of a transonic SOA high-pressure compressor in order to (1) understand the flow physics that resulted in high losses, (2) characterize the blade row interactions and their impact on loss, and (3) validate the design methodology and capability of the prediction tools by comparisons with the experimental results.
NASA tested the first two stages using SoA research instrumentation to investigate the loss mechanisms and interaction effects of embedded transonic, highly loaded compressor stages. The test was run in the high-speed multistage compressor test facility, W7, in the Engine Research Building at the NASA Glenn Research Center. The inlet to the core compressor modeled the inlet conditions to an engine's high-pressure compressor (HPC), inclusive of the fan frame struts and the transition duct from the low-pressure compressor (LPC) to the HPC. The test plan focused on making steady and unsteady measurements for the single stage, and then again after adding the second stage, to enable evaluation of the performance and losses in each stage. This approach made it possible to separate the loss contributions of each stage and provided detailed data to define the inlet boundary conditions to the compressor.
For both the one- and two-stage configurations, detailed data were taken at 97% design speed, acquiring data from leading-edge (LE) instrumentation, wall static pressure taps, over-the-rotor Kulites (piezoresistive transducers that measure instantaneous pressure), and traversing probes. The results indicated that stage 2 was choking at a mass flow rate that prevented stage 1 from reaching its peak-efficiency point, leading to a stage-mismatch issue. The mismatch is thought to be due to a loss in the first stage that was not predicted by the design tools. Assessment of the Stator 1 LE measurements in both test configurations revealed that performance at this location is unaffected by the presence of the second stage; therefore, the major source of unexplained loss was the first stage of the compressor. For additional details and discussion of the CFD analysis and experimental results, refer to Celestina et al. (2012) and Prahst et al. (2015).
ERA Phase 2 utilized a completely new core compressor design strategy and leveraged lessons learned from the Phase 1 compressor design. The Phase 2 compressor was designed for increased efficiency and blade loading. Refer to Figure 3.25 and note that the Phase 2 compressor efficiency levels are higher than those of Phase 1 and that the blade-loading levels were increased relative to the best current designs, though not to the higher levels attempted in the Phase 1 design (discussed in the previous paragraphs). For ERA Phase 2, NASA tested the first three stages of a high-efficiency, high-OPR core compressor design in the same NASA facility as the Phase 1 testing. The Phase 2 compressor test campaign consisted of a Build 1 test and a Build 2 test, where the primary difference is that Build 2 was designed to achieve higher compressor blade loading (pressure rise per stage) at the same efficiency levels as Build 1, as shown in Figure 3.25. The higher blade loading of Build 2 provides an overall system benefit because it allows the compressor bleed locations to be moved farther upstream, thereby reducing the compressor work required to provide the bleed flow. Extensive CFD simulations agree both with one another and with the design intent. Build 2 testing is complete, and initial results indicate the compressor has met its design intent.
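The bleed-flow benefit described above can be sketched with a simple polytropic compression model: bleed air only absorbs compressor work up to its extraction point, so extracting it at a lower pressure ratio saves work. The pressure ratios, inlet temperature, and polytropic efficiency below are illustrative assumptions, not ERA design values.

```python
# Sketch of the system benefit of moving bleed extraction upstream.
# Assumptions (hypothetical): compressor inlet at 300 K, polytropic
# efficiency 0.90, bleed extracted at pressure ratio 15 (aft) vs 8 (forward).

GAMMA = 1.4      # ratio of specific heats for air
CP = 1005.0      # J/(kg*K), specific heat at constant pressure
T_IN = 300.0     # K, compressor inlet stagnation temperature (assumed)

def bleed_work(m_dot, pr_at_bleed, eta_poly=0.90):
    """Compressor work (W) invested in a bleed stream of m_dot kg/s
    extracted at a given pressure ratio (polytropic model)."""
    exponent = (GAMMA - 1.0) / (GAMMA * eta_poly)
    t_out = T_IN * pr_at_bleed ** exponent
    return m_dot * CP * (t_out - T_IN)

m_bleed = 1.0                                   # kg/s of bleed flow (assumed)
w_aft = bleed_work(m_bleed, pr_at_bleed=15.0)   # bleed taken far aft
w_fwd = bleed_work(m_bleed, pr_at_bleed=8.0)    # bleed moved upstream
print(f"work saved per kg/s of bleed: {w_aft - w_fwd:.0f} W")
```

Even with these rough numbers, the saved work per unit bleed flow is on the order of 100 kW per kg/s, which is why upstream bleed extraction shows up as a cycle-level benefit.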
The ERA goal is a 50% reduction in fuel burn relative to current-technology aircraft, while achieving a 75% reduction in landing and takeoff (LTO) nitrogen oxide (NOx) emissions below the Committee on Aviation Environmental Protection CAEP/6 standard requirements (Suder et al., 2013). Achieving this goal requires development of high-power-density, high-thermal-efficiency cores. High-power-density cores enable ultra-high-bypass (UHB) systems by increasing the bypass ratio with minimal changes in engine diameter. Not only does this enable UHB engines to be installed under the wing, but it also mitigates the drag and weight penalties associated with larger-diameter UHB engines. The technical challenges associated with high-power-density, highly efficient cores are that they result in (1) higher combustor inlet pressures and temperatures, which promote NOx production, and (2) higher engine exhaust temperatures and jet velocities, which increase noise and add weight.
The approach addresses two elements: (1) increasing engine OPR through advances in compressor technologies and (2) developing ceramic matrix composite (CMC) materials to increase the turbine inlet temperature T4 and reduce cooling flow. The following sections present the results along these two elements. The benefits of these technologies for reducing fuel burn are illustrated by the system study results shown in Figure 3.26 (Tong, 2010).
Figure 3.26. Specific fuel consumption (SFC) reduction due to increased overall pressure ratio (OPR) and increased turbine blade inlet temperature T41 as a function of reduced coolant flow. (From Tong, M. T. 2010. An assessment of the impact of emerging high-temperature materials on engine cycle performance. ASME Paper GT2010-22361. Work of the U.S. Government.)
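The underlying reason higher OPR reduces SFC can be sketched with the ideal Brayton-cycle relation, in which thermal efficiency rises monotonically with pressure ratio. Real engines fall well short of these ideal values; the numbers below are illustrative only.

```python
# Ideal Brayton-cycle thermal efficiency as a function of overall
# pressure ratio (OPR): eta = 1 - OPR^(-(gamma - 1)/gamma).
# Real-engine component losses reduce these values substantially.

GAMMA = 1.4  # ratio of specific heats for air

def ideal_thermal_efficiency(opr, gamma=GAMMA):
    """Ideal Brayton-cycle thermal efficiency for a given OPR."""
    return 1.0 - opr ** (-(gamma - 1.0) / gamma)

for opr in (30, 40, 50, 60):
    print(f"OPR {opr:2d}: ideal thermal efficiency = "
          f"{ideal_thermal_efficiency(opr):.3f}")
```

The diminishing returns visible in this relation (each OPR increment buys less efficiency than the last) are one reason the trade studies in Figure 3.26 also pursue higher T41 and reduced coolant flow rather than OPR alone.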
One of the constraints on ever-increasing OPR and T4 for reduced fuel burn is the increased emphasis on emissions. It was noted earlier that the Energy Efficient Engine Project anticipated revisions to the emissions regulations and included meeting those regulations in its goal set. Since then, emissions standards have become even more stringent because of local air quality concerns near airports. Specifically, NOx emissions are a major concern, and this presents a challenge for increasing OPR and T4, as shown in Figure 3.27. For a given level of combustor technology, NOx emissions increase dramatically with increasing OPR and cycle temperature.
Figure 3.27. Trade space between engine overall compressor pressure ratio and nitrogen oxide (NOx) emissions. (From Suder, K. L., J. Delaat, C. Hughes, et al. 2013. NASA environmentally responsible aviation project’s propulsion technology phase I overview and highlights of accomplishments. AIAA 2013-0414. Work of the U.S. Government.)
The strategy is to advance combustor mixing technology in concert with OPR and T4 advances so as to maintain or reduce NOx while also reducing thrust-specific fuel consumption.
NASA is addressing these challenges of higher OPR and higher T4 through a combination of materials development, compressor testing, and computational analysis. In the materials area, high-temperature CMC combustor, turbine vane, and engine nozzle components are being developed to allow higher engine temperatures and reduced cooling flow requirements. Reducing the cooling flow in the high-pressure turbine (HPT) vane additionally reduces NOx emissions by lowering the combustor exit temperature required for a given turbine rotor inlet temperature and by freeing coolant for use in the combustor dilution jets. The plan is to advance the technology readiness level (TRL) of CMC components by designing and fabricating larger, more complex models than have previously been demonstrated and by testing these models in a relevant environment in NASA and partner laboratories.
The NASA Glenn Research Center is continuing to develop advanced turbine cooling concepts, including an “antivortex” row of film-cooling holes with bifurcated exits (Fig. 3.28), which can offer dramatic improvements in film-cooling effectiveness with reduced cooling flows. A recent area of research is the optimized cooling of ceramic-based turbine materials, which have unique cooling constraints compared with metal parts. Because of their reduced tolerance for thermal gradients, CMCs and other ceramic-based turbine components may need to de-emphasize internal cooling and rely more on external film cooling. This combined cooling-and-materials problem continues the historical trend of synergistic turbine cooling and materials improvements toward reduced-fuel-burn engines. Development also continues on robust environmental barrier coatings (EBCs) for CMC components to protect the ceramic material from the erosive effects of high-temperature, water-laden gas.
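The usual figure of merit for film-cooling concepts such as the antivortex holes is the adiabatic film-cooling effectiveness, which can be sketched as follows. The temperatures below are hypothetical values chosen only to show the calculation.

```python
# Adiabatic film-cooling effectiveness:
#   eta = (T_gas - T_aw) / (T_gas - T_coolant)
# where T_aw is the adiabatic wall temperature with the film present.
# eta = 0 means no protection; eta = 1 means the wall is driven fully
# to the coolant temperature.

def film_effectiveness(t_gas, t_aw, t_coolant):
    """Return the adiabatic film-cooling effectiveness (dimensionless)."""
    return (t_gas - t_aw) / (t_gas - t_coolant)

t_gas = 1900.0      # K, assumed hot-gas temperature
t_coolant = 900.0   # K, assumed coolant supply temperature
t_aw = 1500.0       # K, assumed adiabatic wall temperature with film cooling

eta = film_effectiveness(t_gas, t_aw, t_coolant)
print(f"film effectiveness = {eta:.2f}")  # 0.40 for these assumed values
```

A cooling concept that raises this effectiveness allows the same wall temperature to be held with less coolant flow, which is the improvement claimed for the bifurcated-exit holes in Figure 3.28.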
Figure 3.28. NASA antivortex film-cooling concept with bifurcated exits.
The NASA Subsonic Fixed Wing Project has recently initiated a number of research awards focused on enabling continued improvements to engine OPR and thermal efficiency through a better understanding and mitigation of turbomachinery tip-clearance and endwall flow losses. As engine cores increase in OPR and shrink in size to enable further increases in turbofan bypass ratio (BPR), the tip, endwall, and leakage flows become dominant sources of loss in the engine core. Advocacy for this research topic was strengthened through a series of turbomachinery white papers developed under the auspices of the NASA-led Turbomachinery Technical Working Group, which continues today as a forum for collaboration among NASA, industry, universities, and other U.S. Federal Government agencies.
The history of the NASA Glenn Research Center (1943 to present) coincides with an era of dramatic improvement in aircraft fuel efficiency and performance. The Center has contributed greatly to this improvement through full engine testing, engine component testing and development, analytical tool and model development, and fundamental research providing flow-physics insight and computational fluid dynamics validation in partnership with the aircraft engine industry. Beginning with the early reciprocating engines with propellers and progressing through the development of turbojet, turbofan, and potential unducted fan concepts, GRC has played a leading role in advocating for new engine architectures in fundamental and applied research programs. Through this partnership with industry, fuel burn per passenger-mile and engine specific fuel consumption have been reduced by more than 50%, with similar improvements envisioned in the coming decades through component, engine, and aircraft concepts championed by GRC and its research staff. This chapter has summarized a high-level view of these contributions and how they have been achieved through a consistent focus on concept development and research grounded in the underlying physics of jet propulsion and improved aircraft engine efficiency.